don’t ask, don’t tell…

Posted on Friday 19 September 2014

It looks as if the Windy City of Chicago is taking the lead in figuring out if people are depressed without even asking them. Dr. Robert Gibbons of the University of Chicago is planning an iPhone App that lets the cloud figure it out [insider trading…], and now Dr. Eva Redei of Northwestern’s Feinberg School of Medicine adds a blood test to the mix, launched in the pages of Time Magazine, no less, …
TIME Magazine
Health Mental Health/Psychology
by Alice Park
Sept. 16, 2014

As with any disease, detecting depression early is critical for reducing suffering and for finding an effective course of treatment. Now, in a study released Tuesday, scientists led by Eva Redei at Northwestern Medicine say it may be possible to test for depression in the blood—and figure out which patients will benefit most from behavior-based therapy as a treatment.
First blood test to diagnose depression in adults
Date: September 17, 2014
Source: Northwestern University
Summary: The first blood test to diagnose major depression in adults has been developed, providing the first objective, scientific diagnosis for depression. The test also predicts who will benefit from cognitive behavioral therapy, offering the opportunity for more effective, individualized therapy. The test also showed the biological effects of the therapy, the first measurable, blood-based evidence of the therapy’s success and showed who is vulnerable to recurring episodes of depression…
But that’s not all. If you’ve been treated with Cognitive Behavior Therapy, either face-to-face or on the telephone, and you don’t know if it helped, their blood test will answer the question for you [or at least it panned out in the 64 subjects where they tried it]:
by E E Redei, B M Andrus, M J Kwasny, J Seok, X Cai, J Ho and D C Mohr
Translational Psychiatry. 2014 4:e442.
Published online 16 September 2014

An objective, laboratory-based diagnostic tool could increase the diagnostic accuracy of major depressive disorders [MDDs], identify factors that characterize patients and promote individualized therapy. The goal of this study was to assess a blood-based biomarker panel, which showed promise in adolescents with MDD, in adult primary care patients with MDD and age-, gender- and race-matched nondepressed [ND] controls. Patients with MDD received cognitive behavioral therapy [CBT] and clinical assessment using self-reported depression with the Patient Health Questionnaire–9 [PHQ-9]. The measures, including blood RNA collection, were obtained before and after 18 weeks of CBT. Blood transcript levels of nine markers of ADCY3, DGKA, FAM46A, IGSF4A/CADM1, KIAA1539, MARCKS, PSME1, RAPH1 and TLR7, differed significantly between participants with MDD [N=32] and ND controls [N=32] at baseline [q< 0.05]. Abundance of the DGKA, KIAA1539 and RAPH1 transcripts remained significantly different between subjects with MDD and ND controls even after post-CBT remission [defined as PHQ-9 <5]. The ROC area under the curve for these transcripts demonstrated high discriminative ability between MDD and ND participants, regardless of their current clinical status. Before CBT, significant co-expression network of specific transcripts existed in MDD subjects who subsequently remitted in response to CBT, but not in those who remained depressed. Thus, blood levels of different transcript panels may identify the depressed from the nondepressed among primary care patients, during a depressive episode or in remission, or follow and predict response to CBT in depressed individuals.
You can read the full paper in Translational Psychiatry, a peer reviewed, on-line only journal where the authors paid the price of admission:
Translational Psychiatry levies the following Article-Processing Charges [APC] [plus local taxes where applicable] per article accepted for publication. As the Creative Commons Attribution 3.0 Unported License grants greater end user rights including commercial reuse, papers published under this license will be charged at a premium APC. For more information on this license please see the press release.
FEE SCHEDULE
License Type                         Original Article         Correspondence
…Noncommercial-Share Alike…          £2,200/$3,600/€2,600     £670/$1,000/€800
…Noncommercial-No Derivative…        £2,200/$3,600/€2,600     £670/$1,000/€800
Commercial reuse                     £2,400/$3,900/€2,800     £700/$1,100/€900
Mickey @ 2:23 PM

still…

Posted on Friday 19 September 2014

 

Mickey @ 12:22 AM

a coup d’état…

Posted on Wednesday 17 September 2014

One of the great lessons of psychotherapy is the importance of formulating good questions along the way, even though the answers are often not available for a while. If the question is already on the table, when the answer happens to come along, you don’t miss it. It’s like finding that last piece of a vacation picture puzzle that fell on the patterned rug. One of those questions just sort of floating around in the background for me has been why the European Medicines Agency has been so sensible about Data Transparency. Nobody else has. They decided to release the data on approved drugs in 2010, and did just that until they were stopped by the courts [the AbbVie/InterMune suits]. Were they just good guys? I knew nothing of that story until this morning when I happened onto the brouhaha discussed in the last post [abuzz over there…]. Another question: what was the story behind Tamiflu? How did that epic get to be the definitive battlefield for the issue of Data Transparency? I was a latecomer to this party, and didn’t know any of that history.

Reading along this morning, I found that the efficacy of the drugs being stockpiled for a potential epidemic was an older question than I knew. Back then, the European Medicines Agency was under the Directorate-General for Enterprise and Industry! No wonder the approval of medications was weighted towards Industry! So when the last new administration took over in the European Union in 2009 [on a five year cycle], moving the EMA to the Health Commissioner was a reform move of some note:

    Health technology and pharmaceutical policy was transferred to the Health Commissioner in 2009 to, among other reasons, facilitate emergency preparedness [in response to the emergence of H1N1 and the alleged inability of DG Enterprise to provide the necessary leadership and coordination of vaccines] and to harmonise pharmaceutical governance along the lines of Europe’s Member States, all of whom manage pharmaceutical policy under their Ministries of Health.
    EPHA press release, September 14, 2014

Looking back five years:

    José Manuel Barroso, President of the European Commission, has announced the portfolio responsibilities for the next Commission. The new Health and Consumer Policy portfolio will now include responsibility for pharmaceutical products and medical devices. These changes will be reflected at the level of the directorate generals, with the Pharmaceutical Products and Cosmetics and Medical Devices Units F.2 and F.3, and consequently the European Medicines Agency, moved from DG Enterprise and Industry [ENTR] to DG Health and Consumers [SANCO].

    "We are certain that this governance change puts public interests and the health of Europeans at the centre of vital decisions affecting our health. With the responsibility for pharmaceutical and medical devices policies and for the European Medicines Agency too, the Health and Consumer Policy Commissioner is now better equiped to lead a consistent and coherent approach to public health policy and more specifically to ensure protection of patients and safety of medicines throughout the European Union.", said Monika Kosinska, Secretary General of the European Public Health Alliance. Moreover, "This bold decision by President Barroso demonstrated the power of political leadership and enables the European Commission to fulfill its Treaty responsibility as the guardian of public health.", she added.

    EPHA press release, November 27, 2009

And the EMA Data Transparency policy followed that change; it was a response to the whole question of the stockpiled flu medications. Was it really effective? was the question on the front burner. They were good guys after all, but it was in response to something – suspected interference in the process of drug approval by industry and an industry-friendly governmental agency. Here, Fiona Godlee, the BMJ editor, picks up the story:

    In 2010, the EMA announced a proactive policy on transparency and public access to the data on which the Agency bases recommendations that directly affect the health of European citizens. This policy was in line with an inexorable international trend towards greater transparency and with what MEPs and the European Council wanted to achieve as part of the new regulations governing clinical trials.

Their good-guy-ness was an end to a story, not the beginning. And the recent waffling on Data Transparency [fine summary…, a long hot summer…] came as something of a shock to people who knew the story:

    To the astonishment of observers and of activists who had been deeply involved in this positive development, the EMA made a spectacular retreat in recent months, coinciding with the arrival of the new head of its legal service [whose background was in the pharmaceutical industry]. The EMA explained its change of heart by saying that it had to take into account the Commission’s position in ongoing negotiations on transatlantic trade.

But now it has a more ominous look and feel. It appears to be part of another story that we didn’t know was coming – putting the European Medicines Agency back in the hands of the industrial arm of the EU. That’s the kind of background story that has probably been in the works since 2009, pushed by whatever forces the EFPIA and the pharmaceutical industry could muster [which I expect were impressive]. Campaign contributions? Who knows? Godlee continues:

    An incomprehensible step backwards. Your decision to transfer responsibility for drugs policy and the EMA back to the Directorate-General for Enterprise and Industry is a great disappointment, which none of the groups concerned about public health in Europe can understand. What are the reasons for this step backwards? What will it mean for the future direction of the EMA and for European patients?
    Fiona Godlee’s letter, September 16, 2014

Googling this issue as the sun came up this morning came up empty. But tonight, there’s plenty to read. It all essentially says "WTF?!" and it all seems to reach the same conclusion I’ve reached – that this is a behind-the-scenes reclaiming of dominion over the drug approval process by the pharmaceutical industry, long in the making – a coup d’état…
Mickey @ 11:00 PM

abuzz over there…

Posted on Wednesday 17 September 2014

Something’s up in Europe that has to do with Data Transparency. I don’t know enough about how the European Union works to understand it, but here’s why I say that something’s buzzing:
New York Times
By JAMES KANTER
SEPTEMBER 10, 2014

BRUSSELS — Jean-Claude Juncker, the incoming president of the European Union’s executive branch, nominated his team on Wednesday, which will face challenges including reviving the flagging euro-area economy, curbing reliance on Russian energy and controlling migration into the bloc. The nominations come as part of the changes in European Union governance that occur every five years, and follow the election in May of a new European Parliament. Mr. Juncker’s nominees are subject to approval by the Parliament, which is expected to hold hearings on each appointee — but can only accept or reject the entire slate, rather than approve some nominations while blocking others…
And here’s why I say it has to do with Data Transparency and the upcoming EMA Data Transparency decision that was postponed in July [see a long hot summer…]. The editorial below appeared in the BMJ on Monday. Unfortunately, it’s behind their paywall, but I purloined the pieces that seem to relate to Data Transparency:
A victory for profits over public health?
by Martin McKee and Paul Belcher
British Medical Journal. 2014 349:g5671 [Published 15 September 2014]

News of the allocation by the European Commission’s president elect, Jean-Claude Juncker, of portfolios to the remaining European Commissioners has been eagerly awaited. This is an intensely political process, with the largest member states vying among themselves for the main economic roles. Would Juncker follow established precedent, giving health a low priority by allocating the portfolio to one of the countries with the smallest economies and thus, in effect, least political power in the European Union? … Or would he break with precedent and allocate it on the basis of merit, choosing the commissioner whose expertise and experience most closely matched the role? This time it seemed at least possible that Juncker would choose the second course, given that the line up included a very experienced candidate, Vytenis Andriukaitis, a Lithuanian surgeon who had been active in his country’s struggle for independence and gone on to become a successful health minister. European health policy experts were delighted when Juncker chose this course; subject to approval by the European Parliament, Andriukaitis will take up his role in November 2014.

Yet very shortly afterwards it became clear that the news was not all that it seemed. The allocation of jobs was accompanied by a restructuring of the directorates general [DGs]—in effect the ministries of the European Commission—without any explanation for the change. Three of the key units within the directorate general for health and consumers [DG SANCO] were to be transferred to the directorate general for the internal market, industry, and entrepreneurship [DG ENTR]. These were the units responsible for medicinal products and medical devices, including authorisations; that for quality, safety and efficacy; and the European Medicines Agency. This move reversed changes made by the previous European Union president, José Manuel Barroso, who in 2009 had brought these responsibilities within DG SANCO after a campaign led by pan-European health and consumer groups. Then, DG ENTR was seen as unable to provide the necessary leadership in the face of H1N1 influenza, especially in respect to emergency preparedness. The 2009 reorganisation also reflected a recognition of the need to harmonise the European Commission’s structures with those of its member states, where policies on pharmaceutical quality and safety were typically led by health ministries.

Inevitably, the latest change has provoked a furious response among health policy experts. Peggy Maguire, president of the European Public Health Alliance, described it as a “potential disaster,” arguing that Juncker was placing health concerns in “second place to appeasing big business.” In the same vein Monique Goyens, director general of the European Consumer Organization, said, “This shift gives European consumers the signal that economic interests come before their health.” In contrast, the body representing the European drug industry was delighted. Richard Bergstrom, director general of the European Federation of Pharmaceutical Industries and Associations, welcomed the move, expressing his “trust [that] the people in these units have the integrity to continue to put patient safety first.” Critics of the transfer have three main objections…
A third concern relates to the secrecy of the drug authorisation process, as described in great detail by Ben Goldacre in his book Bad Pharma. Glenis Willmott, a British member of the European Parliament, recalled, “When I was negotiating the transparency laws for clinical trial results, it was DG Enterprise that wanted to water the rules down. Now they will be overseeing the European Medicines Agency as it implements the transparency regime, which is frankly concerning”

And then yesterday, Fiona Godlee [BMJ Editor] and Bruno Toussaint of Prescrire sent this letter to President Juncker:

16 September 2014

Prescrire and the British Medical Journal send a letter to Jean-Claude Juncker, President of the European Commission

To Jean-Claude Juncker, President of the European Commission

Dear Mr Juncker,

In 2009, President Barroso finally decided to place Commission policy on health products [medicines and medical devices] and the European Medicines Agency [EMA] under the responsibility of the Directorate-General for Health and Consumers rather than the Directorate-General for Enterprise and Industry. We and many others publicly welcomed this long-requested reform. It constituted an important guarantee that priority had been given to public health and patients rather than short-sighted economic and corporate interests.

An incomprehensible step backwards. Your decision to transfer responsibility for drugs policy and the EMA back to the Directorate-General for Enterprise and Industry is a great disappointment, which none of the groups concerned about public health in Europe can understand. What are the reasons for this step backwards? What will it mean for the future direction of the EMA and for European patients?

We have been watching the EMA’s work closely since its inception [1995]. It constantly and forcefully defends the interests of industry. Little attention is paid to the views of patients and health professionals. If the EMA’s policies are to be balanced, it must be more receptive to patients’ interests and the protection of public health. We will cite just one recent example.

In 2010, the EMA announced a proactive policy on transparency and public access to the data on which the Agency bases recommendations that directly affect the health of European citizens. This policy was in line with an inexorable international trend towards greater transparency and with what MEPs and the European Council wanted to achieve as part of the new regulations governing clinical trials. To the astonishment of observers and of activists who had been deeply involved in this positive development, the EMA made a spectacular retreat in recent months, coinciding with the arrival of the new head of its legal service [whose background was in the pharmaceutical industry]. The EMA explained its change of heart by saying that it had to take into account the Commission’s position in ongoing negotiations on transatlantic trade.

Put public health first. Experience has shown that the interests of pharmaceutical companies only coincide with the interests of public health when companies are encouraged to focus on real and pressing health needs, are obliged to evaluate their drugs sufficiently, and when their marketing activities are monitored. Too many medicines introduced on the European market offer no tangible therapeutic advantages for patients, and some are inferior to existing treatments.

President Juncker, bringing the EMA and industry even closer together endangers the health of European citizens. We feel that your decision was probably taken under the influence of commercial vested interests, but you still have the opportunity to take a better, more forward-looking decision, informed by a full understanding of the best interests of European citizens and their health.

We are counting on you and the world is watching.
    Fiona Godlee, editor in chief of the British Medical Journal
    Bruno Toussaint, director of la revue Prescrire

I can’t find any way in the world to read all of this without an overwhelming suspicion that it represents a behind the scenes move by the pharmaceutical industry [EFPIA]. But that aside, as the BMJ editorial says, it’s "A victory for profits over public health!"

In addition, here’s a news report from AllTrials, and a press release and letter from the European Public Health Alliance.
Mickey @ 7:05 AM

at the wrong meeting…

Posted on Tuesday 16 September 2014

In the Neuroanatomy I took in the 1960s, we learned the tracts of the peripheral nerves, the spinal cord, the cranial nerves, the motor/sensory systems. It was like the wiring diagrams of my stereo-building, ham radio days and I took to it like a duck to water. It was aimed at clinical neurology and grand fun. There was lots of other stuff in the brain with mysterious structure or not, and it got phrases like "thought to be involved with emotion" [but it was never on the tests]. Most of the neurophysiology and talk about neurotransmitters had to do with peripheral nerves and pharmacologic agents [like poisons or blood pressure pills]. If we talked about neurotransmitters [or much of anything else] in the grey matter, I don’t recall it. That chatter came ten years later in psychiatry, where no drug lecture was complete without what I call the "NIH memorial nerve ending" slide, recycling neurotransmitters of various ilks [later, I had such a slide of my very own].

When psychiatry proper exited the mind business in 1980, it gradually became almost obligatory in conversation to insert the words "brain diseases" in place of "mental illness" in many circles, and there were many versions of Dr. Insel’s "Psychiatry is Clinical Neuroscience" along the way. The group I described as the hanger-on-KOL-breakthrough-freaks in the last post spent an enormous fraction of our research money not just on drug studies, but also on various forays using the new tools of neuroscience [neuroimaging, genetics, etc] to the detriment of more perspicacious neuroscientists. They skipped steps under the banner of "translational medicine" in a race to product. A lot of the problem Neuroskeptic talks about in this piece came from certain academic psychiatrists who became self-proclaimed neuroscientists – naively chasing various pots of gold:
Discover
By Neuroskeptic
September 16, 2014

This is the abstract for one of the two talks that I gave last week in Búzios, Brazil for the SBNeC conference: “Why Is It So Hard To Think About The Brain?”…

Today, we are thinking – and talking – about the brain more than ever before. It is widely said that neuroscience has much to teach psychiatry, cognitive science, economics, and others. Practical applications of brain science are proposed in the fields of politics, law enforcement and education. The brain is everywhere. This “Neuro Turn” has, however, not always been accompanied by a critical attitude. We ought to be skeptical of any claims regarding the brain because it remains a mystery – we fundamentally do not understand how it works. Yet much neuro-discourse seems to make the assumption that the brain is almost a solved problem already.

For example, media stories about neuroscience commonly contain simplistic misunderstandings – such as the tendency to over-interpret neural activation patterns as practical guides to human behavior. For instance, recently we have heard claims that because fMRI finds differences in the brain activity of some violent offenders, this means that their criminal tendencies are innate and unchangeable – with clear implications for rehabilitation. Neuroscientists are well aware of the faults in lay discourse about the brain – and are increasingly challenging them e.g. on social media. Unfortunately, the same misunderstandings also exist within neuroscience itself. For example, I argue, much of cognitive neuroscience is actually based on [or, only makes sense given the assumption that] the popular misunderstanding that brain activity has a psychological ‘meaning’.

In fact, we just do not know what a given difference in brain activity means, in the vast majority of cases. Thus, many research studies based on finding differences in fMRI activity maps across groups or across conditions, are not really helping us to understand the brain at all – but only providing us with a canvas to project our misunderstandings onto it. Why do these errors arise? I see the origin of these misunderstandings as being a fundamental difficulty that we have in thinking about “the brain”. We are misled by an implicit mind-brain dualism that leads us astray. The problem is not so much that it is difficult to find answers about the brain, but rather that it is easy to ask the wrong questions.

What can neuroscientists do? There are solutions. Neuroscience should be based on clear, falsifiable hypotheses that seek to explain, rather than merely describe, neural and behavioral phenomena. The process of framing testable hypotheses helps us to escape from the misunderstandings. Much of neuroscience already does this, but not enough. As for the public and the media, it is important to remember that misunderstandings of neuroscience can have serious consequences, both directly and indirectly. Generally these errors consist in seeing brain science as more certain than it is, and giving it more authority than it should do in real life scenarios. This lends undeserved power and influence to ideas [e.g. political ideas] that wouldn’t otherwise seem convincing. Public discourse moves this away from rationality towards the appearance of rationality.

In conclusion, both neuroscientists and the public are subject to the same misunderstandings when trying to think about the brain. Neuroscience can advance when it is based more on formal hypothesis testing and less on intuition and interpretation. But the latter approach is common. And in the public sphere it can be dangerous.
At this point, I could go off on a diatribe about the difference between the Dogmatists and the Skeptics in ancient Greece. In fact, maybe I’ll do just that. I suppose if you have a favorite Greek philosopher, you can talk about him more than once. So I’ll borrow from an earlier short accounting for my diatribe·let:
    Pyrrho [Πύρρων, c. 360 BC – c. 270 BC]:
    Pyrrho was from Elis, in the western Peloponnese. He started life as a painter, but gravitated to Philosophy. He became one of the Philosophers that traveled with Alexander the Great on his campaigns of conquest and he was influenced by the Philosophers in the East that he met on those travels. When he returned to Greece, the dominant school of Philosophy was Dogmatism. We know a jaded version of Dogmatism, largely from the excesses of the Catholic Church centuries later. At the time of Pyrrho, Dogmatism was something lofty, like the "search for absolute truth." Pyrrho taught that there was no absolute truth, and his teachings became known as Skepticism. What we know of Pyrrho outside later writings about his philosophy are stories that we know aren’t true. They’re the jokes the Dogmatists made about him – parodies of his indecisiveness. They told the story that Pyrrho was walking down the road and saw a man fall face down in the mud. While Pyrrho pondered, the man died from asphyxiation. Or a story that his students followed him everywhere to make sure that he decided to eat [those old Greeks weren’t so great with jokes]. Since there were no absolute truths, Pyrrho taught that we had to accept relative truth, always maintaining a questioning attitude, vigilant for things that might cause us to revise our former approximations. We might call it healthy skepticism these days, and it’s the essence of the scientific method.
As I read through Neuroskeptic’s essay, I kept thinking that he was at the wrong meeting. I don’t know anything about Brazilian Neuroscience, but my guess is that his audience of neuroscientists was well schooled in the skepticism of the scientific method. Where this lecture should’ve been delivered was to the American Psychiatric Association. And while he was at it, a symposium for the grant reviewers at the N.I.M.H. would be in order. Acting under the guise of Translational Medicine, the NIMH has funded any number of beyond-speculative forays into the land of neuroscience – grants to prove acquired traumatic illness is genetic, $35 M to sequence antidepressants, finding biomarkers to predict who will respond to Zoloft vs Wellbutrin, neuroimaging anything that moves. It’s our psychiatric KOLs who need to hear the pearls of wisdom from a real neuroscientist like Neuroskeptic:

  • We ought to be skeptical of any claims regarding the brain because it remains a mystery – we fundamentally do not understand how it works.
  • media stories about neuroscience commonly contain simplistic misunderstandings – such as the tendency to over-interpret neural activation patterns as practical guides to human behavior.
  • much of cognitive neuroscience is actually based on [or, only makes sense given the assumption that] the popular misunderstanding that brain activity has a psychological ‘meaning’.
  • … many research studies based on finding differences in fMRI activity maps across groups or across conditions, are not really helping us to understand the brain at all – but only providing us with a canvas to project our misunderstandings onto it.
  • … misunderstandings of neuroscience can have serious consequences, both directly and indirectly. Generally these errors consist in seeing brain science as more certain than it is, and giving it more authority than it should do in real life scenarios.
Mickey @ 9:44 PM

short-list?…

Posted on Tuesday 16 September 2014


by Javier Arnedo, M.S.; Dragan M. Svrakic, M.D., Ph.D.; Coral del Val, Ph.D.; Rocío Romero-Zaliz, Ph.D.; Helena Hernández-Cuervo, M.D.; Molecular Genetics of Schizophrenia Consortium; Ayman H. Fanous, M.D.; Michele T. Pato, M.D.; Carlos N. Pato, M.D., Ph.D.; Gabriel A. de Erausquin, M.D., Ph.D.; C. Robert Cloninger, M.D., Ph.D.; and Igor Zwir, Ph.D.
American Journal of Psychiatry. Published in advance on September 15, 2014.

Objective: The authors sought to demonstrate that schizophrenia is a heterogeneous group of heritable disorders caused by different genotypic networks that cause distinct clinical syndromes.
Method: In a large genome-wide association study of cases with schizophrenia and controls, the authors first identified sets of interacting single-nucleotide polymorphisms [SNPs] that cluster within particular individuals [SNP sets] regardless of clinical status. Second, they examined the risk of schizophrenia for each SNP set and tested replicability in two independent samples. Third, they identified genotypic networks composed of SNP sets sharing SNPs or subjects. Fourth, they identified sets of distinct clinical features that cluster in particular cases [phenotypic sets or clinical syndromes] without regard for their genetic background. Fifth, they tested whether SNP sets were associated with distinct phenotypic sets in a replicable manner across the three studies.
Results: The authors identified 42 SNP sets associated with a 70% or greater risk of schizophrenia, and confirmed 34 [81%] or more with similar high risk of schizophrenia in two independent samples. Seventeen networks of SNP sets did not share any SNP or subject. These disjoint genotypic networks were associated with distinct gene products and clinical syndromes [i.e., the schizophrenias] varying in symptoms and severity. Associations between genotypic networks and clinical syndromes were complex, showing multifinality and equifinality. The interactive networks explained the risk of schizophrenia more than the average effects of all SNPs [24%].
Conclusions: Schizophrenia is a group of heritable disorders caused by a moderate number of separate genotypic networks associated with several distinct clinical syndromes.
As much as I like to talk about Data Transparency and try to demystify some of the calculations used in the Clinical Trials, when it comes to the modern genetic studies, I’m lost at sea. They’re doing calculus and I’m still on the multiplication tables just over counting on my fingers. I know that GWAS stands for genome-wide association study, and I sort of know what a SNP [single-nucleotide polymorphism] is. Beyond that, I’m only dimly able to understand how a given study was conducted and what the authors make of their findings, but not able to speak to the quality of the work. So in this case, I’ll say what a SNP is, because these geneticists can talk of little else, and if you talk genetics, you have to throw in SNP [pronounced "snip"] every so often to qualify yourself. Remember that the genome [the sequence of nucleotides on the chromosomes] fills a pretty large cabinet and makes for some really monotonous reading.

A snip is a place where some segment on the genome differs among individuals or between paired chromosomes by only a single nucleotide. These areas with only one nucleotide difference are easily found and point to areas of variation within and between individuals. Many classic genetic disorders are caused by SNPs [Sickle Cell Anemia, Thalassemia, Cystic Fibrosis, etc]. That’s it for what I know…

The reason for mentioning this study is that this group took what seems to me to be a unique tack in analyzing the data. They identified SNP groupings that were associated with Schizophrenia compared to controls, identifying genotypes [SNP "sets"] that carried a high risk. But then they did something new. They queried a database of Schizophrenic patients who had been in studies where there was a lot of clinical information [eg C.A.T.I.E.]. They classified those cases by phenotype [clinical characteristics].

Then they tested the different genotypes [SNP "sets"] against the phenotypes [separated clinically] and found correlations that led them to conclude that "Schizophrenia is a group of heritable disorders caused by a moderate number of separate genotypic networks associated with several distinct clinical syndromes."

The classification of clinical cases they used was way more complicated than that of the days of Kraepelin or Bleuler, and nothing like the DSM-anything. And their method of correlating genotype [genetic SNP sets] with phenotypes [their clinical classification] was no simple spreadsheet, but rather a complex "architecture" [see their figure above]. It says that Schizophrenia is genetic – that it is not an entity, but many entities [see a guest post from Sandy Steingard…] – "the schizophrenias." The study implies that the clinical presentation and course of these clinical syndromes define unique "diseases," each with a physical basis. And that’s a mouthful.

A couple of months ago, there was another Schizophrenia genetic study in Nature that got wide attention [Biological insights from 108 schizophrenia-associated genetic loci] [see Director’s Blog: Mapping the Risk Architecture of Mental Disorders]. The recent write-up of that study in the PsychiatricNews [Dozens of Schizophrenia Risk Loci Identified] was effusive, making analogies to the spires of Dubai and peppered with guest expert speculations. On the other end of the spectrum, Joanna Moncrieff [Critical Psychiatry Network and Mad-in-America] had a different take [A critique of genetic research on schizophrenia – expensive castles in the air], pointing to the massive spending on genetics research with little likely practical yield compared to the lack of attention to the psychosocial needs of these patients. I envy the clarity of such dichotomous convictions. Alas, I always find myself in the middle [in their lifetimes…], and want to insert an AND for every EITHER/OR in these controversies.

For forty-plus years, I’ve seen those slides and figures in books saying that the psychoses have a strong genetic component – and clinical experience has borne that out with patients, particularly with Manic-Depressive Illness [classic cases rather than the modern bastardizations]. But just knowing the familial patterns is a factor in diagnosis rather than treatment. For what it’s worth, I believe it, in spite of being of a psychological bent as a clinician. And like almost everyone, when they cracked the genome, I wondered not-if-but-when the genetic associations would get clarified. The early attempts were disappointing. I think the DSM-5 Task Force was counting on such breakthroughs to justify their "biological" classification scheme – and the research just didn’t come through, much to their chagrin.

But the geneticists with their gaggles of data and armies of researchers seem to be refining their techniques and creativity in looking at this problem of complex inheritance in psychiatric disorders. I expect the serious scientists are doing cartwheels over these recent results [and I cringe in anticipation of the response of the hanger-on-KOL-breakthrough-freaks]. But if this study holds up under scrutiny, it might just be Stockholm short-list material…

Update: It’s fitting that this study came from Washington University in Saint Louis, epicenter for the 1980s neoKraepelinian revolution. Here’s an interview with author Dr. Cloninger [Schizophrenia not a single disease but multiple genetically distinct disorders]. No equivocation on his part…
Mickey @ 7:29 AM

the other guy…

Posted on Sunday 14 September 2014

The gist of this posting by the APA president is a complaint that the current SAMHSA Strategic Plan leaves out the medical specialty of psychiatry. And he correctly notes the influence of the Recovery movement in the SAMHSA document:
From the President
PsychiatricNews
by Paul Summergrad
September 11, 2014

… At the same time, when looked at from the outside — from the perspective of observation and syndromic coherence — these disorders are highly correlated with genetic and neurobiologic abnormalities as well as disruptive environmental events. Two recent studies [among many] — one on the genetics of schizophrenia based on the largest pool of genomewide studies and a smaller study of the genetic and neuropathologic basis of autism — reinforce this understanding.

Why is the tension between the experience of these illnesses and their etiology so important? In part because it is easy for psychiatry, and more broadly the mental health community, to come down strongly on one side or the other of this divide. This split can influence the research and public policy we need to develop treatments for mental illness and provide access to care.

Moreover, when this divide — which is sometimes inaccurately framed as a battle between the recovery community and a misunderstood “medical model” — affects the Substance Abuse and Mental Health Services Administration [SAMHSA], the impact can be of even greater significance. SAMHSA is the principal federal agency dedicated to leading public-health efforts to improve mental health and reduce the impact of mental illness, including substance abuse, on America’s communities. As a central component of its directive, SAMHSA recently released a draft of its FY 2015-2018 strategic plan, titled “Leading Change 2.0: Advancing the Behavioral Health of the Nation”.

There is much to support in SAMHSA’s draft. Efforts to reduce disparities in access to care, which disproportionately afflict minority communities, are laudable, as are efforts to reduce the number of those with mental illness in the criminal justice system. However, for an agency with such a broad responsibility, the proposed plan is striking for what it leaves out: a focus on the appropriate medical care of patients with serious mental illness and the development of a physician workforce that is essential for their care. In APA’s letter to SAMHSA Administrator Pamela Hyde, J.D., responding to the draft strategic plan, our CEO and medical director, Saul Levin, M.D., M.P.A., noted, “APA is strongly concerned about the lack of explicit recognition of the psychiatric treatment needs for Americans suffering from mental illness and substance use disorders, and in particular for the 13 million Americans who suffer from debilitating serious mental illnesses [SMI].” In addition, we urged SAMHSA to develop explicit goals for evidence-based medical care for serious psychiatric illnesses…
I like Dr. Summergrad. He doesn’t lead with the arrogance of his recent predecessors, doesn’t make assumptions about the primacy of psychiatry in the mental health cosmology, and while he wears a white coat – he wears it loosely. But it will take more than a long absent humility to achieve "explicit recognition of the psychiatric treatment needs for Americans suffering from mental illness and substance use disorders, and in particular for the 13 million Americans who suffer from debilitating serious mental illnesses" primarily because for thirty years, psychiatry has itself had a monocular view of those needs. He speaks against SAMHSA coming down on one side of "the divide," yet psychiatry itself has helped to create and both actively and passively nurtured the division.

Absent some lofty rhetoric along the way, psychiatry proper has offered the same treatment option for chronic psychosis since the arrival of Thorazine – antipsychotic medication maintenance – and ignored the social problems. While the long-term effects of the medications are decried, the recommendations haven’t changed to take them into account. Psychiatry’s other offering has been an expensive and non-productive research effort to chase down a biological etiology and/or new biological treatments. In practical terms, that effort has yielded nothing. During this period, the deinstitutionalization of mental patients has resulted in the reinstitutionalization of mental patients [what they’re for…, justification for “what they’re for”…], again often decried in rhetoric, but otherwise unaddressed. So the search for etiology and for new treatments has failed and the actual fate of these patients has been ignored.

The Recovery Movement arose outside of psychiatry, and is premised on the idea that an intense focus on the interpersonal, societal, and cultural needs of the chronically mentally ill will lead to recovery. In this model, traditional diagnosis takes a back seat [or is considered detrimental]. It is the official approach of SAMHSA [the Substance Abuse and Mental Health Services Administration], focusing on block grants to the States for community programs following the Recovery model.

For many, the Recovery Movement has the additional meaning of recovering from psychiatry itself – hospitalization, antipsychotic medication, commitment, the ‘disease model’, the ‘medical model’, etc. And many of them feel that this request, "we urged SAMHSA to develop explicit goals for evidence-based medical care for serious psychiatric illnesses" is part of the problem rather than the solution. So the "divide" is increasingly an active process being fueled from both "sides" – often driven by intense ideological conviction. Meanwhile, as this controversy rages, resources dwindle and our prisons fill [see George Dawson’s recent Shut Down The Psychiatric Gulags – Don’t Build More!].

If there’s anything that can be counted on in this seemingly endless harangue, it’s that whatever anyone says about it, it’s mainly about what the other guy says or does being wrong. I doubt that discussions of this topic can or will be meaningful until there’s a clear consensus that no-one really knows quite what to do at this point. Until then, the dialog will continue to be about the other guy, and like a pendulum, any balance point will be a virtual position seen only in passing…
Mickey @ 10:18 PM

beyond not inert…

Posted on Saturday 13 September 2014

I kind of liked writing the last post [about my connectomes] and particularly the discussion that followed. I realize that over recent years I’ve written a lot about Clinical Trials, but my focus has been on the ways they’ve been misreported or distorted in the service of commerce. I’ve learned a lot about bias – for example Publication Bias [only publishing studies with the desired outcome]. I never realized the impact of leaving out negative studies. It’s analogous to omitting unwanted values in a single study – something you could never get away with. The more recent emphasis on meta-analyses has us looking at the family of studies as the database rather than focusing on any single trial – highlighting the impact of Publication Bias.
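To make the analogy concrete, here’s a minimal sketch [Python, with made-up numbers – not data from any real trial program] of what happens to a pooled estimate when only the "positive" studies make it into print:

```python
import random
import statistics

random.seed(1)

# Hypothetical example: 20 small trials of a drug whose true effect is exactly zero.
# Each trial's observed effect size is just sampling noise around that true zero.
trials = [random.gauss(0.0, 0.25) for _ in range(20)]

all_trials = statistics.mean(trials)                         # what a full meta-analysis would see
published  = statistics.mean([t for t in trials if t > 0])   # only the "positive" trials get published

print(f"pooled effect, all 20 trials:        {all_trials:+.2f}")
print(f"pooled effect, positive trials only: {published:+.2f}")
# The second number looks like a real effect, even though the drug does nothing.
```

Same arithmetic as throwing out the unwanted values in a single study – it just happens one publication at a time.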

Randomized [placebo-controlled, double-blinded] Clinical Trials [RCTs] became a requirement for FDA Approval of new drugs in 1962 following the days of Thalidomide – adding proof of efficacy to the FDA’s original charge of ensuring safety. I don’t really know the timeline of how FDA Approval moved from meaning not inert to being an actual endorsement – how we came to be reading things like "Prozac is now FDA Approved for the treatment of Major Depressive Disorder" in pharmaceutical ads [and in the Financial Times]. The FDA standard, not inert, is hardly a reasonable clinical treatment standard. It’s a mathematical or a chemical standard – separation from placebo certified by probability estimates. And in more cases than we’d like to admit, even those reported probabilities were improbable. Over time, this reform move aiming to curb corruption became corruption’s major super·highway.

Randomized Placebo-Controlled, Double-Blinded Clinical Trials are intuitive. It makes perfect sense to look on a blinded comparison between drug versus no drug as the essence of evidence-based medicine. It’s hard to imagine an alternative. How did we even evaluate therapeutics prior to Lind’s comparison of citrus fruit to other treatments in Scurvy? One answer is that we went the other way. Instead of using the response of groups [as in a clinical trial] to inform the treatment of the individuals that come to our offices, we used what we learned from an individual to treat the groups that followed. My favorite disease is the one amateur carpenters get by hitting the thumb with a hammer. The blood under the nail causes excruciating pain, as does the drill kit in the ER used to release the pressure. Ever since an ER nurse showed me that burning a hole in the nail with an unfolded paper clip heated red hot was painless and got the job done, I’ve applied that treatment to every single subsequent case with stellar results. No Phase 3 Study required. It always works!

RCTs are really for evaluating safety, and then trying to sort out the real usefulness of treatments that work sometimes, or somewhat, or sometimes somewhat. I’ve been thinking the last couple of days that I might be winding up to go through some more of the math that people use to look at RCTs. It’s possible that something kind of amazing might be about to happen – data transparency. We’ve been thinking that if we had access to the actual data, we could make things like psycho·pharmacology into something right-sized. We’ve lived so long with a Scylla-and-Charybdis level of black and white thinking, it’s interesting to ponder a world where the discourse is scientific rather than ideological, sectarian, financial, or just plain nasty.

A year ago, there was an article in the Lancet by Iain Chalmers, one of the founders of the Cochrane Collaboration, and Patrick Vallance, a President at GSK. It acknowledged helpful comments by Ben Goldacre in preparing the paper. They were talking about patient confidentiality in an era of data transparency, and suggested that putting "trialists" in the driver’s seat would be a solution because of their proven track record in protecting confidentiality:
by Patrick Vallance and Iain Chalmers
The Lancet 2013 382[9898]:1073 – 1074.

Publishing the results of all clinical trials, whoever funds them, is required for ethical, scientific, economic, and societal reasons. Individuals who take part in trials need to be sure that data they contribute are used to further knowledge, prevent unnecessary duplication of research, and improve the prospects for patients.

Endorsement of these principles is clear in the support received for the UK-based charitable trust Sense about Science’s campaign demanding that all clinical trials should be registered and reported. However, although the campaign recognises the advantages of analyses based on individual participant data (IPD), it is not calling for open access to IPD. The campaign recognises that risks to personal privacy must be taken seriously. These risks are not just theoretical: a recent study was able to identify 50 individuals from public websites that contained genetic information. The research community must work with others to define what constitutes appropriate protection of identifiable information if it is to retain public trust in the use of IPD.

Analyses based on IPD have many advantages. In 1970, The Lancet published a report based on nine trials of anticoagulant therapy after myocardial infarction. That study showed how, compared with analyses of aggregate data, access to IPD facilitated more thorough data checking; identified missing information; prompted renewed searches for key outcomes; enabled longitudinal analyses based on serial measurements in individuals; and offered greater reliability of subgroup analyses. Nearly two decades passed before others began to collaborate widely to use IPD analyses. These initiatives from collaborative trialists’ groups resulted in authoritative analyses of direct relevance to patient care in cancer and cardiovascular diseases, among others. The advantages of IPD analyses have prompted calls for wider access to such data, and we support these calls. However, robust arrangements are needed to minimise the risks of breaches of patient confidentiality. The experience gained within trialists’ collaborations is important, since, as far as we are aware, they have an unbroken record of maintaining patient confidentiality in their IPD analyses…
As much as I respect Drs. Chalmers and Goldacre, that article really pissed me off [the wisdom of the Dixie Chicks…]. It felt like a Trojan Horse to me, a way to derail data transparency by making the data available to an elite cadre. And, by the way, I don’t agree that "patient confidentiality" applies to Clinical Trials. They’re subjects, not patients, and I resent using an honored medical ethic to hide important parts of clinical research. But that’s what I thought then. What I think now is something a bit different that’s closer to the emotional reaction I had to that paper when it came out.

Having done a research fellowship in hard science, I knew more than the average psychiatrist about research methods and statistics, but I didn’t raise the questions I should’ve during the heyday of the SSRI/Atypical feeding fest of the 1990s and beyond. I had other things to study up on and I left the academic authors whose names were on those bylines to provide us with an accurate literature, or at least an honest literature. They didn’t do that [in spades]. It’s not that I suspect Dr. Chalmers and the other "trialists" of being like our now infamous psychiatric KOLs who made conflict of interest a way of life. It’s that I think it was and is our responsibility to keep up with not just the literature, but to have at least an ongoing working understanding of the scientific methodology driving it. No more delegating to the scientific elite. There were three prominent Department Chairmen on Senator Grassley’s COI list, one of whom was an APA president-elect. It’s as simple as the saying, "Fool me once, shame on you. Fool me twice, shame on me."

Writing this blog has led me to "bone up" on my statistics, and I feel comfortable with that part. But what Dr. Carroll called clinimetrics has been new territory for me and I’m self-taught, with help where I can find it. I enjoyed writing about a piece of it last time – effect sizes. It was helpful to put what I thought down on paper, and really helpful to read the comments from people with a more experiential acquaintance or just an interest. I think I’ll do some more of that. Even if I get some of it wrong, it’ll help to have others get my wheels more on the tracks.

If the long-sought data transparency finally does come our way, we need to know what to do with it. It’s important going forward, and equally important looking backwards. There’s a lot of flotsam and jetsam still floating around out there from the previous shipwrecks that has to do with drugs still being used by the buckets·full now that they’re in the generic domain and affordable. Ben Goldacre famously said that "the best disinfectant is sunlight." But that’s only true if a lot of us know what we’re looking at…
Mickey @ 2:38 PM

about my connectomes

Posted on Thursday 11 September 2014

While I haven’t thought about it very much, I made a move from the hardest of medical sciences to the softest without any transition. The first time around was in a lab with scintillation counters printing data to punch cards to feed into Fortran programs that cranked out ANOVAs with p values. And then I was in the world of psychotherapy where there was little in the way of a control group [or for that matter – any groups], and validation was subjective at best – only clinical. There’s a gulf there that seems like it needs some of those connectomes Dr. Insel loves to talk about. But apparently I’m not the only person around with that kind of connectome problem. Many of my colleagues seem to obsess about clinical trials and their p values without addressing what matters – clinical relevance. So what if a drug group in a clinical trial statistically separates from the placebo group, when the people treated don’t even notice the difference? Large groups don’t come to our offices, individuals come. So the question is how to take those numbers that are generated in a clinical trial and turn them into something that really matters to the patients and doctors who inhabit those offices. Likely almost anyone reading this already knows what I’m about to say, but I’m about to say it anyway. So this is that post you might want to skip…

This is the part I want to mention from Agomelatine efficacy and acceptability revisited: systematic review and meta-analysis of published and unpublished randomised trials. As I said above, it may be old hat to many, but it’s still on the growth edge of my own understanding – how to get those numbers into something that has to do with clinical relevance – something that matters to actual people:

    The present systematic review found that acute treatment with agomelatine is associated with a difference of 1.5 points on the HRSD. This difference was statistically significant, although the clinical relevance of this small effect is questionable. No research evidence or consensus is available about what constitutes a clinically meaningful difference in HRSD scores. Antidepressant research has recently faced the issues of [a] a large number of studies reporting negative findings and [b] a possible increase in placebo response rates, which may be caused by changes in selection of study participants and how studies are conducted. Such changes might contribute to a reduction in the likelihood of identifying drug effectiveness in antidepressant drug trials. However, even with this consideration in mind, it is plausible to agree with one of the agomelatine clinical trials that a difference of less than three HRSD points is unlikely to be clinically meaningful. Other publications have discussed a difference of two points as being clinically important, but the effect of agomelatine in our review was also below this threshold. Furthermore, it cannot be excluded that a 1.5-point difference may reflect a weak effect on sleep-regulating mechanisms rather than a genuine antidepressant effect.

    In a recent statement, the EMA Committee for Medicinal Products for Human Use [CHMP] pointed out that, in addition to a statistically significant effect in symptom scale scores, the clinical relevance has to be confirmed by responder and remitter analyses and that ‘… results in the short-term trials need to be confirmed in clinical trials, to demonstrate the maintenance of effects’. For dichotomous outcomes, agomelatine was not superior to placebo in terms of relapse and remission rates, but was statistically superior to placebo in terms of response rates. The difference in response rates corresponds to an absolute risk difference of 6% and to a number needed to treat [NNT] of 15. Based on an analysis of regulatory submissions, which found an average difference of 16% in the response rates between common antidepressants and placebo, EMA CHMP states that this difference ‘… is considered to be the lower limit of the pharmacological effect that would be expected in clinical practice.’ Other authors considered an NNT of ten or below as clinically relevant. Clearly, the effect size in the present analysis is of doubtful clinical significance. This point is strengthened by the fact that depression is a clinical condition for which many active antidepressants are already available.
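Since the NNT arithmetic goes by quickly in that passage, here’s a minimal sketch of it [the response rates below are illustrative stand-ins, not numbers taken from the review] – the number needed to treat is just the reciprocal of the absolute difference in response rates:

```python
def nnt(drug_response_rate: float, placebo_response_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk difference."""
    return 1.0 / (drug_response_rate - placebo_response_rate)

# Illustrative numbers only: a response-rate difference in the 6-7% range
# gives an NNT in the mid-teens...
print(round(nnt(0.54, 0.475), 1))   # 1 / 0.065 ≈ 15.4
# ...while a 16% difference, the lower limit the EMA CHMP cites, gives an NNT around 6.
print(round(nnt(0.50, 0.34), 1))    # 1 / 0.16 ≈ 6.2
```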

Over there on the right is our old friend, Mr. Normal Distribution. If you take a large sample of almost anything, the values will look like this with the measurement variable across the bottom [abscissa] and the frequency of the values on the left axis [ordinate]. So you can describe a dataset that fits this distribution [most of them] with just three numbers: μ [the mean, average]; n [the number of observations]; and σ [the standard deviation, an index of how much variability there is within the data]. 95% of the values fall between two standard deviations on either side of the mean. That means that measurements outside those limits have only a 5% chance of belonging to this group – thus p < 0.05 [p as in probability].
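If you’d rather see those three numbers in action than take my word for it, here’s a minimal sketch [Python, with arbitrary made-up numbers] – draw a big sample, compute μ and σ, and count how much of it lands within two standard deviations of the mean:

```python
import random
import statistics

random.seed(42)

# Simulate a large sample from a normal distribution (mean 25, SD 6 - arbitrary numbers)
sample = [random.gauss(25, 6) for _ in range(100_000)]

mu    = statistics.mean(sample)    # μ, the mean
sigma = statistics.stdev(sample)   # σ, the standard deviation
n     = len(sample)                # n, the number of observations

within_2_sd = sum(mu - 2 * sigma <= x <= mu + 2 * sigma for x in sample) / n
print(f"μ = {mu:.1f}, σ = {sigma:.1f}, n = {n}")
print(f"fraction within ±2σ of the mean: {within_2_sd:.3f}")   # ≈ 0.95, the familiar 95%
```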

In the examples on the left, all three might be statistically significant, but looking at the p value won't tell you anything about whether the difference is something individual patients would actually notice. It might be a simple chemical finding that's imperceptible, or it might be a power-house. But p doesn't tell you that. Regulatory bodies, like the Food and Drug Administration [FDA], were established to keep inactive patent medicines off the medical market, not to direct drug usage or attest to the magnitude of their effects. So the FDA insists on two well-conducted Clinical Trials that demonstrate statistically significant differences between the drug and a placebo. The main task of the FDA Approval process is safety – what are the drug's potential harms? Efficacy is a softer standard, added on in 1962.

There are mathematical ways to use the data generated in a Clinical Trial to get at something more worth knowing – How strong is the effect of the drug? Just looking at the figures on the left demonstrates one such method – the Mean Difference between the two groups: how far apart are their μ values, measured in this example in units on the HAM-D Scale? That could be used to compare different studies if they all used the HAM-D [but they don't]. Another problem: what if the variability [σ] of one group is very different from that of the other group? So they came up with a way to standardize the Mean Difference by dividing it by the pooled standard deviation [of the two groups combined]…
[μ1 – μ2] ÷ σ1+2
… converting it into standard deviation units. It's called the Standardized Mean Difference or Cohen's d or sometimes something else. It can be used for divergent populations or even studies using different rating scales, so long as they purport to measure the same parameter [eg HAM-D, MADRS, CDRS-R, K-SADS-L – all depression scales]. These are all variants of the Effect Size – how strong is the measured effect?
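For the arithmetically inclined, here's a minimal sketch of that calculation in Python [my own illustration – the numbers are made up, not anything from the review]:

```python
# A minimal sketch of the Standardized Mean Difference [Cohen's d]:
# the difference in means divided by the pooled standard deviation.
import numpy as np

def cohens_d(drug, placebo):
    """[mean(drug) - mean(placebo)] divided by the pooled standard deviation."""
    n1, n2 = len(drug), len(placebo)
    v1, v2 = np.var(drug, ddof=1), np.var(placebo, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (np.mean(drug) - np.mean(placebo)) / pooled_sd

# made-up HAM-D improvement scores, purely for illustration
drug    = np.array([12.0, 9.5, 11.0, 14.0, 8.0, 10.5])
placebo = np.array([10.0, 8.5, 9.0, 12.5, 7.0, 9.5])
print(f"Cohen's d = {cohens_d(drug, placebo):.2f}")
```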

There are no absolute standards for a meaningful Standardized Mean Difference. Cohen suggested 0.2 = small; 0.5 = medium; 0.8 = large, but you could gather a bunch of statisticians and they could argue about that for the whole evening. The place where Effect Sizes are routinely used is in meta-analyses that compare multiple studies and/or multiple drugs. I guess you could say it's a powerful tool for comparison, often combined with a representation of the 95% Confidence Interval. In the example on the right from the article quoted above, the Standardized Mean Difference is on the abscissa with the Confidence Interval as the horizontal line [in this example, 99% Confidence Intervals were used]. This is the format used in the Cochrane Systematic Reviews, called a forest plot, and it tells us a lot. The top 5 studies are unpublished Clinical Trials of agomelatine vs placebo. The weighted average is 0.08 in favor of agomelatine [which might as well be zero], and none are significant at the 1% level [the Confidence Interval line crosses zero]. The bottom 5 studies are published studies of agomelatine. The weighted average is 0.26, with only one significant at the 1% level. Overall, the weighted average Standardized Mean Difference is 0.18. In everyday language: a trivial effect with clear publication bias.
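To make the "weighted average" part concrete, here's a minimal sketch of the usual inverse-variance pooling behind a forest plot's summary estimate [again my own illustration with placeholder numbers, not the review's data]:

```python
# A minimal sketch of how a meta-analysis pools study-level Standardized
# Mean Differences: a fixed-effect, inverse-variance weighted average,
# the same idea behind a forest plot's summary estimate.
import numpy as np

def pooled_smd(smds, standard_errors):
    """Return the weighted-average SMD and its 95% Confidence Interval."""
    smds = np.asarray(smds, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    weights = 1.0 / se**2                     # more precise studies count for more
    pooled = np.sum(weights * smds) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# placeholder study results, purely for illustration
smd, ci = pooled_smd([0.05, 0.30, 0.10, 0.25], [0.12, 0.15, 0.10, 0.14])
print(f"pooled SMD = {smd:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```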

So why go through all of this simplified statistical mumbo jumbo? It's because these seasoned Cochrane meta-analyzers take a very credible stab at translating their findings into the realm of clinical relevance in the colored paragraph above. Whether you use the Standardized Mean Difference of 0.18 evaluated against the benchmarks I quoted above, or the Mean Difference of 1.5 HAM-D units as they did, these studies may be statistically significant at p < 0.05 [the published ones are], but they are able to conclude that the difference is not clinically relevant, particularly when you look at the studies Servier neglected to let us see. So there's my mythical connectome between the numeric part of my brain and the part that practices clinical medicine. Very satisfying.

So why don’t we see Effect Sizes plastered all over these Clinical Trials that have flooded our journals? They use the same data as the statistical tests that are invariably prominently displayed. Would you publish them if your main goal was to sell agomelatine? Probably not, because nobody would be very excited about either prescribing it or taking it. You’d display Effect Sizes and their Confidence Intervals if you wanted to give clinicians and patients as accurate a notion as possible of how effective the medication might be in relieving the targeted symptom – pending the later results of the reported responses in our offices, where clinical medicine meets real live people in pain.

I’ve way over-simplified some of this, probably didn’t get it 100% right, and left out how you measure Effect Sizes for categorical variables [eg response vs non-response] using the Odds Ratio or the NNT [Number Needed to Treat] [mentioned in the second paragraph above]. All I wanted to do was illustrate how this group is able to go beyond the simple statistical analysis found in most of these Clinical Trials by giving us some information that might help us in the actual task at hand [us being the clinician and a help-seeking patient]…
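For completeness, here's a minimal sketch of those categorical measures [the response rates are placeholders of my own invention, not figures from the review]:

```python
# A minimal sketch of effect measures for dichotomous outcomes:
# the Absolute Risk Difference, the Number Needed to Treat [NNT],
# and the Odds Ratio, computed from two response rates.

def categorical_effect(p_drug, p_placebo):
    """Return (absolute risk difference, NNT, odds ratio) from two response rates."""
    ard = p_drug - p_placebo
    nnt = 1.0 / ard if ard != 0 else float("inf")   # patients treated per extra responder
    odds_ratio = (p_drug / (1 - p_drug)) / (p_placebo / (1 - p_placebo))
    return ard, nnt, odds_ratio

# placeholder response rates, purely for illustration
ard, nnt, odds = categorical_effect(0.55, 0.45)
print(f"ARD = {ard:.0%}, NNT = {nnt:.0f}, OR = {odds:.2f}")
```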
Mickey @ 11:01 PM

in need of another British Invasion

Posted on Thursday 11 September 2014


Pharmalot: WSJ
by Ed Silverman
September 11, 2014

In yet another sign of frustration with drug makers that do not release clinical trial data, the U.K. agency that is responsible for recommending coverage of medicines will ask European regulators for data if companies refuse to do so. The U.K.’s National Institute for Health and Care Excellence issued a statement yesterday calling for increased transparency from the pharmaceutical industry, which has often resisted calls to release some data over concerns that trade secrets and patient confidentiality may be breached.

Some drug makers, however, have recently taken steps to release data and make the information available to researchers. GlaxoSmithKline, in particular, has tried to lead this effort in the wake of a $3 billion settlement with U.S. authorities that was paid, in part, over allegations that some trial data was never disclosed. “We strongly believe that all clinical trial data should be made available so that those with responsibility for developing guidance and making treatment decisions have all the necessary information in hand to help them do so safely and efficiently,” Carole Longson, the director of the Health Technology Evaluation Center at NICE, says in a statement. In explaining its decision, NICE pointed to a recent controversy surrounding Roche and its battle with researchers, who accused the drug maker of refusing to release data about its Tamiflu medication. The researchers from the Cochrane Collaboration subsequently released a study showing the treatment was not proven to reduce the spread of the flu or its complications. Roche later called the analysis “seriously flawed” and has since agreed to provide greater disclosure.
The persistence of Peter Doshi, Tom Jefferson, and the Cochrane Collaboration in their quest for the Tamiflu data, the BMJ’s campaign in that fight, the AllTrials petition, and Fiona Godlee’s and Ben Goldacre’s appearances in Parliamentary hearings have apparently had a solid impact on the British government – now stepping up to the plate for data transparency.
NICE has actually been in favor of full disclosure for some time and, in fact, is one of the supporters of the AllTrials campaign, which was created last year to pressure drug makers for greater disclosure. The effort came about in response to the Tamiflu flap. The agency notes that, so far, nearly 80,000 people and 507 organizations have signed the AllTrials petition calling for increased trial data disclosure. Whether NICE would obtain the kind of data from regulators the agency imagines is uncertain. Next month, the European Medicines Agency is expected to release a policy on trial data disclosure, but the regulator has been accused of back pedaling on its commitment to transparency in the wake of settling a pair of court actions with drug makers that attempted to prevent the EMA from releasing data about certain drugs. As we have reported previously, European Ombudsman Emily O’Reilly claimed the EMA revised its policy in order to adhere to “the wishes” of the pharmaceutical industry and is now reviewing redacted records from those court cases – which involved AbbVie and InterMune – for clues to the EMA change in policy.
While those suits from AbbVie and InterMune are still discussed as if they represent the actions of two individual companies, that’s not altogether correct. Remember the leaked memo from a year ago [appended to the end of this post]. Those suits were part of an industry-wide initiative that seems to have become something of a lead balloon. At least in my case, they made me aware that the whole idea that the data from the pharmaceutical clinical trials were their proprietary property was not based on any particular law or decision. It was more something they had seized, but treated as if it were in the Magna Carta. Their arguments about Commercially Confidential Information and Patient Confidentiality began to melt and the tide began to turn. And as things are playing out, the EMA "U turn" seems to have thrown gasoline on the fire rather than put it out [see a crushing setback…, repeal the proprietary data act…, except where necessary to protect the public…].
NICE is not alone in seeking greater transparency. A recent survey found that an overwhelming number of members of the Royal College of Physicians in the U.K. also believe that such information should be disclosed and accessible. To wit, 81% agreed that drug makers have a moral duty to make completed data available to trial participants, the public and the scientific community. “My personal view on this is I can see no reason whatsoever not to publish all the data, and I think there’s a moral imperative from the point of view of the patients who’ve been part of the trials that their time, their effort shouldn’t be ignored,” NICE chair David Haslam told the U.K. House of Commons last week, according to NICE. “I think everything should be in the public domain.” As part of new guidelines for product reviews, he noted that NICE has strengthened procedures to ensure that medical directors from drug makers sign a declaration when they make a submission to the agency and declare they have identified all clinical trial data.
And as I noted in in praise of anonymous, contactable members of the public, the actions of the ABPI [the British PhRMA] are pretty surprising [and welcomed]:
A spokesman for the Association for the British Pharmaceutical Industry sent us this:
    “The ABPI supports NICE’s decision to continue to ask companies to submit all relevant individual clinical trial data. It is welcome that NICE have recognized that pharmaceutical companies are the primary source of clinical trial information in the first instance and that companies should have a direct relationship with NICE in this regard. We believe that it is appropriate that NICE should only approach the European regulatory authorities if the pharmaceutical companies are unable to provide the requested information. The ABPI is committed to greater clinical trial transparency: we believe that clinical trial results should be posted in publicly accessible registries/databases and published in the scientific literature in a timely manner… Furthermore the pharmaceutical industry has been, and continues to be, committed to evolving and addressing the issues relating to transparency in clinical research.”
It can’t be lost on anyone that all of this positive movement is happening in Europe, in England, and not yet here in the good old US of A. It seems like it’s time for another British Invasion


[see a closing argument… and  How The Guardian’s Bias Towards One Leaked Memo Proves Greater Transparency is Needed From All]
[also notice all the trojan horses they had planned…]

Dear members and colleagues,

please find below a message from Richard Bergstroem, EFPIA DG with respect to the various elements of the Clinical Data sharing debate, the assignment of responsibilities (including work with US PhRMA colleagues) and next steps

A. Forthcoming industry commitment, incl advocacy:
    The EFPIA Board has approved the draft position paper developed jointly by PhRMA and EFPIA. The final version is attached, and is now subject to confirmation by the PhRMA Board two weeks from now. PhRMA and EFPIA plan concomitant press releases in the week of July 22. The advocacy plan, previously approved by the two Boards is underway, and follows four strands:
    1. Mobilising patient groups to express concern about the risk to public health by non-scientific re-use of data.
    2. Engaging with scientific associations to shape the industry commitment for data sharing, and to discuss concerns about re-use of data.
    3. Work with other business sectors that are also concerned about release of trade secrets and commercially confidential data.
    4. For the long-term, build a network of academics across Europe that has the capacity to counteract mis-use of data (that is deemed to be happen in any case).
    There will be a series of meetings in Brussels, organised jointly by PhRMA and EFPIA, in the week of August 26 to advance these strands. This work (commitment and advocacy) is coordinated by [Redacted], in close cooperation with PhRMA ([Redacted] and [Redacted]), with oversight by Richard Bergstroem and [Redacted], PhRMA.
B. EMA consultation on draft :
    On June 24th , the EMA published its revised policy on the publication and access to clinical trial data for consultation. Comments are invited and should be provided to the EMA by 30 September 2013. Whereas the press release was quite balanced, the detailed proposal raises concerns:
    1. No process outlined to discuss CCI in CSRs prior to release.
    2. Raw data: unenforceable controls to ensure robust and scientifically credible secondary analyses.
    3. Requirement for anonymised raw data to be supplied at submission negates EMA’s responsibility for release of PP information.
    4. Publication of CSRs from withdrawn or unsuccessful submissions could undermine future commercial viability of product.
    5. Identification of study personnel.
    The EMA document takes into consideration the outcome of the process run by the 5 CT advisory groups earlier in the year to which EFPIA contributed through the input prepared by the 5 Temporary Working groups (TWG) set up under the SRM PC auspices.

    A detailed response will be prepared by a joint EFPIA-PhRMA team. The work will be led by [Redacted], Lilly, [Redacted]. From the EFPIA side the EFPIA TWG chairs (Rules of engagement, Patient confidentiality, good analysis practice, CT data format, legal aspects) will be part of the drafting group: [Names of four individuals within the drafting group redacted] PhRMA will assign a small group of people from the bigger EMA data disclosure WG. The drafting group will tentatively have a TC July 9. The final draft will be shared for consultation with the broader membership later this month.
C: EFPIA-PhRMA intervention in the AbbVie case:
    [Name], Pfizer, leads this work, in close cooperation with PhRMA and external legal counsel.
D: Clinical Trial Regulation:
    Advocacy directed at Council (and EC and EP) will focus on:
    • avoiding definitions of CCI in the CTR itself,
    • seek to delete preamble text that CSRs do not "in general" include CCI (even if current text is acceptable as fall-back position).
    The EFPIA PACT(Public Affairs on Clinical Trials) is responsible and will work closely with national associations and Brussels staff.
Regards,
[Redacted]
Mickey @ 9:49 AM