retro…

Posted on Thursday 4 December 2014

    a·nach·ro·nism  (ə-năk′rə-nĭz′əm)
    noun
    1. The representation of someone as existing or something as happening in other than chronological, proper, or historical order.
    2. One that is out of its proper or chronological order, especially a person or practice that belongs to an earlier time.

    [from Latin anachronismus, from Greek anakhronismos a mistake in chronology, from anakhronizein to err in a time reference, from ana- + khronos, time]

Perhaps I live in a rarefied atmosphere out of touch with the pulse of things, but this strikes me as a relic of days hopefully behind us – something in the range of a double-knit leisure suit with bell-bottoms, or perhaps the Ghost of Christmas Past in a Dickens holiday TV special:
Remission in MDD
CME Outfitters, along with faculty experts Charles B. Nemeroff, MD, PhD, Roger S. McIntyre, MD, FRCPC, and Michael E. Thase, MD welcome registration for the upcoming neuroscienceCME Live and On Demand Program: Remission in MDD: What Does the Future Hold for Clinicians and Patients?
December 04, 2014

CME Outfitters [CMEO], a leading accredited provider in continuing medical education, announces its upcoming neuroscienceCME Live and On Demand activity, Remission in MDD: What Does the Future Hold for Clinicians and Patients? The live program will launch on Wednesday, January 7, 2015 at 12:00pm ET. Faculty experts Charles B. Nemeroff, MD, PhD, Roger S. McIntyre, MD, FRCPC, and Michael E. Thase, MD will lead an interactive one-hour discussion that will challenge clinicians to reinvent the future of major depressive disorder treatment which includes increased patient participation in the treatment process, measurement-based care and a re-definition of treatment to remission as improvement of all symptoms of MDD with functional recovery. Psychiatrists, primary care physicians, nurse practitioners, physician assistants, nurses, pharmacists, social workers, clinical case managers, and other health care professionals who share the goal of achieving remission in patients with MDD are encouraged to register.

“We have a faculty panel that exhibits some of the brightest minds in mental health care today,” said Jan Perez, CCMEP, CME Outfitters Managing Partner. “It has been a pleasure working with these experts, and I think the viewers are going to walk away from this program with many new clinical tactics that they can immediately implement into practice to improve the lives of their patients with MDD.”

At the end of this activity, participants should be able to:
  • Evaluate all patients with major depressive disorder [MDD] for residual symptoms with a validated tool at each visit.
  • Initiate a treatment plan that involves patient participation to address residual symptoms of depression.
This program will present a lively and interactive one-hour clinical discussion, followed by 30 minutes of Q&A via telephone and/or web. Click here for more information about the faculty, financial support, credit information, disclosures, and to register.
Well, this really is a blast from the past even if it does say reinvent the future in the blurb. It’s given by three KOLs from the heyday of what we might call the Antidepressant Era – a time when we’d forgotten the distinction between Major Depression like that seen in Manic-Depressive Illness and Melancholia and the much more common depressions once called Neurotic Depression. We’d forgotten that depression, even the true Majors, was known to be time-limited. We’d forgotten that people get depressed because of their life circumstances and were seeing depression more like an affliction of the physical kind, an entity. It was a time when the notion of treating to remission was all the rage, coming to us from TMAP, the STAR*D study, CO-MED, that series of programs and studies arising from Drs. Rush and Trivedi at UT Southwestern. It was a time when depression not responding to medicine gained a moniker of its own – Treatment Resistant Depression [TRD] – and was being attacked with schemes like sequencing, combining, or augmenting to enhance antidepressant effectiveness. It was also a period when these presenters were at the top of their game, with Dr. Nemeroff holding forth at Emory. Our journals were filled with review articles and novel treatments. But those were the days before Senator Grassley had revealed the unreported pharmaceutical income flowing into pockets where it didn’t belong [and Dr. Nemeroff abruptly changed universities]. So it’s hard to believe that there’s anyone who hasn’t heard all of what’s advertised here. It’s been around for such a long time. But that’s not why I posted it. There’s a new wrinkle in the fabric:
  • Evaluate all patients with major depressive disorder [MDD] for residual symptoms with a validated tool at each visit.
Back in what I’m calling the Antidepressant Era, there was a push for automation, short-cuts in following patients. They talked about depression like it was a condition like anemia and that one could follow it with something like the serum hemoglobin. Lacking such a marker, there were any number of tries at inventing a simple surrogate. Dr. Spitzer and colleagues developed a brief scale for measuring depression, the PHQ-9 [Patient Health Questionnaire], based on the then-new DSM criteria and distributed by Pfizer. When the STAR*D came along, there were other self-rated depression scales, the IDS [Inventory of Depressive Symptomatology] and the QIDS [Quick Inventory of Depressive Symptomatology]. The QIDS was developed as an automated telephone option. And in the STAR*D reporting, these instruments [clinician administered, self administered, and telephone administered] were mixed in ways I never could quite figure out. Also, that was a time of algorithms for choosing the antidepressants, including computer programs to pick and change the drugs – though the actual NIMH-funded study of computer-directed treatment never got off the ground – the clinicians wouldn’t use it [IMPACT]. All of this was called Measurement Based Care. It was as if this entity called MDD could be diagnosed and treated automatically with almost no human contact [see a thirty-five million dollar misunderstanding…].

Recently, there’s a new kid on the block – CAT-D. A couple of years ago, Dr. Robert Gibbons, a statistician, teamed up with Dr. David Kupfer [in charge of the DSM-5 Revision] and others to introduce a new test – a computerized scale using artificial intelligence technology to produce a depression index by answering only a few questions on the computer, or a smart phone. It was later revealed that this was a future commercial product, already incorporated, that was in line with the notion of a new "Dimensional" axis for diagnosis planned for the DSM-5 – raising questions about the involvement of the company’s principals, all of whom had a place on the DSM-5 Task Force, in developing such a commercial product without declaring the COI [see open letter to the APA…]. Since apologizing for not noting that this is a commercial product in their journal articles [Failure to Report Financial Disclosure Information], these authors/entrepreneurs haven’t said much about their instrument, obviously targeted towards screening for depression or following treatment.

So when I read about this retro and anachronistic CME from Dr. Nemeroff et al, I understandably wondered if the "validated tool" mentioned was the CAT-D, about to make its debut on the world stage as a way to "Evaluate all patients with major depressive disorder [MDD] for residual symptoms" – its time come round at last. It occurred to me because the "validated tool" is so cryptic, and Dr. Gibbons has been at Dr. Nemeroff’s University of Miami Grand Rounds in the last year or so [and they run in the same circles?]. Just a bit of a paranoid fantasy I’m having. Oh yeah, don’t miss the anachronistic COI statements.
Mickey @ 4:33 PM

a keeper…

Posted on Tuesday 2 December 2014

Well, it seems far, far away from now. It was the 1960s, and the notion that someone on a medical faculty might have financial ties to a pharmaceutical company was unheard of. Coming this way into the 1970s to a psychiatry residency and faculty appointment – the same unheard of. I wrote about my first encounter with a drug company sponsored event in a training program a few years back [repressed memories…]. It was 1983-ish. There was a new chairman, and it was a new day [post 1980 DSM-III]. The new chairman wanted me to start a Grand Rounds series with outside speakers. We had no funds for such a thing, but he said not-to-worry, he’d take care of it. Here’s what I wrote four years ago looking back:
    At one of the first Grand Rounds, he had someone from the faculty of the University of Georgia. I thought it was a great idea. There hadn’t been much dialog between Emory [Atlanta] and the Medical College of Georgia [Augusta], and this seemed like a good way for us to get together. The lecturer who came had pretty slides, but as he talked, it began to sound like a sales pitch for Mellaril [Sandoz] rather than a scientific presentation. I was  disappointed  appalled. I thought the new Chairman would be embarrassed, but he didn’t seem to be. I left Emory at the end of that year. It was mutual, I think. I just didn’t fit anymore. So I didn’t think about that presentation for a decade. The speaker that day was Dr. Richard Borison, who became Chairman of Psychiatry at the Medical College of Georgia. He was a drug researcher, and I started seeing his name on journal articles, always drug studies…

Over a decade later, Chairman Richard Borison’s Clinical Research Center was "busted" and he was sent off to prison for embezzling millions from the Medical College of Georgia [Drug Money Patients Worsened; Little Oversight Provided]. But I’m getting ahead of myself. At the time, I was both appalled at having a drug company sponsored infomercial for a Grand Rounds and understood why the new chairman did it. We were flat broke. Money for training had just dried up and we were literally running on fumes. All of this happened shortly after the DSM-III appeared. Others more in-the-know than I assure me that industry had nothing to do with the DSM-III Revolution – that it was later opportunism. But I still wonder from time to time.

In the years that followed, I was no longer directly involved, but even from across town, it was obvious that they were in the land of milk and honey through Emory’s Nemeroff years [1991-2009], and there was nothing subtle about the industry sponsorship. It was on display – the norm. For a long time, industry ties weren’t apparent in journal cites. Then they were declared in COI statements [and they were everywhere!]. So I find reading this Editorial in the British Medical Journal that goes even further very satisfying:
Editorial
by Mabel Chew, Catherine Brizzell, Kamran Abbasi, and Fiona Godlee
British Medical Journal 2014 349:g7197.

Zero tolerance on education articles with financial links to industry
The BMJ was one of the first medical journals to seek declarations of competing interests from authors. Our focus is on financial competing interests as we believe these to be the most identifiable. We do, however, understand that competing interests come in many forms and we also routinely ask authors to declare relevant non-financial competing interests. The governing principle has been that transparency is a panacea. We placed faith in this principle, but mounting experience and evidence tell us that we were only half right. Transparency remains essential, but it isn’t sufficient to eliminate bias or perception of bias.

We believe this risk of bias is particularly important for clinical educational articles that are designed to guide patient care, when authors’ biases may be less visible to general medical readers. For some years we have sought to minimise as well as declare competing interests for these articles. Recently we introduced more active management of competing interests, requiring authors to complete a more detailed declaration and excluding authors with close ties. Now we have decided to go a step further, as heralded three years ago. From next year our clinical education articles will be authored by experts without financial ties to industry [box]. By industry we mean companies producing drugs, devices, or tests; medical education companies; or other companies with an interest in the topic of the article. We are phasing in this policy to start with editorials, clinical reviews, and most practice series. We hope that by the end of 2016, this will have extended to the rest of our education section: our specialist state of the art reviews and diagnostics and therapeutics series.
  • Competing interest definitions and process for The BMJ’s editorials and education articles [including clinical reviews, practice articles, and state of the art reviews]
  • “A conflict of interest arises when a person has a personal or organisational interest that may influence or appear to influence the work they are doing. Usually this is a financial interest, but it may also be non-financial.”
  • We ask authors to declare interests in the 36 months before the declaration and those known to be going to occur during the next 12 months
  • Authors are asked to complete a form, available at www.bmj.com/sites/default/files/attachments/resources/2011/07/current-bmj-education-coi-formfinal-1.doc. For unsolicited articles, we also ask who prompted submission and whether professional writers contributed
  • Each author’s declaration is carefully assessed by the handling editor, and may be discussed at a regular editors’ meeting, to ensure our decisions are consistently and fairly applied by the editorial team.
  • We have started publishing authors’ competing interests forms alongside the articles, and advise authors of this when they send their forms. We plan to do so for all editorials and education articles
  • From 2015, we will roll out a policy of editorials and clinical education articles authored by experts without financial ties to industry [companies producing pharmaceuticals, devices, or tests; medical education companies; or other companies with an interest in the article topic]

Shift in culture
Why are we doing this? The first reason is that making clinical decisions based on information biased by commercial interests can cause harm, as happened with cardiotoxicity from rosiglitazone and rofecoxib and continues to happen with hydroxyethyl starch. We also believe that the educational content we publish will have more impact if readers can trust it. We know that readers consider research papers written by authors with declared financial links to industry to be less important, relevant, rigorous, and believable; they are also less willing to prescribe drugs evaluated in such papers. Finally, we want to encourage a shift in the culture of medicine. We think that we can help to do this by promoting authors without financial ties to industry and offering them appropriate prominence and visibility.

Financial competing interests are endemic to the culture of medicine and are rarely driven by malign motives or actions. The mechanisms of influence are diverse. An author of a review article might be an advisory board member for companies selling drugs for that condition, a commentator might have received honorariums from industry for lectures on the topic, or an editorialist on a disease might be a patent holder for one of its diagnostic tests. Psychological research suggests that biases may operate subconsciously. Our decisions not to proceed with an article or an author are not made lightly. Nor are they intended to pass judgment on an author’s integrity. However, we cannot ignore the mounting evidence of systematic attempts by commercial interests to corrupt the literature and influence clinical decisions. Internal company documents revealed during litigation expose practices aimed at influencing clinicians such as funding medical meetings, dinners, studies, and articles. Many clinical practice guidelines are little more than industry marketing tools because of the financial competing interests of their authors and sponsors…
During all those years, I didn’t like the prominence of pharmaceutical influence everywhere, but may have mistakenly thought it was like the Grand Rounds I mentioned – desperation financing. It was that, but so much more. It was ghost writing, jury-rigged Clinical Trial reporting, Speaker’s Bureaus, payola, etc. It wasn’t just money flowing to institutions, it was going directly into the pockets of academic physicians and other KOLs. Industry was able to buy prominent doctors with a small fraction of the money they made by having them on board. I would now say that my little Star Wars graphic isn’t that far off the mark. It was something of a Galactic Empire and it’s still there. I think of psychiatry as the worst offender, but that may only be because I know more about what went on. But when I look at my television set or hear the MRI machines whirring in the background when I go in a hospital, I suspect it’s all over Medicine proper too [it being outside influences pushing the profitability of Medicine wherever it can be pushed in whatever way it can be pushed].

And we can’t operate on a bad apples theory. Too many physicians in high places were [are] on board [just look at PROPUBLICA’s Dollars for Docs or the Sunshine Act site]. We have to swallow our disillusionment and assume that enforceable [and enforced] stops will be required to keep it from continuing and/or recurring [same it]. We have now a whole generation of physicians who’ve grown up in this climate, with too many participating, and they need special attention. So it’s not just a matter of putting restraints on industry, it’s equally important to focus on physician collusion and respond decisively.

Fiona Godlee, BMJ Editor

And as for Dr. Fiona Godlee and her editorial staff at the BMJ – center of right-thinking for a long time: she’s testifying in Parliament, leading in the fight for the Tamiflu data, setting policies like this one, doing everything in her power to lead us out of the wilderness and "encourage a shift in the culture of Medicine." Does this kind of activism count in Stockholm? Do journal editors qualify for Knighthood? Is there some way to recognize her persistence and her savvy in knowing what rocks to look under? Could she be cloned? Fiona Godlee is a definite "keeper."
Mickey @ 12:19 PM

déjà vu…

Posted on Monday 1 December 2014


The Lancet Psychiatry. 2014 1[6]:403.

At The Lancet Psychiatry we aim to publish research that illuminates and changes clinical practice. Changing practice requires a high standard of evidence, although existing practice does not always have a solid scientific base. Management of mental health often seems intuitive, so many interventions have been developed and rolled out on the basis of good intentions rather than good science [eg, post-traumatic stress disorder counselling after natural disasters]. We do not deny the role of clinical expertise and the art of the individual psychiatrist, but we believe that studies aimed at altering the status quo should be rigorous in planning, execution, and communication.

The acknowledged gold standard in terms of research is the randomised controlled trial [RCT], a method that should be applied to all types of intervention, wherever possible. In psychiatry, there are also good pragmatic studies of intervention outcomes where RCTs are not feasible, the most informative being those that combine cross-sectional and longitudinal observations. Excellent work is done using registries, particularly where health-based registries can be linked with others, such as those held by the educational and justice systems. For each type of study, there are recognised guidelines that list what should be measured and reported [compiled by the EQUATOR network]. The best-known is the Consolidated Standards of Reporting Trials [CONSORT] statement. CONSORT is itself evidence-based, and for each item there is an explanation of key methodological issues and the importance of reporting that item. Many medical journals now include CONSORT in their instructions to authors, but the slow rate of progress can be seen by looking at the search results for any meta-analysis — too frequently, studies have to be excluded because insufficient information is available.

Some claim that psychiatry, especially psychotherapy, is not suited to such rigorous approaches. Psychotherapist and author Darian Leader has stated that “the criteria for the evaluation of therapies has moved to a very narrow view of evidence, based on the medical model of randomised-controlled trials … with a control group, and so on. You can’t do that with therapy, because the whole point of therapy involves the beliefs the person has initially about their treatment or therapeutic experience. So you can’t randomly assign someone to a therapist.”

The Lancet Psychiatry admires this focus on the individual, but believes a strong evidence base is both possible and necessary. In the interests of all those who entrust their lives and well being to mental health professionals, it is time to level the playing field. All interventions should be assessed to the same standards of evidence, from psychopharmacology and psychotherapies, to brain stimulation technologies and new approaches such as video games and apps. Trials should be registered in a publicly available database; the protocol should be available, and the methods and results reported must match the protocol to avoid publication bias. Most important, sufficient information should be provided to enable replication of the study. The basis of scientific research is validation and refutation. For compounds, this means chemical composition, formulation, and dosing schedules. For psychological therapies, it means details of how many sessions, the length of sessions, availability of a manual where a specific therapy was given, details about the therapist, and evaluation of differences between specific therapists and sites. For training interventions, it means access to the training material, and details of the trainers, sessions, etc. This approach also applies to models of care: if something has the potential to be useful, it must be replicable. Circumstances will differ and where people are involved there will always be considerable variation, but providing relevant detail will enable other researchers to test the data and perhaps to explain different outcomes. Where trials are not possible, data on outcomes — including those associated with general well being and function rather than specific symptoms — are necessary. One essential set of observations is that of adverse events. 
Again, there are people who claim that the different nature of medicine and psychotherapy means that, although it is relevant to collect adverse event data for the latter, to do so in the same manner for the former is the equivalent of comparing apples with oranges. However, The Lancet Psychiatry believes that if an intervention has the potential to confer benefit, it also has the potential to cause harm.

This rigour is needed not just to satisfy journal editors but to convince researchers, clinicians, patients, and ultimately governments. If we want mental health services to be free at point of access, we must demonstrate that they work.
I haven’t run across this argument for some time. It was the constant rallying cry in the lead-in to the neoKraepelinian revolution and the DSM-III in the 1970s – directed primarily against the psychoanalysts and other psychiatrist psychotherapists who were billing medical insurance for their services. The outcome was a split in mental health care, at least in mental health care paid for by third party payers. Thereafter, psychiatrists stuck to the biological side of the street, primarily focused on psychopharmacology.

Carrier-reimbursed psychotherapy didn’t disappear. It was taken over by other mental health specialties with controlled frequency, duration, and negotiated rates of payment. A psychiatrist colleague recently suggested to me that a physician [like me] shouldn’t be a psychotherapist based on a different argument – economics – saying that it costs too much to train a physician and a physician psychotherapist would therefore not be cost-effective [the latter part implied].

I am a psychiatrist and psychotherapist, but I didn’t post this article to argue about psychotherapy. There is much about what it says that I agree with and I have no complaint about Lancet Psychiatry having high scientific standards for what they publish. Mostly, I just appreciated that the article doesn’t have the usual contempt, sarcasm, or ludicrous examples that often accompany pieces containing the word psychotherapy. My reason for posting it has to do with something else entirely – this statement:
    The acknowledged gold standard in terms of research is the randomised controlled trial [RCT], a method that should be applied to all types of intervention, wherever possible.
There was a time in my life when I would have automatically agreed with that statement. In a first career, I was a very medical physician involved in biological research. I was in love with any and every thing about the scientific method. That’s still true. But over the last six years as I’ve spent a lot of time looking at the domain of Randomized Controlled Trials [RCTs], my thoughts about that statement have changed dramatically – even though it seems like it ought to be correct.

The most obvious objection is the extent to which RCTs can be corrupted, manipulated, jury-rigged, etc. The scientific misbehavior in the RCTs of psychiatric drugs is staggering. I could never have imagined anything like it could even happen. So I spend my days involved in trying to ensure that the kind of perverse science we’ve seen in psychiatric and other drug research never happens again. At the moment, access to the raw data from trials with independent analysis seems the best approach to what has gone before.

But even without the corruption, there’s more to be said about RCTs. I think they are essential as a beginning take on cataloging the adverse effects of drugs [psychiatric and otherwise], but only a beginning. Many adverse effects come with chronic use, so ongoing reporting is an essential ingredient in any accurate understanding of drug toxicity [for example, David Healy’s Rxisk site].

One might think that cleaning up the corruption, the distortion, and the exaggeration might make RCTs the preferred standard in efficacy studies as well, but I’m not sure that’s totally right either. RCTs can be too sensitive – detecting small effects that are not clinically relevant even if they are statistically significant. They’re limited by their time frame, the outcome instruments, the subject recruitment and evaluation processes, and then there’s the placebo effect. The results are often hard to replicate, thus the increasing reliance on meta-analysis of multiple studies in evaluating overall efficacy.
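That over-sensitivity is easy to see with a back-of-the-envelope calculation. The sketch below uses entirely hypothetical numbers [not from any actual trial]: the same one-point mean difference on a depression rating scale [assuming a standard deviation of 7, typical of such instruments] is nowhere near significant in a small trial but becomes "highly significant" when the trial is simply made big enough – without becoming one bit more clinically meaningful.

```python
# Statistical vs clinical significance: a hypothetical illustration.
# The same 1-point difference on a rating scale (SD = 7) flips from
# "not significant" to "highly significant" purely by enlarging n.
import math

def two_sample_p(diff, sd, n_per_arm):
    """Two-sided p-value for a difference in group means,
    using the normal approximation with equal arms and equal SDs."""
    se = sd * math.sqrt(2.0 / n_per_arm)      # standard error of the difference
    z = diff / se                             # test statistic
    return math.erfc(abs(z) / math.sqrt(2.0)) # two-sided p-value

# A 1-point difference -- far below any plausible threshold of
# clinical relevance on a typical depression scale.
p_small = two_sample_p(diff=1.0, sd=7.0, n_per_arm=50)    # not significant
p_large = two_sample_p(diff=1.0, sd=7.0, n_per_arm=5000)  # "highly significant"

print(f"n=50 per arm:   p = {p_small:.3f}")
print(f"n=5000 per arm: p = {p_large:.2e}")
```

Nothing about the treatment changed between the two lines – only the sample size. That's the sense in which an RCT can "detect" an effect no patient would ever notice.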

The application of scientific principles in medicine is not like the strength of materials course in Civil Engineering training, no matter how much we’d like that to be true. There are too many parameters and inter-related forces at work for that kind of precision. That’s even more true in the region psychiatrists and psychotherapists haunt – the world of subjectivity. RCTs can sometimes add a degree of clarity, but sometimes not so much. In my specialty of psychiatry, the track record of RCTs is anything but exemplary. So I don’t think any single methodology is made out of gold or achieves level. That’s what makes things so confusing and leads to so much contention. It’s also what makes it all so incredibly interesting…
Mickey @ 2:00 PM

a paradigm….

Posted on Sunday 30 November 2014

I just wrote three posts reviewing the case of Dan Markingson, but I’m not going to even post them. We all know that case only too well. Over Thanksgiving, Carl Elliott posted a news video of a new whistleblower [INVESTIGATORS: Nurse questions integrity of U of M drug researchers], Niki Gjere, a nurse coordinator who was there when Dan was hospitalized. She was the person other nurses spoke to at the time. Her video report is definitive and damning. I’m not going to go through the whole case again because she confirms exactly what Carl Elliott said four years ago in The Deadly Corruption of Clinical Trials, which couldn’t be clearer. Dan was inappropriately put in a questionable experimercial trial [conducted for commercial rather than scientific reasons] under the most absurd of circumstances – an involuntarily committed patient declared incompetent who was allowed to volunteer to be in a trial so long as he took the medication in return for a less restrictive environment, when the trial endpoint was how long the patients would voluntarily stay on the medicine [which he couldn’t stop]. After six months with no improvement on the medication, he brutally killed himself. Why was he in such an obviously wrong and dangerous place? That the investigators were behind on their recruitment is the most likely answer [see almost inevitable…].

    par·a·digm  (păr′ə-dīm′)
    noun
    1. [technical] a typical example or pattern of something; a model.

The case of Dan Markingson is a paradigm representing something terrible, a period in our medical history when the scientific processes designed to evaluate medications for use in the treatment of illness were perverted and used for commercial purposes. Surely, with the addition of such strong testimony as that of Niki Gjere, the long avoided investigation of this case will finally become a reality. There are others: Paxil Study 329, a trial that fallaciously reported that a medication was effective and safe in childhood depression; Seroquel Study 15, a trial that was definitive but unpublished because the sponsor didn’t like the outcome. But these paradigmatic trials are just the tip of an iceberg of scientific misbehavior – all in the service of pharmaceutical profits. And the ticket into the scientific literature was the names of prominent academic physicians on the author by-line. Many of these articles were analyzed by the sponsoring corporations that stood to gain and deceptively presented by professional writers skilled in the art of spin.

In Dan’s case, he had a particularly virulent condition. His mind was dominated by an apocalyptic complex delusional system in which he believed he would be called on to be a killer. It was a classical presentation of paranoid schizophrenia or whatever you choose to call it now. But it’s often the least medication responsive and the most likely to be acted on. He needed a vigorous individualized treatment program aimed at what he had, not some commercial enterprise like the CAFE study offering a generic blinded trial of medication. There is no remotely rational justification for what happened in this case.

Below is the current posting on Carl Elliott’s blog, and I hope he/they can find a way through legal channels to finally break the logjam that is blocking this case from being brought into the light, becoming the paradigm it deserves to be for much needed change. It’s worth reading the whole letter:

An excerpt from Leigh Turner’s letter to Minnesota Attorney General Lori Swanson, after Tuesday’s revelations by Niki Gjere:

University officials like to emphasize that Dan Markingson died ten years ago. They claim his case is “old news.” However, if the numerous local citizens that have contacted Professor Carl Elliott and me are credible, and I believe they are, Dan Markingson is one person among numerous victims of psychiatric research misconduct. Some of these victims reportedly were harmed after Markingson’s death. Perhaps treating Markingson’s death in the CAFÉ study as ancient history not worth revisiting has enabled more patients to be harmed. Niki Gjere has the same concern. Asked  by Mr. Baillon about her “greatest fear in the current environment,” she responded, “That it continues to happen. That patients are continuing to be harmed, that nothing’s been fixed.” 

Responsible university presidents investigate reports of research misconduct. Unfortunately, President Kaler functions more as a college mascot – a Goldy Gopher in a suit and tie – than as an accountable leader. The University of Minnesota is never going to conduct an honest and thorough investigation of alleged psychiatric research misconduct. You need to do it. Any additional delays risk putting more vulnerable individuals at increased risk of harm.
Mickey @ 7:59 PM

along the way…

Posted on Friday 28 November 2014


British Medical Journal
by Tom Jefferson and Peter Doshi
27 Nov 2014
… On 24 November 2014, the European Medicines Agency [EMA] released a new “Guide on access to unpublished documents.” The guide follows in the steps of several other policy documents, charting the revolution from a closed shop to what is probably the most liberal experiment in regulatory data sharing on the planet.

The six page guide is written clearly, as you would expect from a document for “anyone” interested. The guide tells you how to apply for documents held by the EMA as part of the process of central pharmaceutical regulation. We learn that English is almost certainly the language of the released documents, but you can apply in any language of the European Union. Other sections tell you what will happen to your request, and what options you have in the event that your request is turned down or not answered in time.

Readers of the guide are warned that the release of large and complex documents may take place in batches over a long time. This certainly has been our experience and presents a serious problem for independent researchers who work on deadlines. We applied for clinical study reports from trials of a global public health intervention, and we’ve yet to see more than 10% of the text six months after the ball got rolling. The EMA tells us that the documents are in preparation, but so far there’s little meat.

Gone, it seems, are the good old days of data request. On 10 January 2011, we requested around 20 clinical study reports on Tamiflu. By the end of May, the EMA had sent us 25,000 pages of unredacted text. One presumes that the EMA’s workload has exploded since those early days. But is this the case? We await increased transparency on the to-ing and fro-ing with the marketing authorisation holders, to understand more precisely what happens as one waits. Where are the delays occurring? How can the system become more efficient? As we await answers to these questions, we think it’s time to address an equally serious problem: the lack of a menu.

While we applaud the EMA’s efforts to provide a guide that makes requesting documents easier, we are concerned that the guide does not tell us what’s on the menu at the EMA restaurant. For hungry people this is a bit of a problem, but even more so for the restaurateur…

So let’s have a list of holdings by marketing authorisation application, with the dates and types of documents held included. And while you are at it, please explain what’s in each document in plain language so that “everyone” can order the right dish.
Tom Jefferson and Peter Doshi shepherded the groundbreaking quest for the raw data on Tamiflu that ultimately led to the Cochrane Collaboration meta-analysis and its conclusion that the multibillion-dollar government stockpiling for an epidemic was ill advised. It has become the rallying point in the movement[s] clamoring for Data Transparency in medical clinical trials. In this blog post, they’re focusing on something the rest of us have no way of knowing much about – the process of actually getting the data in hand to reanalyze, and in particular, what data is available.

It’s easy to see conceptually why the consensus has settled on having the raw data as a powerful way to put a much-needed damper on the massive over-prescription and over-use of medication in medical practice. But it rests on the availability of independent scientists to do the analyses – probably gratis. And that, in turn, rests on how easy this data is to come by and work with. These clinical study submissions each have thousands of pages – challenging in their own right. But in this blog, Tom and Peter are talking about something that comes before anything: knowing what studies are available – apparently no easy task. Only then can one begin the application process [which this EMA guide apparently clarifies].

And it’s not just an issue with the EMA. The various pharmaceutical companies are creating their own unique versions of the conditions and modes of access that might be available for future independent analyses. So, as exciting as the recent advances have been in the fight for public access to the raw data from clinical trials, as is always the case – the devil is in the details. And in the best of circumstances, the process will be hard with long wait times. Even if the NIH/HHS’s recent pledge to make sure that the requirements for posting the results of Clinical Trials on clinicaltrials.gov supplies the advertised result, the trial sponsors still have a year in which to post the data, and even then, it isn’t exactly "raw."

A couple of years ago, Ben Goldacre famously ended his TED Talk on this topic by saying,
"I think that sunlight is the best disinfectant. All of these things are happening in plain sight, and they’re all protected by a force field of tediousness. And I think that with all of the problems in science, one of the best things we can do is to lift up the lid, finger around with the mechanics, and peer in."
No doubt about that, but Jefferson and Doshi have gone through the actual process of getting through that "force field of tediousness" and are taking Goldacre’s comments a step further by pointing to just one of the many difficult elements along the way.
Mickey @ 9:00 AM

happy thanksgiving…

Posted on Thursday 27 November 2014

Mickey @ 6:00 AM

ombudding…

Posted on Wednesday 26 November 2014


European ombudsman questions European Medicines Agency over AbbVie redactions
by Rory Watson
British Medical Journal. 2014 349:g6904.

The European ombudsman, Emily O’Reilly, has written to the European Medicines Agency asking it to explain by 31 January 2015 the redactions it made in the clinical trial data provided by AbbVie for its best selling drug adalimumab [Humira], used to treat rheumatoid arthritis. The redactions were made as part of the settlement between the agency and the company after AbbVie had gone to court to try to prevent publication of certain clinical trial data.

In her letter O’Reilly acknowledged that “certain redactions may be justified to protect the personal data of patients.” She also accepted that “certain other redactions, which mention the names of companies that provided services to AbbVie, or the names of software used by AbbVie, are not, in my view, problematic, as they may be considered to relate to the confidential business relationships of AbbVie.”

But after examining the original and the redacted versions of the clinical trial data, internal agency communications, and correspondence between the agency and the company, she expressed “doubts and concerns as regards other redactions.” O’Reilly, who opened an investigation into the case in April, identified 16 instances where she wanted the agency to explain why the redactions were necessary to protect AbbVie’s legitimate commercial interests…
I’m impressed with the position of European Ombudsman. She apparently has the necessary power to open an investigation of just about anything. This AbbVie suit and settlement is the one that got the whole EMA Data Transparency trajectory off-track. Ombudsman Emily O’Reilly goes on to question redactions that don’t seem to relate directly to either Patient Confidentiality or to Commercially Confidential Information. Kudos for her promptness in acting and the specific nature of her complaints. It’s only through speedy and focused investigations of this sort that we will make genuine progress in cleaning up the Clinical Trial process. We could use a US equivalent, an American Ombudsman, doing the same things here.
Mickey @ 9:00 AM

just a thought…

Posted on Tuesday 25 November 2014

While it’s not my usual fare, I saw something on Twitter that reminded me of a thought from long ago and I chased it down. When I was an Internist, it was the early days of treatment guidelines. They were beginning to show up frequently. They began to move into areas of preventive medicine – like when to treat high blood pressure, risk factors for heart disease, diabetes, cholesterol levels, etc. I didn’t like some of them, actually more than some. Many seemed like suggestions based on small differences, made by people with a monocular focus. Some didn’t bother me. I didn’t mind commenting on smoking, obesity, etc. – the usual suspects. But treating minor blood pressure elevations was anything but benign. The medications of the day made people sick, or at least feel bad. I had no real confidence that the recommendations were true representations of the patient’s real future. It wasn’t the reason I changed specialties, but I didn’t miss the increasingly stringent guidelines when I left. I guess I saw my job as treating sick people. As much as I respected the wisdom and miracles of preventive medicine, I thought many of the then-newer recommendations were pushing the preventive metaphor too far.

Here so many years later, that feeling is still with me. I don’t feel comfortable at all with the Statins for minimal lipid elevations, and don’t take them myself. Nowadays, there’s the added concern about industry interference. Today, on the way home from the clinic, I passed a pharmacy with multiple signs stuck by the curb advertising vaccines of many kinds, only one of which made sense to me. Last week at a routine physical, my doctor mentioned several that I question. I didn’t say anything [but the look on his face suggested that he was on to my skepticism]. It’s not a big deal to me, but I do think that the many recommendations and reports that are all over the evening’s news are often examples of very small differences – differences that don’t necessarily make a difference.

So on Twitter, I saw this graph from a recent Japanese study…
Low-Dose Aspirin for Primary Prevention of Cardiovascular Events in Japanese Patients 60 Years or Older With Atherosclerotic Risk Factors

[click for full text]
… published with this editorial. It was intended to be a 6.5-year study but was stopped at five years for futility – a term for going nowhere, not changing over time.

In the editorial, they review the earlier studies in detail in what seems to be a balanced way, and I leave that to you to read for yourself. In reading all of this, I thought about two things. First, the risk/benefit ratio in this case is clear – a baby aspirin a day is near no-risk for the overwhelming majority of people. It is unlikely that this issue gets into Conflicts of Interest or much in the way of industry interference [even if "baby aspirin" costs more than adult doses of aspirin these days].

My point isn’t about baby aspirin. I don’t take them myself, but if I had exertional chest pain, I might. If I ever have an oppressive substernal chest pain radiating down my left arm, I bet I’ll take Aspirin first, then call 911. My point is about the use of statistics in medicine in general. As doctors go, I’m moderately savvy about statistics – a numbers guy by nature who likes quantification in almost any flavor. But statistical predictions feel out of hand to me. My specialty, psychiatry, has gone through a very long period where small differences have often been magnified to an outrageous degree and mere statistical significance has been presented as a surrogate for true clinical relevance. It’s not.
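To make that last point concrete, here is a back-of-the-envelope sketch – my own illustration, not anything from the Japanese study or its editorial. The effect size and sample size below are invented; the point is only that with enough participants, a clinically trivial difference sails under the p < 0.05 bar:

```python
# Illustration: statistical significance is not clinical relevance.
# With a large enough sample, a tiny standardized effect becomes "significant".
import math

def z_for_mean_difference(cohens_d, n_per_group):
    """z statistic for a standardized mean difference (equal-n, two-arm z-test)."""
    return cohens_d * math.sqrt(n_per_group / 2)

def two_sided_p_from_z(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

d = 0.05        # a clinically negligible standardized effect (invented for illustration)
n = 10_000      # participants per arm, on the scale of large outcome trials

z = z_for_mean_difference(d, n)
p = two_sided_p_from_z(z)
print(f"Cohen's d = {d}, n = {n}/arm -> z = {z:.2f}, two-sided p = {p:.4f}")
# p lands well below 0.05, yet a d of 0.05 would be invisible at the bedside
```

Run it and the "significant" p-value drops out of sheer sample size – exactly the magnification of small differences described above.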

That Japanese study was on the news as expected. Should anyone change what they’re doing because of the news reports? I don’t think so. No one study is worth making changes unless it represents a real danger. But considering the general trends in reported research and the rapid dissemination of information in modern times, medicine as a whole would be well placed to spend some real time on the question of clinical relevance. We owe our patients help sifting through the vast amount of information that bombards us from all directions in both our prescribing and in education. My personal preference would be for there to be a specialty or an agency that evaluates all of this minute-to-minute reporting of preventive medicine advice and tries to separate the wheat from the chaff, with the specific charge of turning statistical significance into the more important parameter – clinical relevance, which is often a yes-no rather than a statistical question.
Mickey @ 9:36 PM

you couldn’t make this stuff up…

Posted on Monday 24 November 2014

So remember Guido Rasi, head of the European Medicines Agency, the guy who has been the architect of their policy on Data Transparency? [see European Medicines Agency: a timeline…, game on…]:
Regulatory Affairs Professional Society
By Alexander Gaffney, RAC
14 November 2014

In a major development, the head of the European Medicines Agency [EMA], Guido Rasi, has been forced to step down by the EU Civil Service Tribunal after adjudicators found that the European Commission had improperly selected him in 2011. The case against Rasi’s appointment was filed by Emil Hristov, formerly with the Bulgarian Drug Agency and a member of EMA’s board, who maintained that EMA and the European Commission had improperly assembled a short list of candidates for the position of EMA’s executive director after the controversial departure of former executive director Thomas Lönngren in 2010. Hristov had applied to the position, but after reaching the interview stage, was ranked last among eight candidates and was not selected to be on a "short list" of candidates presented to EMA’s Board of Directors as being the most suitable for the position. The assembly of that "short list" of candidates by a selection committee was improper, the court found, and the EMA board should have been able to consider the candidacy of Hristov, who they were familiar with.

Therefore, the selection of Rasi by the European Commission is to be annulled, the court ordered. The decision has immediate ramifications for EMA and Rasi, who has been removed from his position and replaced on an acting basis by Andreas Pott, now deputy executive director of EMA. “I note with regret today’s judgment by the European Union Civil Service Tribunal," said Professor Sir Kent Woods, chair of the Management Board, in a statement. "It is important to remember that the ruling is about a procedural formality. It is not a reflection on Guido Rasi’s competence or ability to run the Agency, something which he has done successfully since November 2011.”

Both EMA and the European Commission are scrambling to see if the decision can be overturned, they said. The departure of Rasi comes at a particularly tricky time for EMA, which is in the process of implementing a massive change in the way it treats clinical data transparency. Transparency has been one of the hallmarks of Guido’s tenure as EMA’s executive director, and his departure could very well derail some of his signature initiatives.
And he’s been quickly replaced…
Andreas Pott Again Tasked with Steadying EMA in Turbulent Times
Regulatory Affairs Professional Society
By Nick Paul Taylor
20 November 2014

Andreas Pott has a habit of being thrust into difficult situations. In 2010, Pott was appointed acting executive director of the European Medicines Agency (EMA) at a time when the regulator was dealing with allegations of conflicts of interest, disputes with the European Parliament and an early row over transparency. Now, Pott is once again in the hot seat during a period of turbulence at EMA. The factors behind Pott’s latest stint as acting executive director relate to the first time he took the role. Back then, Pott held the post during the protracted and messy transition between the reigns of Thomas Lönngren and Guido Rasi. Pott handed the position over to Rasi in late 2011. However, a tribunal has now ruled the European Commission [EC] failed to follow proper recruitment procedures, triggering the annulment of Rasi’s appointment.

Having been forced to remove Rasi from his post, EMA named Pott as acting director. Pott faces the task of implementing new clinical trial legislation, managing the ongoing dispute over data transparency and getting changes to medical device regulation underway. The fallout from disputes in the EC about the oversight of EMA and the resignation of the director-general for health could also affect the regulator. Pott has faced similar circumstances in the past. When Pott last served as acting executive director, he dealt with reports Lönngren was advising pharmaceutical companies, a request from a politician to fire an EMA expert, and a spat with the Nordic Cochrane Centre over the publication of clinical trial data, the Finnegan’s Take blog summarized in 2012. On that occasion, Pott held the post for one year. At this stage it is unclear how long his latest spell will last.
And if you want to read some historical pulp fiction [that happens to be true], don’t miss the Finnegan’s Take blog mentioned above. One might think these matters would be handled with a bit more decorum, but alas, scientists and regulators are people too. As best I can tell, this has little to do with Dr. Rasi, though who knows what’s coming next? It hasn’t made much difference at the EMA so far. Here are a few of last week’s and today’s releases from the EMA:
and an article from Ed Silverman:
Will this have an impact on the EMA Data Transparency policy? I don’t know. All speculations welcome…
Mickey @ 2:41 PM

promises, promises…

Posted on Sunday 23 November 2014

I didn’t know it when I started blogging, but this blog isn’t about psychiatry, it’s about honesty in medical science. And right now, for me that reduces down to what has come to be called Data Transparency. The reason for my sabbatical these last several weeks has to do with a project working with the actual raw data behind a published Clinical Trial – hopefully something that will become increasingly common in the future because it’s clear that the space between the Clinical Trials themselves and many published articles has remained corrupted in spite of 50 years of successive reforms.
 
Industry-funded Clinical Trials are often called "research," but a better term is "product testing" since lucrative profits are on the line depending on the outcome. In theory, the Clinical Trial is heavily structured. A Protocol defines how it will be conducted and what endpoint parameters will be analyzed to reach the final conclusions. The Subjects and Investigators are blinded to the medications until the blind is broken at the end of the actual trial, and the results are analyzed by the Authors according to the a priori Protocol specifications.

We all know that’s not what happens. The results are turned over to the corporate sponsor, which analyzes them and uses medical writers to produce the article. In many cases, the physicians and other scientists on the Author By-Line are primarily a ticket into a peer-reviewed journal, and have little if any input into the actual production of the article. And since the reader of the article can’t see any of the process that goes into the creation of what they’re reading, the data can be subtly manipulated to accentuate the positive and eliminate the negative. It not only can happen, it has happened in epidemic proportions – yielding enormous profits and, more importantly, resulting in doctors prescribing many ineffective or sometimes harmful medications. The time for simply decrying this state of affairs has long passed. It’s time to put a stop to it, but that’s proving to be a very thorny task.

Every solution to this problem starts with Data Transparency. Unless science at large has access to every part of that top figure, the problem will persist. And the industry that stands to gain has mounted a sustained resistance at every point along the road. Their public arguments are commercially confidential information and patient confidentiality, but that’s simply a smoke-screen for their real problem. Without the cover of darkness, many of the medications released in recent decades wouldn’t have made it to first base. There’s nothing scientific about the idea that new drugs are better, or that new drug discovery can proceed at a predictable pace. "A good drug is hard to find. You always get the other kind." – wouldn’t be a bad theme song for this industry.

In this last year, Data Transparency has finally made it to the front burner. This time last year, the European Medicines Agency [EMA] was well on the way to a sweeping policy change with unrestricted access to the data submitted to them, but as the year progressed, that move was systematically eroded, primarily by a coordinated campaign by industry, in spite of a growing clamor supporting Data Transparency. The policy as it now stands is a watered-down version of its original promise. Now, the NIH and FDA have initiated a new program [speaking of about time!…] – taking aim at the problem with another promise:
NIH Director’s Blog
by Drs. Kathy L. Hudson and Francis S. Collins
November 19, 2014

When people enroll in clinical trials to test new drugs, devices, or other interventions, they’re often informed that such research may not benefit them directly. But they’re also told what’s learned in those clinical trials may help others, both now and in the future. To honor these participants’ selfless commitment to advancing biomedical science, researchers have an ethical obligation to share the results of clinical trials in a swift and transparent manner.

But that’s not the only reason why sharing data from clinical trials is so important. Prompt dissemination of clinical trial results is essential for guiding future research. Furthermore, resources can be wasted and people may even stand to be harmed if the results of clinical trials are not fully disclosed in a timely manner. Without access to complete information about previous clinical trials — including data that are negative or inconclusive — researchers may launch similar studies that put participants at needless risk or expose them to ineffective interventions. And, if conclusions are distorted by failure to report results, incomplete knowledge can eventually make its way into clinical guidelines and, thereby, affect the care of a great many patients. Unfortunately, the timely public reporting of results has not been consistent across the clinical trials enterprise. For example, a recent analysis of 400 U.S. clinical trials found that even four years after the trials had been completed, nearly 30% had failed to share results by publishing in a scientific journal or reporting in ClinicalTrials.gov, a public database maintained by NIH.

Today, the Department of Health and Human Services [HHS] proposed a rule to require public sharing of key results — the summary data — from certain clinical trials of drugs and devices regulated by the Food and Drug Administration [FDA]. While such a mandate has been in place for several years, the proposed rule aims to clarify the requirements. Summary data would include: baseline characteristics of participants, primary and secondary outcome results, and information about adverse events. With a few exceptions, such results must be submitted to ClinicalTrials.gov within one year of the time when the trial completes collection of primary outcome data. But we at NIH are proposing to go one step further to address this important issue. Today, NIH released a draft policy for public comment to apply these data reporting requirements to all interventional clinical trials that it funds. We are committed to working with NIH-supported researchers and institutions to ensure the new responsibilities in this proposed policy are understood and any unanticipated obstacles are removed.

What all this really comes down to is trust. Clinical trial participants trust that the data they provide will be used by biomedical science to advance the health of many. Researchers seek to add to the body of biomedical knowledge and can respect this commitment by promptly sharing clinical trial data. There is also the important matter of public trust. American taxpayers trust NIH to be a good steward of their investment — and one way we can do that is to ensure that results of all our clinical trials are shared with the worldwide scientific community in an open and efficient manner. Data sharing is essential to turn even more scientific discoveries into better health at an even faster pace!
Unfortunately, the clinicaltrials.gov results database isn’t comprehensive [see bring on the hoops! at least the ones that matter]. But it’s a hell of a lot better than nothing, and this NIH proposal initiates something that has been woefully lacking – enforcement. Looking back through the history of all the many reforms attempting to keep this business of drug testing scientific and honest, it looks almost like a comedy of errors. I don’t think that’s right. Most of the reforms seem to me to be serious attempts at getting drug testing on the right path. Two things have been missing. The first is obvious – enforcement. With clinicaltrials.gov, they had a very right idea, but there was essentially no enforcement. The drug companies and others just ignored the law. That’s the really important part of this move by Dr. Collins [Director of the NIH], if he’s serious and follows through. The second thing is ongoing monitoring. This history has been written by many well-meaning congressmen, reformers, and agencies, but they haven’t stayed at it. Industry is always at it. So the crises pass and drug companies just keep on doing what they’ve been doing as soon as the dust settles. It’s time for a new clamor – enforcement and ongoing monitoring. Without those two things, it’s all just more empty promises…

What is the Results Database?
The ClinicalTrials.gov results database was launched in September 2008 to implement Section 801 of the Food and Drug Administration Amendments Act of 2007 [FDAAA 801] [PDF], which requires the submission of "basic results" for certain clinical trials, generally not later than 1 year after their Completion Date [see Primary Completion Date on ClinicalTrials.gov]. The submission of adverse event information was optional when the results database was released but became required in September 2009. Results information for registered and completed studies is submitted by the study sponsor or principal investigator in a standard, tabular format without discussions or conclusions. The information is considered summary information and does not include patient-level data. The results information that is submitted includes the following:
  • Participant Flow. A tabular summary of the progress of participants through each stage of a study, by study arm or comparison group. It includes the numbers of participants who started, completed, and dropped out of each period of the study based on the sequence in which interventions were assigned.
  • Baseline Characteristics. A tabular summary of the data collected at the beginning of a study for all participants, by study arm or comparison group. These data include demographics, such as age and gender, and study-specific measures (for example, systolic blood pressure, prior antidepressant treatment).
  • Outcome Measures and Statistical Analyses. A tabular summary of outcome measure values, by study arm or comparison group. It includes tables for each prespecified Primary Outcome and Secondary Outcome and may also include other prespecified outcomes, post hoc outcomes, and any appropriate statistical analyses.
  • Adverse Events. A tabular summary of all anticipated and unanticipated serious adverse events and a tabular summary of anticipated and unanticipated other adverse events exceeding a specific frequency threshold. For each serious or other adverse event, it includes the adverse event term, affected organ system, number of participants at risk, and number of participants affected, by study arm or comparison group.
ClinicalTrials.gov staff review results submissions to ensure that they are clear and informative prior to posting to the Web site. However, ClinicalTrials.gov cannot ensure scientific accuracy. Data providers are responsible for ensuring that submitted information is accurate and complete.
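Just to make the shape of those four "basic results" components easier to see, here is a sketch of them as simple data structures. This is purely illustrative – it is not the actual ClinicalTrials.gov submission schema, and every field name below is my own invention:

```python
# Hypothetical sketch of the four "basic results" components described above.
# Not the real ClinicalTrials.gov schema; all names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ParticipantFlow:
    arm: str              # study arm or comparison group
    started: int
    completed: int
    dropped_out: int

@dataclass
class OutcomeMeasure:
    title: str                        # e.g. a prespecified Primary or Secondary Outcome
    values_by_arm: Dict[str, float]   # summary values only – no patient-level data
    p_value: Optional[float] = None   # optional statistical analysis

@dataclass
class AdverseEvent:
    term: str
    organ_system: str
    serious: bool
    at_risk_by_arm: Dict[str, int] = field(default_factory=dict)
    affected_by_arm: Dict[str, int] = field(default_factory=dict)

@dataclass
class BasicResults:
    flow: List[ParticipantFlow]
    baseline: Dict[str, Dict[str, float]]   # characteristic -> arm -> value
    outcomes: List[OutcomeMeasure]
    adverse_events: List[AdverseEvent]
```

The key point the structure makes visible: everything is a tabular summary by arm, with no discussion, conclusions, or patient-level records anywhere in it.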

Commenting on the NPRM and proposed NIH Policy
The public may comment on any aspect of the NPRM or proposed NIH Policy. Written comments on the NPRM should be submitted to docket number NIH-2011-0003 at www.regulations.gov [not there as of today?]. Commenters are asked to indicate the specific section of the NPRM to which each comment refers. Written comments on the proposed NIH Policy should be submitted electronically to the Office of Clinical Research and Bioethics Policy, Office of Science Policy, NIH, via email at:
by mail at 6705 Rockledge Drive, Suite 750, Bethesda, MD 20892, or by fax at 301-496-9839. The agency will consider all comments in preparing the final rule and final NIH Policy.

UPDATE: This is the working link for comments:

Mickey @ 6:13 PM