by Susanna Every-Palmer and Jeremy Howick
Journal of Evaluation in Clinical Practice. 2014 May 12. [Epub ahead of print]
Evidence-based medicine [EBM] was announced in the early 1990s as a ‘new paradigm’ for improving patient care. Yet there is currently little evidence that EBM has achieved its aim. Since its introduction, health care costs have increased while there remains a lack of high-quality evidence suggesting EBM has resulted in substantial population-level health gains. In this paper we suggest that EBM’s potential for improving patients’ health care has been thwarted by bias in the choice of hypotheses tested, manipulation of study design and selective publication. Evidence for these flaws is clearest in industry-funded studies. We argue EBM’s indiscriminate acceptance of industry-generated ‘evidence’ is akin to letting politicians count their own votes. Given that most intervention studies are industry funded, this is a serious problem for the overall evidence base. Clinical decisions based on such evidence are likely to be misinformed, with patients given less effective, harmful or more expensive treatments. More investment in independent research is urgently required. Independent bodies, informed democratically, need to set research priorities. We also propose that evidence rating schemes are formally modified so research with conflict of interest bias is explicitly downgraded in value.

hat tip to pharmagossip…
It’s about integrating individual clinical expertise and the best external evidence
by David L Sackett, William M C Rosenberg, JA Muir Gray, R Brian Haynes, and W Scott Richardson
British Medical Journal. 1996 312:71-72.
Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice… By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centred clinical research into the accuracy and precision of diagnostic tests [including the clinical examination], the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens…
Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient…
Evidence based medicine is not “cookbook” medicine. Because it requires a bottom up approach that integrates the best external evidence with individual clinical expertise and patients’ choice, it cannot result in slavish, cookbook approaches to individual patient care. External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision. Similarly, any external guideline must be integrated with individual clinical expertise in deciding whether and how it matches the patient’s clinical state, predicament, and preferences, and thus whether it should be applied. Clinicians who fear top down cookbooks will find the advocates of evidence based medicine joining them at the barricades…
Some fear that evidence based medicine will be hijacked by purchasers and managers to cut the costs of health care. This would not only be a misuse of evidence based medicine but suggests a fundamental misunderstanding of its financial consequences…Evidence based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions… It is when asking questions about therapy that we should try to avoid the non-experimental approaches, since these routinely lead to false positive conclusions about efficacy. Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the “gold standard” for judging whether a treatment does more good than harm…
If EBM were the revolutionary movement it was hailed as, we would expect more than benefits demonstrated in specific cases. We would expect population-level health gains, such as those that occurred after the introduction of antibiotics, improved sanitation and smoking cessation. Unfortunately, there is little evidence that EBM has had such effects.
The story so far suggests improved patient outcomes, with EBM identifying superior treatments to replace less effective alternatives. However, the reality is different. Ten years after atypicals had saturated the market, large independent trials known by the acronyms CATIE [Clinical Antipsychotic Trials of Intervention Effectiveness], CUtLASS [Cost Utility of the Latest Antipsychotic Drugs in Schizophrenia Study], and EUFEST [European First Episode Schizophrenia Trial] demonstrated that the atypical agents are in fact no more effective, no better tolerated and less cost effective than their typical predecessors.
In relation to depression, independent meta-analyses pooling unpublished as well as published data now show that SSRIs are no more effective than placebo in treating mild-to-moderate depression, the condition for which they have been most commonly prescribed.

So how is it that for over a decade the evidence convinced us that these treatments were superior? How could there have been ‘an evidence myth constructed from a thousand randomized trials’, and how did we fall for it?
It is beyond the scope of this paper to discuss practical solutions in great detail; however, we make the following suggestions:
- The sensible campaign to formalize and enforce measures ensuring the registration and reporting of all clinical trials [see http://www.alltrials.net/] should be supported – otherwise trials that do not give the answer industry wants will remain unpublished.
- More investment in independent research is required. As we have described, it is a false economy to indirectly finance industry-funded research through the high costs of patented pharmaceuticals.
- Independent bodies, informed democratically, need to set research priorities.
- Individuals and institutions conducting independent studies should be rewarded for the methodological quality of their studies and not for whether they manage to get a positive result [a ‘negative’ study is as valuable as a ‘positive’ one from a scientific point of view].
- Risk of bias assessment instruments such as the Cochrane risk of bias tool should be amended to include funding source as an independent item.
- Evidence-ranking schemes need to be modified to take the evidence about industry bias into account. There are already mechanisms within EBM evidence-ranking schemes to up- or downgrade evidence based on risk of bias. For example, the Grading of Recommendations Assessment, Development and Evaluation [GRADE] system allows for upgrading observational evidence.