Clinical Trials


Last-Observation-Carried-Forward Imputation Method in Clinical Efficacy Trials: Review of 352 Antidepressant Studies
Pharmacotherapy
2009;29(12):1408–1416
by Stephen B. Woolley, D.Sc., Alex A. Cardoni, M.S.Pharm., and John W. Goethe, M.D.

Study Objective. To determine the prevalence, over 40 years, of using the last-observation-carried-forward (LOCF) imputation method in clinical trials, the association between use of LOCF and how the trials were conducted, and the extent of information about attrition and LOCF use in published reports.
Design. Retrospective analysis of the reports of randomized antidepressant efficacy trials published over a 40-year period (1965–2004).
Data Sources. MEDLINE database, Cochrane reviews, reference- and bibliography-based manual search, and publication list services.
Measurements and Main Results. A total of 352 trials met the following criteria for analysis: antidepressant comparative efficacy trial, randomized design, patients with major depressive disorder, English-language article, published during 1965–2004, and first report of a trial. Design, attrition, and data analysis characteristics were recorded by investigators and trained assistants. Analyses included descriptive statistics of trial size, duration, and number of dropouts in LOCF versus non-LOCF studies, as well as the extent to which dropouts and the potential bias associated with attrition were discussed in the published report. The frequency of published antidepressant clinical trials increased from less than 1 trial/year (1965–1974) to 19 trials/year (1990–1994). Trials using the LOCF method were significantly larger than non-LOCF trials (p<0.01), and the proportion of subjects dropping out was significantly greater (p<0.05) in LOCF than in non-LOCF trials. The proportion of subjects dropping out remained relatively constant over time (~30%) but was significantly greater among LOCF (30.9%) than non-LOCF (28.8%) trials (p<0.01). The LOCF study articles were more likely to report dropouts, but only 7% of these articles reported outcomes recorded for subjects before they dropped out. Less than 16% of articles discussed bias associated with dropouts, 6.8% discussed the direction of bias, and only about 2% suggested the magnitude of the bias.
Conclusion. The percentage of clinical antidepressant trials using the LOCF method and the percentage of study subjects’ data imputed by using LOCF increased many-fold during 1965–2004. Published reports of trials provided little information to allow readers to assess possible bias introduced by use of the LOCF method.
Conclusions and Recommendations: This study showed that use of the LOCF method increased among antidepressant trials published from 1965–2004, in terms of both the percentage of trials using the technique and the percentage of all antidepressant trial subjects who were in trials that used LOCF analysis. Furthermore, trials increased in size over this period, so the percentage of all subjects whose data were analyzed with the LOCF method rose as well. Our analysis also showed that authors rarely provide enough information to help readers assess the types and extent of possible bias introduced by LOCF analysis, and the proportion of articles containing this type of information did not appreciably change over the period.
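The LOCF technique at the center of this review is mechanically simple, which is part of both its appeal and its danger. The following minimal Python sketch (not from the article; the subjects, visit schedule, and depression scores are invented for illustration) shows how a dropout's last recorded score is carried forward to the endpoint:

```python
import pandas as pd

# Hypothetical long-format trial data: one row per subject per scheduled visit.
# NaN marks visits missed after a subject dropped out.
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "week":    [0, 4, 8, 0, 4, 8],
    "hamd":    [24.0, 18.0, 15.0, 26.0, 20.0, None],  # depression rating scores
})

# LOCF: within each subject, carry the last observed score forward
# into every later missing visit.
data["hamd_locf"] = data.groupby("subject")["hamd"].ffill()

# The week-8 endpoint analysis now treats subject 2's week-4 score (20)
# as if it had been observed at week 8. If dropout is related to
# treatment response, this is exactly the bias the authors describe.
print(data[data["week"] == 8])
```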

These results, and the fact that no technique can perfectly adjust for attrition, support several recommendations. First, every effort should be made to minimize attrition. To reduce the number of dropouts, investigators can design studies with fewer assessments, use computer or mail assessments to reduce the burden of attending in-person assessments, and build in other incentives for continued participation, including monetary and service incentives. Alternatively, efforts can be made to recontact patients who drop out; with outcome information from even a subset of these subjects, investigators can explore the range of alternative explanations of, and conclusions from, the collected data. Second, once faced with attrition, investigators should assess the pattern of missing data and, depending on that pattern, choose techniques other than the LOCF method (e.g., mixed-effects modeling, which is less likely to introduce substantial bias; a sketch follows this paragraph). Third, in all cases, authors should explicitly describe the pattern of dropouts in published reports, suggest the likely effect of the dropouts, and explain how they reached that conclusion. Specifically, we recommend that the evidence cited be presented clearly by comparison group and include when subjects dropped out, subjects' characteristics that might be associated with either future unrecorded outcomes or with reasons for leaving the study, the final recorded outcomes, and the trends in outcomes among dropouts. We further recommend that editors and publishers require inclusion of these elements in submitted manuscripts or from other sources such as Web sites.
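The alternative the authors point to, mixed-effects modeling, can be fit to the observed data without any imputation. A minimal sketch under stated assumptions (the statsmodels formula interface; the file name and the columns subject, arm, week, and hamd are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data from the same kind of trial; dropouts simply
# contribute fewer rows, and no values are filled in.
df = pd.read_csv("trial_long.csv")  # assumed columns: subject, arm, week, hamd

# Linear mixed-effects model: fixed effects for treatment arm, week, and
# their interaction; a random intercept per subject. All observed visits
# are used, and the estimates remain valid under the missing-at-random
# assumption -- a weaker requirement than the one LOCF implicitly makes.
model = smf.mixedlm("hamd ~ arm * week", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```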
Why Olanzapine Beats Risperidone, Risperidone Beats Quetiapine, and Quetiapine Beats Olanzapine: An Exploratory Analysis of Head-to-Head Comparison Studies of Second-Generation Antipsychotics
Am J Psychiatry 2006; 163:185–194
by Stephan Heres, M.D., John Davis, M.D., Katja Maino, M.D., Elisabeth Jetzinger, M.D., Werner Kissling, M.D., Stefan Leucht, M.D.
Objective: In many parts of the world, second-generation antipsychotics have largely replaced typical antipsychotics as the treatment of choice for schizophrenia. Consequently, trials comparing two drugs of this class—so-called head-to-head studies—are gaining in relevance. The authors reviewed results of head-to-head studies of second-generation antipsychotics funded by pharmaceutical companies to determine if a relationship existed between the sponsor of the trial and the drug favored in the study's overall outcome.
Method: The authors identified head-to-head comparison studies of second-generation antipsychotics through a MEDLINE search for the period from 1966 to September 2003 and identified additional head-to-head studies from selected conference proceedings for the period from 1999 to February 2004. The abstracts of all studies fully or partly funded by pharmaceutical companies were modified to mask the names and doses of the drugs used in the trial, and two physicians blinded to the study sponsor reviewed the abstracts and independently rated which drug was favored by the overall outcome measures. Two authors who were not blinded to the study sponsor reviewed the entire report of each study for sources of bias that could have affected the results in favor of the sponsor's drug.
Results: Of the 42 reports identified by the authors, 33 were sponsored by a pharmaceutical company. In 90.0% of the studies, the reported overall outcome was in favor of the sponsor’s drug. This pattern resulted in contradictory conclusions across studies when the findings of studies of the same drugs but with different sponsors were compared. Potential sources of bias occurred in the areas of doses and dose escalation, study entry criteria and study populations, statistics and methods, and reporting of results and wording of findings.
Conclusions: Some sources of bias may limit the validity of head-to-head comparison studies of second-generation antipsychotics. Because most of the sources of bias identified in this review were subtle rather than compelling, the clinical usefulness of future trials may benefit from minor modifications to help avoid bias. The authors make a number of concrete suggestions for ways in which potential sources of bias can be addressed by study initiators, peer reviewers of studies under consideration for publication, and readers of published studies.
Suggestions for Potential Improvement
Given the unique opportunities of industry for organizing methodologically sound, large-scale trials, the association between outcome and sponsor found in the rating of abstracts in our study is unsatisfactory. We believe, however, that in the case of many of the problematic points raised in the Results section, relatively simple measures could improve the situation to an appreciable extent.
Sponsorship and outcome as reported in the abstract. Our results show that reading only the abstract of a study is insufficient for a complete understanding of the study findings. However, lack of time makes it difficult even for scientific experts to read all trial reports in detail. Therefore, peer reviewers of studies being considered for publication should pay close attention to the conclusions stated in study abstracts. Overall, we found that the structure of the abstracts in the current review adhered to widely accepted standards, but the selection of the results and the phrasing used to convey the results needed to be carefully scrutinized. To avoid bias in this crucial section of trial reporting, we suggest that peer reviewers verify whether the abstract really summarizes the overall results of the trial in a balanced way. Detailed guidelines in this area for peer reviewers would be useful.
Dose and dose escalation. In head-to-head trials, dose ranges and escalation schemes have a major effect on the outcome. To avoid potential bias, study initiators could ask the competitor to provide a suggested dose range and titration schedule for its compound, as the manufacturer of a drug knows its properties best. Alternatively, external experts could function as independent advisers, but they should then be named in the report as a source of information on the dosing regimen. In addition, responsible agencies such as the U.S. Food and Drug Administration (FDA) or the European Medicines Agency (EMEA) might be given the chance to look at the protocol before the study is begun in order to allow the correction of obvious flaws.
Entry criteria and study population. Regarding study population and inclusion criteria, study initiators should follow broadly accepted standards in the characterization of the eligible patients. Diagnostic validity is hardly ever mentioned in sponsored trials, and theoretically heterogeneous outcomes may be partly due to the heterogeneity of the study population. The use of structured clinical interviews may help identify the proper study population. For example, a characterization of patients with predominantly negative symptoms has been proposed. Defining a valid study population is essential in studies of patients with treatment-resistant illness that focus on the efficacy of antipsychotics, and other aspects of previous treatment discontinuation, such as medication intolerance, should not be used as alternative inclusion criteria. Otherwise it is unclear which aspect is related to the superiority of a compound.
Statistics and methods. A comprehensive assessment of the statistical methods applied in the studies we reviewed is beyond the scope of this article. We therefore comment only on two points that came up several times during our review. In the last 5 years, noninferiority designs have become more common, leading to a major problem with the threshold of equivalence. It is hardly acceptable to declare noninferiority when the lower margin of the 95% confidence interval is permitted to fall to only 60% of the competitor's efficacy. As the trend toward this type of statistical design is likely to endure, an expert consensus on methods for setting the thresholds is needed. Other confusing aspects include the use of various test methods and the lack of correction for multiple statistical tests in trials in which effects on cognitive function are examined. Recently, a guideline for standard test batteries for measuring cognition became available, and it could soon be followed by a consensus on the statistical methods that should be used in this field of research. In general, study initiators should define outcome parameters a priori and choose the appropriate correction method for multiple testing. If the correction method is applied to only a subset of tests, this fact should be explained.
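The authors' complaint about lenient noninferiority thresholds is easy to see in numbers. A minimal sketch (responder counts are invented, and the simple Wald confidence interval used here is only one of several methods seen in practice) showing how the choice of margin alone can decide the verdict:

```python
from math import sqrt

# Hypothetical responder counts: new drug vs. active comparator.
resp_new, n_new = 112, 200   # 56% responders
resp_cmp, n_cmp = 120, 200   # 60% responders

p_new, p_cmp = resp_new / n_new, resp_cmp / n_cmp
diff = p_new - p_cmp

# Wald 95% confidence interval for the difference in response rates.
se = sqrt(p_new * (1 - p_new) / n_new + p_cmp * (1 - p_cmp) / n_cmp)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

# Noninferiority is declared only if the CI's lower bound clears the
# prespecified margin. A lenient margin (tolerating a 15-point deficit
# in response rate) is met by these data; a stricter 5-point margin is not.
for margin in (-0.15, -0.05):
    verdict = "noninferior" if lower > margin else "not demonstrated"
    print(f"margin {margin:+.2f}: 95% CI ({lower:.3f}, {upper:.3f}) -> {verdict}")
```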
Reporting and wording of results. Wording and phrasing of study results are surely the most debatable sources of bias. The CONSORT (consolidated standards of reporting trials) statement, developed in the mid-1990s, proposed a checklist to ensure completeness of reporting and assessment of the validity of trial results. In addition, the International Committee of Medical Journal Editors set up a list of uniform requirements for manuscripts, including trial registration and complete reporting of all acquired data. The recommendations leave a considerable margin for wording and interpretation of the findings. Therefore, it is again the responsibility of peer reviewers for scientific journals to demand balanced reporting of the results. Readers of trial reports should pay close attention to the choice of the primary outcome variables and to the presentation of the results in order to obtain a realistic impression of whether a new and unknown aspect of drug treatment (following the "uncertainty principle") was observed or whether the study was designed to yield predictable results in favor of the sponsor's drug. The uncertainty principle states that a patient should be enrolled in a randomized, controlled trial only if there is substantial uncertainty about which of the treatments would benefit the patient most. For example, the appropriateness of a trial focused on weight gain is debatable if a sponsor's drug already known for its minor impact on weight is compared to a treatment previously shown to be highly likely to cause weight gain. The observation that only studies with significant findings tend to be published led Melander et al. to coin the phrase "evidence b(i)ased medicine." It is noteworthy that a guideline for "good publication practice" has been proposed to help avoid further publication bias. Each protocol registered with the European Clinical Trial Database is issued a unique number, making trials traceable and missing reports conspicuous. Unfortunately, access to this information is limited to the study initiator and EMEA staff. The international Current Controlled Trials metaregister (www.controlled-trials.com) combines national as well as disease-specific registers, and each trial included in the register is assigned a specific number. The U.S. Freedom of Information Act mandates publicly accessible "electronic reading rooms" for materials available under the act, such as information about studies registered with the FDA. However, in our experience, the registers are not easy to browse.
Poster reports and multiple publishing. Publication of findings on different aspects of the same trial in several reports has been criticized as the "salami strategy" of scientific reporting. This criticism may not always be justified, because it is simply not feasible to report in one publication all the data from a large trial with several aspects of interest or a huge sample size. Readers' understanding of the different aspects covered by the study can be enhanced if the masses of data are split into several reports. However, authors should always clearly state the source reference of the data that are presented. Otherwise, the reader might get the impression that several trials were undertaken, although in fact there was only one. A similar problem occurs if different researchers from the same trial are listed as the first author of various conference presentations or publications by the work group. Because many scientists have only limited time and choose the abstract as the primary information source, the underlying core study should always be mentioned in the abstract. Moreover, data presented exclusively in conference poster sessions or symposia, which normally do not undergo peer review, must be considered problematic.
Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications
Sci Eng Ethics
3 February 2011
by Joel Lexchin

Abstract: Pharmaceutical companies fund the bulk of clinical research that is carried out on medications. Poor outcomes from these studies can have negative effects on sales of medicines. Previous research has shown that company-funded research is much more likely to yield positive outcomes than research with any other sponsorship. The aim of this article is to investigate the possible ways in which bias can be introduced into research outcomes by drawing on concrete examples from the published literature. Poorer methodology in industry-funded research is not likely to account for the biases seen. Biases are introduced through a variety of measures including the choice of comparator agents, multiple publication of positive trials and non-publication of negative trials, reinterpreting data submitted to regulatory agencies, discordance between results and conclusions, conflict-of-interest leading to more positive conclusions, ghostwriting, and the use of "seeding" trials. Thus far, efforts to contain bias have largely focused on more stringent rules regarding conflict-of-interest (COI) and clinical trial registries. There is no evidence that any measures that have been taken so far have stopped the biasing of clinical research, and it is not clear that they have even slowed down the process. Economic theory predicts that firms will try to bias the evidence base wherever its benefits exceed its costs. The examples given here confirm what theory predicts. What will be needed to curb and ultimately stop the bias that we have seen is a paradigm change in the relationship between pharmaceutical companies and the conduct and reporting of clinical trials.
Conclusion: In an unpublished paper, the British economist Alan Maynard notes: "Economic theory predicts that firms will invest in corruption of the evidence base wherever its benefits exceed its costs. If detection is costly for regulators, corruption of the evidence base can be expected to be extensive. Investment in biasing the evidence base, both clinical and economic, in pharmaceuticals is likely to be detailed and comprehensive, covering all aspects of the appraisal process. Such investment is likely to be extensive as the scientific and policy discourses are technical and esoteric, making detection difficult and expensive." This article has shown that what Maynard predicted has a factual basis: pharmaceutical companies have used techniques that bias the content of clinical research at every stage of its production. Defenders of the pharmaceutical industry have tried to minimize its role in biasing clinical research by pointing out that the pursuit of profits is not the only motivation for trying to influence the outcome and use of clinical research, and that individuals, government, and medical journals are equally guilty (Hirsch 2009). Hirsch is correct that bias can come from many sources, but no individual or organization has the resources and the ability to influence the entire process the way the pharmaceutical industry can. In this respect, the industry is in a class of its own. We can reasonably ask that pharmaceutical companies not break the law in their pursuit of profits, but anything beyond that is not realistic. There is no evidence that any measures taken so far have stopped the biasing of clinical research, and it is not clear that they have even slowed the process. What will be needed to curb and ultimately stop the bias is a paradigm change in the relationship between pharmaceutical companies and the conduct and reporting of clinical trials.