{"id":5251,"date":"2011-03-03T02:01:21","date_gmt":"2011-03-03T07:01:21","guid":{"rendered":"http:\/\/1boringoldman.com\/index.php\/"},"modified":"2011-03-03T02:01:21","modified_gmt":"2011-03-03T07:01:21","slug":"clinical-trials","status":"publish","type":"page","link":"https:\/\/1boringoldman.com\/index.php\/clinical-trials\/","title":{"rendered":"Clinical Trials"},"content":{"rendered":"<br \/>\n<blockquote>\n<div align=\"center\"><strong><a href=\"http:\/\/www.ncbi.nlm.nih.gov\/pubmed\/19947800\" target=\"_blank\"><font color=\"#200020\">Last-Observation-Carried-Forward Imputation Method in Clinical Efficacy Trials: Review of 352 Antidepressant Studies<\/font><\/a><br \/>       <font color=\"#200020\">Pharmacotherapy<\/font><\/strong> 2009;29(12):1408&ndash;1416<br \/>       by Stephen B. Woolley, D.Sc., Alex A. Cardoni, M.S.Pharm., and John W. Goethe, M.D.<\/div>\n<p>  <\/p>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Study Objective.<\/font><\/strong><\/u> To determine the prevalence, over 40 years, of using the last-observation-carried-forward (LOCF) imputation method in clinical trials, the association between use of LOCF and how the trials were conducted, and the extent of information about attrition and LOCF use in published reports.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Design.<\/font><\/strong><\/u> Retrospective analysis of the reports of randomized antidepressant efficacy trials published over a 40-year period (1965&ndash;2004).<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Data Sources.<\/font><\/strong><\/u> MEDLINE database, Cochrane reviews, reference- and bibliography-based manual search, and publication list services.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Measurements and Main Results.<\/font><\/strong><\/u> 
A total of 352 trials met the following criteria for analysis: antidepressant comparative efficacy trial, randomized design, patients with major depressive disorder, English-language article, published during 1965&ndash;2004, and first report of a trial. Design, attrition, and data analysis characteristics were recorded by investigators and trained assistants. Analyses included descriptive statistics of the trial size, duration, and number of patients who dropped out in LOCF versus non-LOCF studies, as well as the extent to which dropouts and the potential bias associated with attrition were discussed in the published report. The frequency of published antidepressant clinical trials increased from less than 1 trial\/year (1965&ndash;1974) to 19 trials\/year (1990&ndash;1994). Trials using the LOCF method were significantly larger than non-LOCF trials (p&lt;0.01), and the proportion of subjects dropping out was significantly greater (p&lt;0.05) in LOCF versus non-LOCF trials. The proportion of subjects dropping out remained relatively constant over time (~30%) but was significantly greater among LOCF (30.9%) than non-LOCF (28.8%) trials (p&lt;0.01). The LOCF study articles were more likely to report dropouts, but only 7% of these articles reported outcomes recorded for subjects before they dropped out. Fewer than 16% of articles discussed bias associated with dropouts, 6.8% discussed the direction of bias, and only about 2% suggested the magnitude of the bias.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Conclusion.<\/font><\/strong><\/u> The percentage of clinical antidepressant trials using the LOCF method and the percentage of study subjects&rsquo; data imputed by using LOCF increased many-fold during 1965&ndash;2004. 
Published reports of trials provided little information to allow readers to assess possible bias introduced by use of the LOCF method.<\/div>\n<\/blockquote>\n<div align=\"justify\"><strong><font color=\"#200020\">Conclusions and Recommendations:<\/font><\/strong> This study showed that the use of the LOCF method increased over time among antidepressant trials reported in publications from 1965&ndash;2004, in terms of both the percentage of trials using the technique and the percentage of all subjects in antidepressant trials who were in trials that used LOCF analysis. Furthermore, coinciding with these trends, trials increased in size over this period. As a result, the percentage of all subjects in all antidepressant trials over this period who were in trials analyzed using the LOCF method increased. Our analysis also showed that authors rarely provide extensive information to help readers assess the types and extent of possible bias introduced by use of LOCF analysis. The proportion of articles containing this type of information did not appreciably change over this period.<\/div>\n<p>   <\/p>\n<div align=\"justify\">These results, and the known limitations of all techniques to perfectly adjust for attrition, support several recommendations. First, every effort should be made to minimize attrition. To attempt to reduce the number of dropouts, investigators can design studies with fewer assessments, use computer or mail assessments to reduce the burden of getting to in-person assessments, and build other incentives for continued participation, including monetary and service incentives. Alternatively, efforts can be made to recontact patients who drop out. With such information from a subset of recruited subjects, investigators can explore the range of alternative explanations of and conclusions from collected data. Second, once faced with attrition, investigators should make efforts to assess the pattern of missing data. 
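The LOCF technique reviewed above is mechanically simple, which is part of why it became so widespread; a minimal sketch (the visit scores are invented toy data, not from the study) shows both the method and how it can bias an endpoint:

```python
# Minimal sketch of last-observation-carried-forward (LOCF) imputation.
# The depression-scale scores below are hypothetical, for illustration only.
def locf(scores):
    """Replace each missing value (None) with the last observed value."""
    imputed, last = [], None
    for s in scores:
        if s is not None:
            last = s  # remember the most recent real observation
        imputed.append(last)
    return imputed

# A subject improving by 3 points per visit who drops out after visit 3:
# the visit-3 score is frozen through the endpoint, understating (or, for
# a worsening subject, overstating) the true trajectory.
print(locf([24, 21, 18, None, None]))  # -> [24, 21, 18, 18, 18]
```

Mixed-effects models, by contrast, use all recorded visits to estimate each subject's trajectory rather than freezing the last value, which is why they are less vulnerable to this kind of bias.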
Depending on the pattern, investigators should choose techniques other than the LOCF method (e.g., mixed-effects modeling, which is less likely to introduce substantial bias). Third, in all cases, authors should explicitly describe the pattern of dropouts in published reports. Furthermore, they should suggest a likely effect of dropouts, and explain how they reached their conclusions. Specifically, we recommend that evidence cited should be clearly presented by comparison group and include when subjects dropped out, subjects&rsquo; characteristics that might be associated with either future unrecorded outcomes or reasons for leaving the study, the final recorded outcomes, and the trends in outcomes among dropouts. We further recommend that editors and publishers require inclusion of these elements in submitted manuscripts or from other sources such as Web sites.<\/div>\n<blockquote>\n<div align=\"center\"><a target=\"_blank\" href=\"http:\/\/www.ncbi.nlm.nih.gov\/pubmed?term=Why%20Olanzapine%20Beats%20Risperidone%2C\"><strong><font color=\"#200020\">Why Olanzapine Beats Risperidone, Risperidone Beats Quetiapine, and Quetiapine Beats Olanzapine: An Exploratory Analysis of Head-to-Head Comparison Studies of Second-Generation Antipsychotics<\/font><\/strong><\/a><br \/>   <strong><font color=\"#200020\">Am J Psychiatry<\/font><\/strong> 2006; 163:185&ndash;194<br \/>   by Stephan Heres, M.D., John Davis, M.D., Katja Maino, M.D., Elisabeth Jetzinger, M.D., Werner Kissling, M.D., Stefan Leucht, M.D. <\/div>\n<div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Objective:<\/font><\/strong><\/u> In many parts of the world, second-generation antipsychotics have largely replaced typical antipsychotics as the treatment of choice for schizophrenia. Consequently, trials comparing two drugs of this class&mdash;so-called head-to-head studies&mdash;are gaining in relevance. 
The authors reviewed results of head-to-head studies of second-generation antipsychotics funded by pharmaceutical companies to determine if a relationship existed between the sponsor of the trial and the drug favored in the study&rsquo;s overall outcome.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Method:<\/font><\/strong><\/u> The authors identified head-to-head comparison studies of second-generation antipsychotics through a MEDLINE search for the period from 1966 to September 2003 and identified additional head-to-head studies from selected conference proceedings for the period from 1999 to February 2004. The abstracts of all studies fully or partly funded by pharmaceutical companies were modified to mask the names and doses of the drugs used in the trial, and two physicians blinded to the study sponsor reviewed the abstracts and independently rated which drug was favored by the overall outcome measures. Two authors who were not blinded to the study sponsor reviewed the entire report of each study for sources of bias that could have affected the results in favor of the sponsor&rsquo;s drug.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Results:<\/font><\/strong><\/u> Of the 42 reports identified by the authors, 33 were sponsored by a pharmaceutical company. In 90.0% of the studies, the reported overall outcome was in favor of the sponsor&rsquo;s drug. This pattern resulted in contradictory conclusions across studies when the findings of studies of the same drugs but with different sponsors were compared. Potential sources of bias occurred in the areas of doses and dose escalation, study entry criteria and study populations, statistics and methods, and reporting of results and wording of findings. 
<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Conclusions:<\/font><\/strong><\/u> Some sources of bias may limit the validity of head-to-head comparison studies of second-generation antipsychotics. Because most of the sources of bias identified in this review were subtle rather than compelling, the clinical usefulness of future trials may benefit from minor modifications to help avoid bias. The authors make a number of concrete suggestions for ways in which potential sources of bias can be addressed by study initiators, peer reviewers of studies under consideration for publication, and readers of published studies.<\/div>\n<\/div>\n<\/blockquote>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Suggestions for Potential Improvement.<\/font><\/strong><\/u> Given the unique opportunities of industry for organizing methodologically sound, large-scale trials, the association between outcome and sponsor found in the rating of abstracts in our study is unsatisfactory. We believe, however, that in the case of many of the problematic points raised in the Results section, relatively simple measures could improve the situation to an appreciable extent.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Sponsorship and outcome as reported in the abstract.<\/font><\/strong><\/u> Our results show that reading only the abstract of a study is insufficient for a complete understanding of the study findings. However, lack of time makes it difficult even for scientific experts to read all trial reports in detail. Therefore, peer reviewers of studies being considered for publication should pay close attention to the conclusions stated in study abstracts. Overall, we found that the structure of the abstracts in the current review adhered to widely accepted standards, but the selection of the results and the phrasing used to convey the results needed to be carefully scrutinized. 
To avoid  bias in this crucial section of trial reporting, we suggest that peer  reviewers verify whether the abstract really summarizes the overall  results of the trial in a balanced way. Detailed guidelines in this area  for peer reviewers would be useful.   <\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Dose and dose escalation.<\/font><\/strong><\/u>  In head-to-head trials, dose ranges and escalation schemes have a major  effect on the outcome. To avoid potential bias, study initiators could  ask the competitor to provide a suggested dose range and titration  schedule for its compound, as the manufacturer of a drug knows its  properties best. Alternatively, external experts could function as  independent advisers, but they should then be named in the report as a  source of information on the dosing regimen. In addition, responsible  agencies such as the U.S. Food and Drug Administration (FDA) or the  European Medicines Agency (EMEA) might be given the chance to look at  the protocol before the study is begun in order to allow the correction  of obvious flaws.   <\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Entry criteria and study population.<\/font><\/strong><\/u>  Regarding study population and inclusion criteria, study initiators  should follow broadly accepted standards in the characterization of the  eligible patients. Diagnostic validity is hardly ever mentioned in  sponsored trials, and theoretically heterogeneous outcomes may be partly  due to the heterogeneity of the study population. The use of structured  clinical interviews may help identify the proper study population. For  example, a characterization of patients with predominantly negative  symptoms has been proposed. 
Defining a valid study population is essential in studies of patients with treatment-resistant illness that focus on the efficacy of antipsychotics, and other aspects of previous treatment discontinuation, such as medication intolerance, should not be used as alternative inclusion criteria. Otherwise, it is unclear which aspect is related to the superiority of a compound.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Statistics and methods.<\/font><\/strong><\/u> A comprehensive assessment of the statistical methods applied in the studies we reviewed is beyond the scope of this article. We therefore comment only on two points that came up several times during our review. In the last 5 years, noninferiority designs have become more common, leading to a major problem with the threshold of equivalence. It is hardly acceptable to consider the lower margin of the 95% confidence interval at a level of only 60% of the efficacy of the competitor to be a sign of noninferiority. As the trend toward this type of statistical design is likely to endure, an expert consensus on methods for setting the thresholds is needed. Other confusing aspects include the use of various test methods and lack of correction for multiple statistical tests in trials in which effects on cognitive function are examined. Recently, a guideline for standard test batteries for measuring cognition became available, and it could soon be followed by a consensus on the statistical methods that should be used in this field of research. In general, study initiators should define outcome parameters a priori and choose the appropriate correction method for multiple testing. If the correction method is applied to a subset of tests only, this fact should be explained. 
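The multiple-testing concern is easy to see numerically. A short sketch with invented p-values (not from any trial discussed here), using the classical Bonferroni correction as one concrete example of the kind of adjustment the authors call for:

```python
# Illustration of why uncorrected multiple tests inflate "significant"
# findings. The p-values are hypothetical, chosen only for the example.
p_values = [0.030, 0.004, 0.041, 0.20, 0.012]
alpha = 0.05

# Tested one by one against alpha, four of the five look significant.
uncorrected = [p < alpha for p in p_values]

# Bonferroni correction: compare each p-value against alpha / m, where m is
# the number of tests, so the family-wise error rate stays at most alpha.
m = len(p_values)
corrected = [p < alpha / m for p in p_values]

print(uncorrected)  # [True, True, True, False, True]
print(corrected)    # [False, True, False, False, False]
```

Bonferroni is deliberately conservative; the point is only that the chosen correction, and which tests it covers, changes which findings survive, which is why the authors ask for the method to be specified a priori.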
<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Reporting and wording of results.<\/font><\/strong><\/u> Wording and phrasing of study results are surely the most debatable sources of bias. The CONSORT (Consolidated Standards of Reporting Trials) statement, developed in the mid-1990s, proposed a checklist to ensure completeness of reporting and assessment of the validity of trial results. In addition, the International Committee of Medical Journal Editors set up a list of uniform requirements for manuscripts, including trial registration and complete reporting of all acquired data. The recommendations leave a considerable margin for wording and interpretation of the findings. Therefore, it is again the responsibility of peer reviewers for scientific journals to demand balanced reporting of the results. Readers of the trial reports should pay close attention to the choice of the primary outcome variables and to the presentation of the results in order to obtain a realistic impression of whether a new and unknown aspect of drug treatment, following the &ldquo;uncertainty principle,&rdquo; was observed or whether the study was designed to yield predictable results in favor of the sponsor&rsquo;s drug. The uncertainty principle states that a patient should be enrolled in a randomized, controlled trial only if there is substantial uncertainty about which of the treatments would benefit the patient most. For example, the appropriateness of a trial focused on weight gain is debatable if a sponsor&rsquo;s drug that is already known for its minor impact on weight is compared to a treatment previously shown to be highly likely to cause weight gain. 
The observation that only studies with significant findings tend to be published led Melander et al. to coin the phrase &ldquo;evidence b(i)ased medicine.&rdquo; It is noteworthy that a guideline for &ldquo;good publication practice&rdquo; has been proposed to help avoid further publication bias. Each protocol registered with the European Clinical Trial Database is issued a unique number, making trials traceable and missing reports conspicuous. Unfortunately, access to this information is limited to the study initiator and EMEA staff. The international Current Controlled Trials metaregister (www.controlled-trials.com) combines national as well as disease-specific registers, and each trial included in the register is assigned a specific number. The U.S. Freedom of Information Act mandates publicly accessible &ldquo;electronic reading rooms&rdquo; for materials available under that act, such as information about studies registered with the FDA. However, in our experience, the registers are not easy to browse.<\/div>\n<div align=\"justify\"><u><strong><font color=\"#200020\">Poster reports and multiple publishing.<\/font><\/strong><\/u> Publication of findings on different aspects of the same trial in several reports has been criticized as the &ldquo;salami strategy&rdquo; of scientific reporting. This criticism may not always be justified, because it is simply not feasible to report in one publication all the data from a large trial with several aspects of interest or a huge sample size. Readers&rsquo; understanding of the different aspects covered by the study can be enhanced if the masses of data are split into several reports. However, authors should always clearly state the source reference of the data that are presented. Otherwise, the reader might get the impression that several trials were undertaken, although in fact there was only one. 
A similar problem occurs if different researchers from the same trial are listed as the first author of various conference presentations or publications by the work group. Because many scientists have only limited time and choose the abstract as the primary information source, the underlying core study should always be mentioned in the abstract. Moreover, data presented exclusively in conference poster sessions or symposia, which normally do not undergo peer review, must be considered problematic.<\/div>\n<blockquote>\n<div align=\"center\"><strong><font color=\"#200020\">Those Who Have the Gold Make the Evidence: How the Pharmaceutical Industry Biases the Outcomes of Clinical Trials of Medications<br \/> Sci Eng Ethics<\/font><\/strong> 3 February 2011<br \/> by Joel Lexchin<\/div>\n<p> <\/p>\n<div align=\"justify\"><em><strong><font color=\"#200020\">Abstract<\/font><\/strong><\/em> Pharmaceutical companies fund the bulk of clinical research that is carried out on medications. Poor outcomes from these studies can have negative effects on sales of medicines. Previous research has shown that company-funded research is much more likely to yield positive outcomes than research with any other sponsorship. The aim of this article is to investigate the possible ways in which bias can be introduced into research outcomes by drawing on concrete examples from the published literature. Poorer methodology in industry-funded research is not likely to account for the biases seen. Biases are introduced through a variety of measures including the choice of comparator agents, multiple publication of positive trials and non-publication of negative trials, reinterpreting data submitted to regulatory agencies, discordance between results and conclusions, conflict-of-interest leading to more positive conclusions, ghostwriting and the use of &lsquo;&lsquo;seeding&rsquo;&rsquo; trials. 
Thus far, efforts to contain bias have largely focused on more stringent rules regarding conflict-of-interest (COI) and clinical trial registries. There is no evidence that any measures that have been taken so far have stopped the biasing of clinical research and it&rsquo;s not clear that they have even slowed down the process. Economic theory predicts that firms will try to bias the evidence base wherever its benefits exceed its costs. The examples given here confirm what theory predicts. What will be needed to curb and ultimately stop the bias that we have seen is a paradigm change in the relationship between pharmaceutical companies and the conduct and reporting of clinical trials.<\/div>\n<\/blockquote>\n<div align=\"justify\"><strong><font color=\"#200020\">Conclusion<\/font><\/strong>: In an unpublished paper the British economist Alan Maynard notes &lsquo;&lsquo;Economic theory predicts that firms will invest in corruption of the evidence base wherever its benefits exceed its costs. If detection is costly for regulators, corruption of the evidence base can be expected to be extensive. Investment in biasing the evidence base, both clinical and economic, in pharmaceuticals is likely to be detailed and comprehensive, covering all aspects of the appraisal process. Such investment is likely to be extensive as the scientific and policy discourses are technical and esoteric, making detection difficult and expensive.&rsquo;&rsquo; This article has shown that what Maynard predicted has a factual basis: pharmaceutical companies have used techniques leading to bias in the content of clinical research at every stage in its production. Defenders of the pharmaceutical industry have tried to minimize its role in biasing clinical research by pointing out that the pursuit of profits is not the only motivation for trying to influence the outcome and use of clinical research and that individuals, government and medical journals are equally guilty (Hirsch 2009). 
Hirsch is correct: bias can come from many sources, but no individual or organization has the resources and the ability to influence the entire process the way that the pharmaceutical industry can. In this respect, the industry is in a class of its own. We can reasonably ask that pharmaceutical companies not break the law in their pursuit of profits, but anything beyond that is not realistic. There is no evidence that any measures that have been taken so far have stopped the biasing of clinical research and it&rsquo;s not clear that they have even slowed down the process. What will be needed to curb and ultimately stop the bias is a paradigm change in the relationship between pharmaceutical companies and the conduct and reporting of clinical trials.<\/div>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"open","ping_status":"open","template":"","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"class_list":["post-5251","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/pages\/5251","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/comments?post=5251"}],"version-history":[{"count":5,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/pages\/5251\/revisions"}],"predecessor-version":[{"id":5288,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/pages\/5251\/revisions\/5288"}],"wp:attachment":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/media?parent=5251"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}