{"id":32880,"date":"2013-02-01T12:45:29","date_gmt":"2013-02-01T17:45:29","guid":{"rendered":"http:\/\/1boringoldman.com\/?p=32880"},"modified":"2013-02-01T20:59:30","modified_gmt":"2013-02-02T01:59:30","slug":"dodginess","status":"publish","type":"post","link":"https:\/\/1boringoldman.com\/index.php\/2013\/02\/01\/dodginess\/","title":{"rendered":"dodginess&#8230;"},"content":{"rendered":"<div>The article in the last post [<a href=\"http:\/\/1boringoldman.com\/index.php\/2013\/02\/01\/gone-missing\/\" target=\"_blank\">gone missing&hellip;<\/a>] compared the studies of the antidepressants submitted to the FDA with the ones that actually got published. But they looked at something else too, something beyond <em><strong><font color=\"#200020\">gone missing<\/font><\/strong><\/em>. They looked at the strength of the drug effect in the FDA version compared to the published versions. That moves us into the area of <em><strong><font color=\"#200020\">dodgy studies<\/font><\/strong><\/em> [studies that have been <em>dolled up<\/em>]. If I may quote myself [<a href=\"http:\/\/1boringoldman.com\/index.php\/2012\/04\/15\/an-anatomy-of-a-deceit-3-readin-and-writin-and-rithmetic\/\" target=\"_blank\">an anatomy of a deceit 3&hellip;<\/a>]:<\/div>\n<blockquote>\n<div align=\"justify\"><sup><strong>Almost everyone knows what a <em><font color=\"#200020\">p<\/font><\/em> value is &ndash; how <em><font color=\"#200020\">p<\/font><\/em>robable is it that an observed difference is just a sampling error rather than a real one? And we know <font color=\"#200020\"><em>p<\/em> &lt; 0.05<\/font> is good enough; <font color=\"#200020\"><em>p<\/em> &lt; 0.01<\/font> is real good; and <font color=\"#200020\"><em>p<\/em> &lt; 0.001<\/font> is great&#8230; But all the <em><font color=\"#200020\">p<\/font><\/em> value tells us is that there&rsquo;s a difference. It says nothing about the strength of that difference. 
A cup of coffee helps some headaches; three Aspirin tablets are a stronger remedy; and a narcotic shot is usually definitive. All are significant, but there&rsquo;s a big difference in the strength of the effect. There are three common ways to express the strength of the effect mathematically: the <font color=\"#200020\">Effect Size<\/font>, the <font color=\"#200020\">Number Needed to Treat<\/font>, and the <font color=\"#200020\">Odds Ratio<\/font>. Here&rsquo;s just a word about each of them:<\/strong><\/sup><\/div>\n<ul><sup><strong>\n<li>\n<div align=\"justify\"><font color=\"#200020\"><u>Effect Size<\/u>: <\/font>It&rsquo;s the difference in the mean values of the placebo group and the treatment group divided by the overall standard deviation [a measure of variability]. It makes intuitive sense. The greater the difference in the group means, the stronger the effect. The more the variability, the less the strength. Calculating it requires a lot of information and some fancy formulas, but the concept is simple. The <em><font color=\"#200020\">greater<\/font><\/em> the Effect Size, the stronger the treatment effect.<\/div>\n<\/li>\n<li>\n<div align=\"justify\"><font color=\"#200020\"><u>Number Needed to Treat<\/u>:<\/font> This is figured differently. You need to know what proportion of subjects in each group reached some predefined goal &ndash; like response or remission. So if 5% of the placebo group got over their headache in 2 hours and 55% responded in the same period to Aspirin, the NNT would equal <font color=\"#200020\">1 &divide; (0.55 &ndash; 0.05) = 1 &divide; 0.50 = 2<\/font>. The way you would say that is &quot;you need to treat two subjects to get one headache cure.&quot; Here, the <em><font color=\"#200020\">lower<\/font><\/em> the NNT, the stronger the treatment effect.
<\/div>\n<\/li>\n<li>\n<div align=\"justify\"><font color=\"#200020\"><u>Odds Ratio<\/u>: <\/font>The Odds Ratio uses the same parameters as the NNT. Using the above values: for placebo, the odds of getting relief would be <font color=\"#200020\">0.05 &divide; 0.95 = 0.0526<\/font>; for Aspirin, the odds would be <font color=\"#200020\">0.55 &divide; 0.45 = 1.22<\/font>. So the Odds Ratio is <font color=\"#200020\">1.22 &divide; 0.0526 = 23.2<\/font>. Obviously, the <em><font color=\"#200020\">greater<\/font><\/em> the Odds Ratio, the stronger the treatment effect.<\/div>\n<\/li>\n<\/strong><\/sup><\/ul>\n<\/blockquote>\n<div align=\"justify\">Ultimately, effect size is what a clinician really wants to know about a medication &#8211; is it likely to have a robust, clinically significant effect on the patient&#8217;s symptoms? But talking about <em><strong><font color=\"#200020\">effect size<\/font><\/strong><\/em> is easier than understanding it [<a target=\"_blank\" href=\"http:\/\/en.wikipedia.org\/wiki\/Effect_size\">1<\/a>][<a target=\"_blank\" href=\"http:\/\/www.uccs.edu\/lbecker\/effect-size.html\">2<\/a>][<a target=\"_blank\" href=\"http:\/\/www.leeds.ac.uk\/educol\/documents\/00002182.htm\">3<\/a>]. There&#8217;s Cohen&#8217;s <em>d<\/em> and Hedges&#8217; <em>g<\/em> and plenty else to make <em><strong><font color=\"#200020\">effect size<\/font><\/strong><\/em> the stuff of graduate school instead of blogs. For our purposes, 0.2 is a small effect, 0.5 is medium, and 0.8 is large. But all of this is debatable, and without the primary raw data, it has to be calculated indirectly. So it&#8217;s mainly used in meta-analyses, where its power lies &#8211; in comparisons. And that&#8217;s what they did in the study reported in <a href=\"http:\/\/1boringoldman.com\/index.php\/2013\/02\/01\/gone-missing\/\" target=\"_blank\">gone missing&hellip;<\/a>. 
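The three measures above boil down to a few lines of arithmetic. Here is a minimal Python sketch using the post's headache numbers [5% placebo response, 55% Aspirin response]; the group means and standard deviation fed to the Cohen's d function are made-up illustrative values, not from any study:

```python
# A sketch of the three effect-strength measures described above.
# Response rates are from the post's headache example; the means and
# standard deviation for cohens_d are made-up illustrative numbers.

def cohens_d(mean_treatment, mean_placebo, pooled_sd):
    """Effect Size: difference in group means over the pooled SD."""
    return (mean_treatment - mean_placebo) / pooled_sd

def nnt(rate_treatment, rate_placebo):
    """Number Needed to Treat: 1 over the absolute difference in response rates."""
    return 1 / (rate_treatment - rate_placebo)

def odds_ratio(rate_treatment, rate_placebo):
    """Odds Ratio: odds of response on treatment over odds of response on placebo."""
    odds_t = rate_treatment / (1 - rate_treatment)
    odds_p = rate_placebo / (1 - rate_placebo)
    return odds_t / odds_p

print(nnt(0.55, 0.05))            # 2.0 -- treat two subjects to get one extra cure
print(odds_ratio(0.55, 0.05))     # about 23.2
print(cohens_d(14.0, 10.0, 8.0))  # 0.5 -- a "medium" effect by the usual convention
```

Note that the NNT and the Odds Ratio come from the same two response rates, yet land on very different scales, which is one reason a single study can be made to look stronger or weaker depending on which measure gets reported.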
On the left, they compared the <em><strong><font color=\"#200020\">effect size<\/font><\/strong><\/em> between the unpublished and published studies. As you might expect, there were big differences, again showing the effect [and motive] of publication bias. But look on the right. Here&#8217;s what you&#8217;re seeing in their words:<\/div>\n<blockquote>\n<div align=\"justify\"><sup><strong>The effect-size values derived from the journal reports were often greater than those derived from the FDA reviews. The difference between these two sets of values was significant whether the studies (P=0.003) or the drugs (P=0.012) were used as the units of analysis [see Table D in the <a target=\"_blank\" href=\"http:\/\/www.nejm.org\/doi\/suppl\/10.1056\/NEJMsa065779\/suppl_file\/nejm_turner_252sa1.pdf\">Supplementary Appendix<\/a>].<\/strong><\/sup><\/div>\n<\/blockquote>\n<div align=\"justify\">The <em><strong><font color=\"#200020\">effect sizes<\/font><\/strong><\/em>:<\/div>\n<div align=\"center\"><img loading=\"lazy\" decoding=\"async\" width=\"520\" vspace=\"5\" height=\"474\" border=\"0\" src=\"http:\/\/1boringoldman.com\/images\/pub-bias-3.gif\" \/><\/div>\n<div align=\"justify\">So, to change metaphors for a moment: on the left, <em><strong><font color=\"#200020\">sins of omission<\/font><\/strong><\/em>; on the right, <em><strong><font color=\"#200020\">sins of commission<\/font><\/strong><\/em>. I called the latter &quot;<em>another kind of publication bias [effect size inflation]<\/em>&quot; in the last post, but that&#8217;s not quite right. 
It&#8217;s just another example of why we need to insist on the raw data in <a href=\"http:\/\/www.alltrials.net\/\" target=\"_blank\"><strong><font color=\"#990000\">All Trials<\/font><\/strong><\/a> &#8211; another mechanism for creating <em><strong><font color=\"#200020\">dodgy studies<\/font><\/strong><\/em> [as if there weren&#8217;t enough already].<\/div>\n<p align=\"justify\">I mentioned that this same group did a similar study in 2012 on the Atypical Antipsychotics [<a href=\"http:\/\/1boringoldman.com\/index.php\/2012\/03\/21\/at-least-that-much\/\" target=\"_blank\">at least that much&hellip;<\/a>]. There weren&#8217;t so many studies obviously <em><strong><font color=\"#200020\">gone missing<\/font><\/strong><\/em>, and the <em><strong><font color=\"#200020\">sins of commission<\/font><\/strong><\/em> weren&#8217;t so blatant. I&#8217;d love to say that since these are later studies, maybe things are improving integrity-wise. But I expect that it simply means that the Atypicals are more potent drugs than the Antidepressants. Their problem is in the area of toxicity rather than ineffectiveness. The <em><strong><font color=\"#200020\">dodginess<\/font><\/strong><\/em> jumped from efficacy to safety\/side effects.<\/p>\n<div align=\"justify\">Here are the comparable plots:<\/div>\n<div align=\"center\"><img loading=\"lazy\" decoding=\"async\" width=\"149\" vspace=\"5\" hspace=\"5\" height=\"312\" border=\"0\" src=\"http:\/\/1boringoldman.com\/images\/plos-2.gif\" \/>&nbsp;<img loading=\"lazy\" decoding=\"async\" width=\"260\" vspace=\"5\" hspace=\"5\" height=\"312\" border=\"0\" src=\"http:\/\/1boringoldman.com\/images\/plos-3.gif\" \/><\/div>\n<div align=\"justify\">I added these because the difference in effect sizes strikes me as right from my limited clinical experience using the Atypical drugs in Schizophrenia. And it points out something &#8211; the drugs introduced later are less potent. 
<em>New<\/em> doesn&#8217;t mean <em>better<\/em> in the world of <em>me-too<\/em>&#8230;<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The article in the last post [gone missing&hellip;] compared the studies of the antidepressants submitted to the FDA with the ones that actually got published. But they looked at something else too, something beyond gone missing. They looked at the strength of the drug effect in the FDA version compared to the published versions. That [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[2],"tags":[],"class_list":["post-32880","post","type-post","status-publish","format-standard","hentry","category-politics"],"_links":{"self":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts\/32880","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/comments?post=32880"}],"version-history":[{"count":15,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts\/32880\/revisions"}],"predecessor-version":[{"id":32904,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts\/32880\/revisions\/32904"}],"wp:attachment":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/media?parent=32880"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"h
ttps:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/categories?post=32880"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/tags?post=32880"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}