{"id":49624,"date":"2014-09-11T23:01:49","date_gmt":"2014-09-12T03:01:49","guid":{"rendered":"http:\/\/1boringoldman.com\/?p=49624"},"modified":"2014-09-12T07:48:48","modified_gmt":"2014-09-12T11:48:48","slug":"about-my-connectomes","status":"publish","type":"post","link":"https:\/\/1boringoldman.com\/index.php\/2014\/09\/11\/about-my-connectomes\/","title":{"rendered":"about my <em>connectomes<\/em>&#8230;"},"content":{"rendered":"\n<p align=\"justify\" class=\"small\"><em>While I haven&#8217;t thought about it very much, I made a move from the hardest of medical sciences to the softest without any transition. The first time around was in a lab with scintillation counters printing data to punch cards to feed into Fortran programs that cranked out ANOVA with p values. And then I was in the world of psychotherapy where there was little in the way of a control group [or for that matter &#8211; any groups], and validation was subjective at best &#8211; only clinical. There&#8217;s a gulf there that seems like it needs some of those connectomes Dr. Insel loves to talk about. <\/em><em><a href=\"http:\/\/www.nimh.nih.gov\/about\/director\/2010\/tracing-the-brains-connections.shtml\" target=\"_blank\"><img decoding=\"async\" width=\"180\" hspace=\"4\" align=\"right\" border=\"0\" src=\"http:\/\/1boringoldman.com\/images\/connectome.gif\" \/><\/a><\/em><em>But apparently I&#8217;m not the only person around with that kind of connectome problem. Many of my colleagues seem to obsess about clinical trials and their p values without addressing what matters &#8211; clinical relevance. So what! if a drug group in a clinical trial statistically separates from the placebo group if the people treated don&#8217;t even notice the difference. Large groups don&#8217;t come to our offices, individuals come. 
So the question is how to take those numbers that are generated in a clinical trial and turn them into something that really matters to the patients and doctors that inhabit those offices. Likely almost anyone reading this already knows what I&#8217;m about to say, but I&#8217;m about to say it anyway. So this is that post you might want to skip&#8230;<\/em><\/p>\n<div align=\"justify\" class=\"small\">This is the part I want to mention from <a target=\"_blank\" href=\"http:\/\/bjp.rcpsych.org\/content\/203\/3\/179.full\">Agomelatine efficacy and acceptability revisited: systematic review and meta-analysis of published and unpublished randomised trials<\/a>. As I said above, it may be old hat to many, but it&#8217;s still on the growth edge of my own understanding &#8211; how to get those numbers into something that has to do with clinical relevance &#8211; something that matters to actual people:              <\/div>\n<ul><font color=\"#200020\"><span class=\"small\"><\/p>\n<div align=\"justify\"><strong><font color=\"#660033\">The present systematic review found that acute treatment  with agomelatine is associated with a difference of 1.5 points on                         the HRSD. This difference was statistically  significant, although the clinical relevance of this small effect is  questionable.                         No research evidence or consensus is available  about what constitutes a clinically meaningful difference in HRSD  scores. Antidepressant                         research has recently faced the issues of [a] a  large number of studies reporting negative findings and [b] a possible  increase                         in placebo response rates, which may be caused  by changes in selection of study participants and how studies are  conducted.  Such changes might contribute to a reduction in the likelihood of  identifying drug effectiveness in antidepressant drug trials.                                                  
However, even with this consideration in mind,  it is plausible to agree with one of the agomelatine clinical trials that a difference of less than three HRSD points is unlikely to be clinically meaningful. Other publications have discussed                         a difference of two points as being clinically important,  but the effect of agomelatine in our review was also below this  threshold. Furthermore, it cannot be excluded that a 1.5-point                         difference may reflect a weak effect on  sleep-regulating mechanisms rather than a genuine antidepressant effect.                      <\/font><\/strong><\/div>\n<p>                                         <\/p>\n<div align=\"justify\">In a recent statement, the EMA  Committee for Medicinal Products for Human Use [CHMP] pointed out that,  in addition to a statistically                         significant effect in symptom scale scores, the  clinical relevance has to be confirmed by responder and remitter  analyses                         and that &lsquo;&#8230; results in the short-term trials  need to be confirmed in clinical trials, to demonstrate the maintenance  of                         effects&rsquo;.  For dichotomous outcomes, agomelatine was not superior to placebo in  terms of relapse and remission rates, but was statistically                         superior to placebo in terms of response rates.  The difference in response rates corresponds to an absolute risk  difference                         of 6% and to a number needed to treat [NNT] of  15. 
Based on an analysis of regulatory submissions,  which found an average difference of 16% in the response rates between  common antidepressants and placebo, EMA CHMP states                         that this difference &lsquo;&#8230; is considered to be  the lower limit of the pharmacological effect that would be expected in  clinical                         practice.&rsquo; Other authors  considered an NNT of ten or below as clinically relevant. Clearly, the  effect size in the present analysis is of doubtful                         clinical significance. This point is  strengthened by the fact that depression is a clinical condition for  which many active                         antidepressants are already available.<\/div>\n<p><\/span><\/font><\/ul>\n<div align=\"justify\" class=\"small\"><img decoding=\"async\" width=\"175\" hspace=\"4\" align=\"right\" border=\"0\" src=\"http:\/\/1boringoldman.com\/images\/stats-1.gif\" \/>Over there on the right is our old friend, Mr. Normal Distribution. If you take a large sample of almost anything, the values will look like this with the measurement variable across the bottom [abscissa] and the frequency of the values on the left axis [ordinate]. So you can describe a dataset that fits this distribution [most of them] with just three numbers: <strong><font color=\"#200020\">&mu;<\/font><\/strong> [the mean, average]; <strong><font color=\"#200020\">n<\/font><\/strong> [the number of observations]; and <strong><font color=\"#200020\">&sigma;<\/font><\/strong> [the standard deviation, an index of how much variability there is within the data]. 95% of the values fall between two standard deviations on either side of the mean. That means that measurements outside those limits have only a 5% chance of belonging to this group &#8211; thus <em><strong><font color=\"#200020\">p<\/font><\/strong><\/em> &lt; 0.05 [<em><strong><font color=\"#200020\">p <\/font><\/strong><\/em>as in <em>probability<\/em>].            
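Those three numbers are easy to see in action. A toy sketch in Python [the scores below are simulated purely for illustration — they don't come from any trial]:

```python
import random
import statistics

random.seed(0)
# Simulated HAM-D-style scores for a large sample -- purely illustrative
scores = [random.gauss(18, 4) for _ in range(10_000)]

mu = statistics.mean(scores)      # the mean [average]
n = len(scores)                   # the number of observations
sigma = statistics.stdev(scores)  # the standard deviation

# For a normal distribution, about 95% of the values fall within
# two standard deviations on either side of the mean
within = sum(mu - 2 * sigma <= x <= mu + 2 * sigma for x in scores) / n
print(f"mu={mu:.1f}  n={n}  sigma={sigma:.1f}  within 2 sigma: {within:.0%}")
```

The within-two-sigma figure comes out near 95%, which is the flip side of p &lt; 0.05: a value outside those limits had less than a 5% chance of coming from this distribution.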
<\/div>\n<p align=\"justify\" class=\"small\"><img decoding=\"async\" vspace=\"4\" hspace=\"4\" align=\"left\" border=\"0\" src=\"http:\/\/1boringoldman.com\/images\/stats-2.gif\" \/>In the examples on the left, all three might be statistically significant, but looking at the <em><strong><font color=\"#200020\">p <\/font><\/strong><\/em>value won&#8217;t tell you anything about the likelihood it will be a property noticed by individual patients. It might be simply a chemical finding that&#8217;s imperceptible, or it might be a power-house. But <em><strong><font color=\"#200020\">p<\/font><\/strong><\/em> doesn&#8217;t tell you that. Regulatory bodies, like the Food and Drug Administration [FDA], were established to keep inactive <em>patent medicines<\/em> off the medical market, not to direct drug usage or attest to the magnitude of their effects. So the FDA insists on two well-conducted Clinical Trials that demonstrate statistically significant differences between the drug and a placebo. The main task of the FDA Approval process is safety &#8211; what are the drug&#8217;s potential harms? Efficacy is a softer standard, added on in 1962.<\/p>\n<div align=\"justify\" class=\"small\">There are mathematical ways to use the data generated in a Clinical Trial to get at something more worth knowing &#8211; how strong is the effect of the drug? Just looking at the figures on the left demonstrates one such method &#8211; the <strong><font color=\"#200020\">Mean Difference<\/font><\/strong> between the two groups. How far apart are their <strong><font color=\"#200020\">&mu;<\/font><\/strong> values, measured in this example as units on the HAM-D Scale? That could be used to compare different studies if they all used the HAM-D [but they don&#8217;t]. Another problem: what if the variability [<strong><font color=\"#200020\">&sigma;<\/font><\/strong>] of one group is very different from that of the other group? 
So they came up with a way to correct the <strong><font color=\"#200020\">Mean Difference<\/font><\/strong> by dividing it by the pooled standard deviation [of the whole sample]&#8230;<\/div>\n<div align=\"center\"><strong><font color=\"#200020\">(&mu;<sub>1<\/sub> &#8211; &mu;<sub>2<\/sub>) &divide; &sigma;<sub>1+2<\/sub><\/font><\/strong><\/div>\n<div align=\"justify\" class=\"small\">&#8230; converting it into <em>standard deviation units<\/em>. It&#8217;s called the <strong><font color=\"#200020\">Standardized Mean Difference<\/font><\/strong> or <strong><font color=\"#200020\">Cohen&#8217;s <em>d<\/em><\/font><\/strong> or sometimes something else. It can be used for divergent populations or even studies using different rating scales, if they purport to measure the same parameter [eg HAM-D, MADRS, CDRS-R, K-SADS-L &#8211; all depression scales]. These are variants of measurements of the <strong><font color=\"#200020\">Effect Size<\/font><\/strong> &#8211; how strong is the effect of the measured difference? <\/div>\n<p align=\"justify\" class=\"small\"><img loading=\"lazy\" decoding=\"async\" width=\"180\" vspace=\"4\" hspace=\"4\" height=\"194\" align=\"right\" border=\"0\" src=\"http:\/\/1boringoldman.com\/images\/stats-3.gif\" \/>There are no absolute standards for a meaningful <strong><font color=\"#200020\">Standardized Mean Difference<\/font><\/strong>. Cohen suggested 0.2 = small; 0.5 = medium; 0.8 = large. But you could gather a bunch of statisticians and they could argue about that for the whole evening. The place where <strong><font color=\"#200020\">Effect Sizes<\/font><\/strong> are routinely used is in meta-analyses that compare multiple studies and\/or multiple drugs. I guess you could say it&#8217;s a powerful <em>relativity<\/em> tool, often combined with a representation of the 95% Confidence Interval. 
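That formula is simple enough to compute directly. A minimal sketch [the group means, standard deviations, and sizes here are made up for illustration, roughly in the range these trials report]:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized Mean Difference: the mean difference divided by
    the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical endpoint HAM-D scores: placebo group vs drug group,
# with the drug group 1.5 points lower and a typical spread of 8
d = cohens_d(mean1=14.5, sd1=8.0, n1=150,
             mean2=13.0, sd2=8.0, n2=150)
print(d)  # -> 0.1875
```

Notice that a 1.5-point HAM-D difference divided by a plausible standard deviation of about 8 [an assumption here] lands right around an SMD of 0.19 &#8211; the Mean Difference and the Standardized Mean Difference are the same finding expressed in different units.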
In the example on the right from the article quoted above, the <strong><font color=\"#200020\">Standardized Mean Difference<\/font><\/strong> is on the abscissa, with the Confidence Intervals as the horizontal lines [in this example, the 99% Confidence Intervals were used]. This is the format, called a forest plot, used in the Cochrane Systematic Reviews, and it tells us a lot. The top 5 studies are unpublished Clinical Trials of agomelatine vs placebo. The weighted average is 0.08 in favor of agomelatine [which might as well be zero], and none are significant at the 1% level [the Confidence Interval line crosses zero]. The bottom 5 studies are published studies of agomelatine. The weighted average is 0.26, with only one significant at the 1% level. Overall, the weighted average <strong><font color=\"#200020\">Standardized Mean Difference<\/font><\/strong> is 0.18. In everyday language, a <em>trivial<\/em> effect with clear <em>publication bias<\/em>.<\/p>\n<p align=\"justify\" class=\"small\">So why go through with all of this simplified statistical mumbo jumbo? It&#8217;s because these seasoned Cochrane <em>meta-analyzers<\/em> take a very credible stab at translating their findings into the realm of clinical relevance in the colored paragraph above. Whether you use the <strong><font color=\"#200020\">Standardized Mean Difference<\/font><\/strong> of 0.18 evaluated by the values I quoted above, or the HAM-D <strong><font color=\"#200020\">Mean Difference<\/font><\/strong> of 1.5 HAM-D units as they did, these studies may be statistically significant at <em><strong><font color=\"#200020\">p<\/font><\/strong><\/em> &lt; 0.05 [the published ones are], but they are able to conclude that it is <u>not a clinically relevant difference<\/u>, particularly when you look at the studies <font color=\"#200020\">Servier<\/font> neglected to let us see. 
<strong><font color=\"#200020\">So there&#8217;s my mythical <em>connectome<\/em> between the numeric part of my brain and the part that practices clinical medicine.<\/font><\/strong> Very satisfying. <\/p>\n<p align=\"justify\" class=\"small\">So why don&#8217;t we see <strong><font color=\"#200020\">Effect Sizes<\/font><\/strong> plastered all over these Clinical Trials that have flooded our journals? They use the same data as the statistical calculations that are invariably prominently displayed. Would you publish them if your main goal was to sell agomelatine? Probably not because nobody would be very excited about either prescribing it or taking it. You&#8217;d display <strong><font color=\"#200020\">Effect Sizes<\/font><\/strong> and their <strong><font color=\"#200020\">Confidence Intervals <\/font><\/strong>if you wanted to give clinicians and patients as accurate as possible a notion of how effective the medication might be in relieving the targeted symptom &#8211; pending the later results of the reported responses in our offices where clinical medicine meets real live people in pain.<\/p>\n<div align=\"justify\" class=\"small\">I&#8217;ve way overly-simplified some of this, probably didn&#8217;t get it 100% right, and left out how you measure <strong><font color=\"#200020\">Effect Sizes<\/font><\/strong> in categorical variables [eg response vs non-response] using the Odds Ratio or the NNT [Number Needed to Treat] [mentioned in the second paragraph above]. All I wanted to do is illustrate how this group is able to go beyond the simple statistical analysis found in most of these Clinical Trials by giving us some information that might help <em>us<\/em> in the actual task at hand [<em>us<\/em> being the clinician and a help-seeking patient]&#8230;<\/div>\n","protected":false},"excerpt":{"rendered":"<p>While I haven&#8217;t thought about it very much, I made a move from the hardest of medical sciences to the softest without any transition. 
The first time around was in a lab with scintillation counters printing data to punch cards to feed into Fortran programs that cranked out ANOVA with p values. And then I [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[2],"tags":[],"class_list":["post-49624","post","type-post","status-publish","format-standard","hentry","category-politics"],"_links":{"self":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts\/49624","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/comments?post=49624"}],"version-history":[{"count":42,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts\/49624\/revisions"}],"predecessor-version":[{"id":49666,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/posts\/49624\/revisions\/49666"}],"wp:attachment":[{"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/media?parent=49624"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/categories?post=49624"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/1boringoldman.com\/index.php\/wp-json\/wp\/v2\/tags?post=49624"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}