Front page headlines last week proclaimed the safe return of anti-depressants to your child’s medicine chest. The cause for celebration was an article published in the Journal of the American Medical Association (JAMA).
The JAMA article was a meta-analysis—essentially a study of studies—of clinical trials of anti-depressants. The results of clinical trials, and in fact of most studies of health effects, are based on statistical inference. By pooling the results of many studies, a meta-analysis can uncover information that no individual study is large enough to reveal on its own.
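To make “a study of studies” concrete, here is a minimal sketch of the standard fixed-effect pooling step behind a meta-analysis: each study’s estimate is weighted by the inverse of its variance, so precise studies count for more. The effect sizes and standard errors below are invented for illustration and have nothing to do with the JAMA data.

```python
import math

def pool_fixed_effect(effects, std_errs):
    """Combine per-study effect estimates into one pooled estimate,
    weighting each study by the inverse of its variance."""
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: effect estimates (e.g. log odds ratios)
# and their standard errors -- made-up numbers, purely illustrative.
effects = [0.10, -0.05, 0.20]
std_errs = [0.15, 0.20, 0.25]

pooled, se = pool_fixed_effect(effects, std_errs)
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}")
```

The point of the sketch is only this: the pooled number is an average of averages, and it is only as good as the collection of studies that feed it.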
So the JAMA researchers set out to perform a meta-analysis to see whether anti-depressants increase the risk of suicide in children. The FDA, after much foot-dragging, had put a so-called black-box warning on anti-depressants because of studies showing a suicide risk among children taking these medications. Not to worry, the researchers tell us. Having looked at quite a collection of clinical trials, they teased out the truth, and the truth is that anti-depressants don’t increase the risk of suicide. They can prove it. They’ve got the numbers right there in the JAMA article.
What a relief.
Except for one small thing. Earlier this month, the Canadian Medical Association Journal carried an article about the misuse of meta-analysis and what’s called publication bias. When you perform a meta-analysis, you have to select which studies to include and which to exclude—lots of opportunity there for cooking the statistical books. But that’s not publication bias. Even if you select studies in a squeaky-clean way, you can only select from published research, and studies that find a striking result are far more likely to get published than studies that find nothing. That well-documented herd mentality skews the whole body of published research.
Luckily, there are statistical methods to detect publication bias. The authors of the Canadian journal article looked at a wide range of meta-analyses and found that in most cases the test either wasn’t done, wasn’t done correctly, or the researchers simply didn’t know how to interpret the test’s results.
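For readers curious what such a test looks like, here is a minimal sketch of Egger’s regression test, one common statistical check for funnel-plot asymmetry. It regresses each study’s standardized effect on its precision; an intercept far from zero suggests small studies are reporting suspiciously large effects. The numbers are invented for illustration, not drawn from any of the studies discussed here.

```python
def egger_intercept(effects, std_errs):
    """Egger's test sketch: regress standardized effect (y/se) on
    precision (1/se) by ordinary least squares and return the
    intercept. A nonzero intercept hints at publication bias."""
    z = [y / se for y, se in zip(effects, std_errs)]
    x = [1.0 / se for se in std_errs]
    n = len(x)
    mean_x = sum(x) / n
    mean_z = sum(z) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxz = sum((xi - mean_x) * (zi - mean_z) for xi, zi in zip(x, z))
    slope = sxz / sxx
    return mean_z - slope * mean_x

# Hypothetical pattern: the smallest (noisiest) studies report the
# biggest effects -- exactly the asymmetry publication bias produces.
effects = [0.40, 0.25, 0.15, 0.10]
std_errs = [0.30, 0.20, 0.12, 0.08]
print(f"Egger intercept = {egger_intercept(effects, std_errs):.2f}")
```

In a real analysis one would also compute a standard error and p-value for the intercept; the sketch shows only the core idea of the test.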
The JAMA authors state that they did such tests. They assure us that everything was just fine. Give us a call if you want the details.
I was suspicious.
Then I recalled another study from several years ago that showed that research conducted or funded by the pharmaceutical industry is systematically biased. The same researchers more recently published a study about how commercial interests, including pharmaceutical companies, influence what gets published in medical journals. It’s been quite an embarrassment—so much so that journals have become very keen on researchers disclosing their affiliations and funding sources.
No doubt as a consequence, the JAMA researchers listed the funding sources for each of the studies they included in their meta-analysis. Out of the 27 studies examined, 20 were entirely funded by a pharmaceutical company, 2 more were partially funded by a pharmaceutical company, 3 more had some pharmaceutical company involvement, leaving a grand total of 2 studies with no pharmaceutical company participation.
This is embarrassingly bad science. There is absolutely no reason anyone should take the JAMA article’s conclusions seriously. JAMA should be ashamed for making the big deal it did of this study, and newspapers should be even more ashamed for reporting the results as though they were God’s honest truth.
They’re trying to destroy our children’s souls. Stop it.