Tuesday, October 4, 2016

The Five-Year Legal Battle Against Bad Science

Chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis (ME), is a neuroimmune disease that affects 1-2.5 million Americans (17 million worldwide). The symptoms are widespread, ranging from memory issues and poor sleep to pain and swollen lymph nodes. The main symptom, of course, is severe fatigue following any kind of exertion, thought to stem from an abnormal immune response to exertion that makes it difficult for people with CFS/ME to recover. Though the cause of CFS/ME is unknown, some research suggests it can occur after a bacterial or viral infection.

The best treatment for CFS/ME? According to one study (the so-called PACE trial) published in the prestigious Lancet medical journal, it's cognitive behavioral therapy and exercise. But wait, wouldn't exercise be a really bad idea for people whose immune systems freak out at any kind of exertion (leaving some sufferers bedbound)? That's what a lot of people with CFS/ME said after the article came out and especially after the article influenced treatment recommendations from such places as the Centers for Disease Control and Prevention, Mayo Clinic, and Kaiser. And it turns out, those skeptical patients were right:
If your doctor diagnoses you with chronic fatigue syndrome, you’ll probably get two pieces of advice: Go to a psychotherapist and get some exercise. Your doctor might tell you that either of those treatments will give you a 60 percent chance of getting better and a 20 percent chance of recovering outright. After all, that’s what researchers concluded in a 2011 study published in the prestigious medical journal the Lancet, along with later analyses.

Problem is, the study was bad science. And we’re now finding out exactly how bad.

Under court order, the study’s authors for the first time released their raw data earlier this month. Patients and independent scientists collaborated to analyze it and posted their findings Wednesday on Virology Blog, a site hosted by Columbia microbiology professor Vincent Racaniello. The analysis shows that if you’re already getting standard medical care, your chances of being helped by the treatments are, at best, 10 percent. And your chances of recovery? Nearly nil.
In fact, that 10 percent number is based on a reanalysis by the original authors. The analysis by independent scientists found far worse results: 4.4 percent of exercise patients and 6.8 percent of cognitive therapy patients met the criteria for "recovered," compared to 3.1 percent of people who received neither treatment. None of these differences were statistically significant.
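As a rough check on that "not statistically significant" claim, here's a minimal two-proportion z-test in Python. The percentages (4.4%, 6.8%, 3.1%) come from the independent reanalysis; the arm sizes of ~160 are my assumption for illustration (the PACE arms were roughly that size, but the counts below are back-calculated from the percentages, not the trial's raw numbers).

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test statistic for x1/n1 vs. x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts assuming ~160 patients per arm (an assumption):
# 7/160 ~ 4.4% (exercise), 11/160 ~ 6.9% (CBT), 5/160 ~ 3.1% (neither).
z_exercise = two_prop_z(7, 160, 5, 160)
z_cbt = two_prop_z(11, 160, 5, 160)
print(round(z_exercise, 2), round(z_cbt, 2))  # -> 0.59 1.54
# Both well under 1.96, i.e. not significant at the 5% level --
# consistent with the reanalysis's conclusion.
```

With differences this small at these sample sizes, no reasonable test would call the treatments better than standard care alone.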

The issues with the study are widespread: lack of proper blinding, shifting definitions of recovery and improvement between the original protocol and the final analysis, and potentially invalid thresholds for physical functioning. Some critics even suggest the inclusion criteria are so poorly written that some participants in the study may not have CFS/ME at all. As Jonathan Edwards, a professor emeritus of medicine quoted in the article, put it, "They’ve set this trial up to give the strongest possible chance of there being a placebo effect that you can imagine."

And yet, this article passed a rigorous peer review process and was published in one of the top medical journals. The question the Lancet should be asking right now is "How?" Once they figure that out, they need to fix whatever problem they uncover in their system, because this episode seriously damages their credibility.

This is an interesting counterpoint to the arguments around Susan Fiske's attack on "methodological terrorists," which some (but not all) perceived as an attack on anyone who dares to criticize published research. In fact, the researchers in the PACE trial claimed they had received death threats - claims that appear to have been false - and the naysayers were referred to as a "vocal minority." However, I see no issue with what the lawsuit set out to do: get the researchers to release their deidentified raw data so that independent statisticians could reanalyze them. This is what good science is all about - replicability, not only in replicating a study but in reproducing results from the same dataset. And when the analysis approach might be invalid, this includes trying different approaches to see how sensitive the results are to, say, the cut-offs the researchers adopted. (In fact, we call this "sensitivity analysis" - does a different approach make a difference in the results?) Julie Rehmeyer, the author of the article linked above, agrees:
Watching the PACE trial saga has left me both more wary of science and more in love with it. Its misuse has inflicted damage on millions of ME/CFS patients around the world, by promoting ineffectual and possibly harmful treatments and by feeding the idea that the illness is largely psychological. At the same time, science has been the essential tool to repair the problem.
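To make "sensitivity analysis" concrete, here's a toy sketch: given a set of physical-function scores, watch how the fraction counted as "recovered" moves as the recovery cut-off changes. The scores below are synthetic (randomly generated, not PACE data), and the thresholds are only meant to echo the kind of mid-trial relaxation of the recovery cut-off that critics flagged.

```python
import random

random.seed(0)
# Synthetic physical-function scores on a 0-100 scale for one
# treatment arm -- illustration only, not PACE data.
scores = [random.gauss(58, 15) for _ in range(160)]

# Sensitivity analysis: sweep the "recovered" threshold and see how
# much the headline recovery rate depends on where you draw the line.
for threshold in (85, 80, 70, 60):
    rate = sum(s >= threshold for s in scores) / len(scores)
    print(f"threshold {threshold}: {rate:.0%} counted as recovered")
```

If the recovery rate swings wildly as the cut-off moves, the headline result is an artifact of the threshold choice rather than a robust finding - which is exactly the concern with redefining recovery after a trial is underway.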
