Saturday, October 22, 2016

How Statisticians Solve Disagreements

I'm currently taking an online course on meta-analysis, a set of statistical and methodological techniques that allow you to combine multiple studies on a topic and generate an estimate (or set of estimates) of the true effect. It's almost like crowd-sourcing data - you're taking advantage of all the work others have done and capitalizing on the strength of having an increased number of participants, different treatment methods, and so on. I did a candidacy exam in grad school on meta-analysis, and have conducted one before (on pretrial publicity effects), so I know a bit about the topic. This course is devoted to using R, an open-source statistical program with powerful analysis and graphing capabilities, to conduct a meta-analysis.
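
To make that concrete, here's a minimal sketch of what a meta-analysis looks like in R. I'm using the metafor package as one example (not necessarily what the course will use), and the numbers below are invented purely for illustration:

# install.packages("metafor")
library(metafor)

# Hypothetical summary data from five two-group studies: mean, SD,
# and sample size for the treatment group (m1i, sd1i, n1i) and the
# control group (m2i, sd2i, n2i).
dat <- data.frame(
  m1i = c(24, 30, 22, 27, 25), sd1i = c(5, 6, 5, 7, 6), n1i = c(30, 25, 40, 20, 35),
  m2i = c(20, 26, 21, 22, 21), sd2i = c(5, 6, 5, 7, 6), n2i = c(30, 25, 40, 20, 35)
)

# Convert each study to a standardized mean difference with its
# sampling variance, then pool the studies with a random-effects model.
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
res <- rma(yi, vi, data = dat)
summary(res)   # pooled effect, confidence interval, heterogeneity
forest(res)    # forest plot of the per-study and pooled estimates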

For the first week, we were assigned to read up on the R packages we'll be using, as well as an article by the creator of meta-analysis, Gene Glass. I've read some of Glass's work before, but for some reason hadn't encountered this article until now - it tells the story of why meta-analysis was created. In addition to wanting to contribute to the field, and have a good topic to introduce in his Presidential Address to the American Educational Research Association, Glass really developed it to solve a disagreement.

Glass, like many grad students, left grad school with a brand new PhD and a case of depression. He found his way into psychotherapy and was so pleased with his progress that he began studying clinical psychology and became psychotherapy's biggest fan. However, another researcher, Hans Eysenck, became psychotherapy's biggest critic, constantly arguing that any apparent effects were merely placebo effects:
I found this conclusion personally threatening—it called into question not only the preoccupation of about a decade of my life but my scholarly judgment (and the wisdom of having dropped a fair chunk of change) as well. I read Eysenck's literature reviews and was impressed primarily with their arbitrariness, idiosyncrasy and high-handed dismissiveness. I wanted to take on Eysenck and show that he was wrong: psychotherapy does change lives and make them better.
Glass goes through the decisions Eysenck made in conducting his literature review on the subject, and it's easy to see why, based on these decisions, Eysenck concluded psychotherapy was ineffective - or rather, it's easy to see that because Eysenck strongly believed going in that psychotherapy was ineffective, he looked for evidence that supported his conclusion and ignored evidence that refuted it. First, he refused to include any research that was not published in a peer-reviewed journal, even studies that undergo another form of review, such as dissertations, theses, or conference presentations. But there is good reason to believe the published literature is itself biased: journals favor statistically significant results, so limiting a review to peer-reviewed articles tilts it toward whatever is easiest to publish.
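
Today we'd call this publication bias, and checking for it is now a standard part of a meta-analysis. Continuing the metafor sketch from above (res is the random-effects model fit there - and again, this is just one way to do it):

# Funnel plot: each study's effect size against its standard error.
# Asymmetry can hint that small, unflattering studies never made it
# into print.
funnel(res)

# Egger's regression test for funnel plot asymmetry.
regtest(res)

# Trim-and-fill: estimates how the pooled effect would shift if the
# apparently "missing" studies were filled in.
trimfill(res)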

Next, he eliminated any study that didn't have a control group (a group that received no treatment). So if a study compared two forms of therapy, it was tossed out. This left only 11 studies. He then did a vote count, which involves tallying up the number of studies finding a significant difference and the number finding no significant difference. "All that Eysenck considered worth noting about an experiment was whether the differences reached significance at the .05 level. If it reached significance at only the .07 level, Eysenck classified it as showing 'no effect for psychotherapy.'"
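
Vote counting is exactly the kind of thing meta-analysis was invented to replace, because a set of studies that each miss the .05 cutoff can still add up to a clearly significant pooled effect. A quick illustration with made-up numbers - five identical small studies, each non-significant on its own:

library(metafor)

# Five hypothetical studies, each with effect size d = 0.4 and the
# sampling variance you'd expect with about 20 participants per group.
yi <- rep(0.4, 5)
vi <- rep(0.102, 5)

# Each study alone: z = 0.4 / sqrt(0.102) = 1.25, p = .21. Vote
# counting scores all five as "no effect for psychotherapy."
round(2 * pnorm(-abs(yi / sqrt(vi))), 2)

# Pooled in a fixed-effects meta-analysis, the same five studies give
# z = 2.80, p = .005 - a clearly significant effect.
summary(rma(yi, vi, method = "FE"))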

And finally, here's the real gem: if he didn't like the outcome a study used (that is, he considered it subjective), he discounted the finding, and if a study found differences for one outcome but not for a second one, he also discounted it, calling it "inconsistent." This was the case even if one of the outcomes was something that might show only a small change due to therapy, such as GPA, versus an outcome that would show a big difference, such as a measure of symptom severity. Eysenck's review didn't take effect sizes into account at all: which outcomes would show big differences after psychotherapy and which would show small ones.
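
Glass's own solution was to put every outcome on a common scale: his effect size measure (now called Glass's delta) divides the treatment-control mean difference by the control group's standard deviation, so a GPA outcome and a symptom severity outcome become directly comparable. A tiny sketch, again with invented numbers:

# Glass's delta: standardize the mean difference by the control
# group's SD, so outcomes measured on different scales become
# comparable SD units.
glass_delta <- function(m_treat, m_control, sd_control) {
  (m_treat - m_control) / sd_control
}

# Hypothetical outcomes from one study: GPA barely moves, while
# symptom severity (lower = fewer symptoms) drops a full SD.
glass_delta(m_treat = 3.1, m_control = 3.0, sd_control = 0.5)  # 0.2, small
glass_delta(m_treat = 12, m_control = 20, sd_control = 8)      # -1.0, large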

And that's where meta-analysis comes in:
Looking back on it, I can almost credit Eysenck with the invention of meta-analysis by anti-thesis. By doing everything in the opposite way that he did, one would have been led straight to meta-analysis. Adopt an a posteriori attitude toward including studies in a synthesis, replace statistical significance by measures of strength of relationship or effect, and view the entire task of integration as a problem in data analysis where "studies" are quantified and the resulting data-base subjected to statistical analysis, and meta-analysis assumes its first formulation. (Thank you, Professor Eysenck.)
So the TL;DR is, how do statisticians solve disagreements? They create new statistics, and then publish pithy articles where they thank the person they disagreed with. Love. It.
