Thursday, April 5, 2018

E is for Effect Sizes

Today is the first in a three-part series this month on how to conduct meta-analysis using R, plus a fourth Statistics Sunday post tying it all together. As I mentioned in yesterday's post, we'll be using the metafor package. If you didn't install the package then, you'll want to install it now.

install.packages("metafor")
## Installing package into '\\marge/users$/slocatelli/My Documents/R/win-library/3.4'
## (as 'lib' is unspecified)
library(metafor)
## Loading required package: Matrix
## Loading 'metafor' package (version 2.0-0). For an overview 
## and introduction to the package please type: help(metafor).

For this post, we'll focus on computing effect sizes from summary data. If this sounds like Greek to you, or if you need a refresher, you'll want to first review this post on meta-analysis and this post introducing effect sizes. But to briefly summarize: to conduct a meta-analysis, you take summary information from individual studies and convert it into an effect size. You collect these from multiple studies on the same topic to estimate the true effect size - the value that all of these individual studies, with their different methods, samples, and (yes) flaws, are trying to estimate. Later on, we'll cover how to aggregate those individual study effect sizes.

Let's assume, since all studies are looking at the same topic, that they report the same kind of summary data that we convert into an effect size. In practice, this isn't always true, which is when you have to get into converting between effect sizes. But that's an advanced topic.

Meta-analyses frequently use one of three types of effect sizes: the correlation, the standardized mean difference (which requires means and standard deviations for the 2 groups or time points being compared), or the odds ratio (or a similar metric for binary data). In some medical meta-analyses, you might also look at event counts over time, such as the number of strokes occurring in a set period. If you're meta-analyzing correlations, studies often give you your effect size directly, though there are corrections you may want to apply. But if you're meta-analyzing mean differences, ratios, or counts, you have to pull in the summary statistics and convert them to your effect size.

The metafor package handles these calculations easily with the escalc (effect size calculation) function. You state in the function call what kind of measure you want, and that determines what data the function needs.
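For example, if your studies report correlations, the call can be a single line. The sketch below is purely illustrative - the data frame cor_meta and its columns r (correlation) and n (sample size) are hypothetical - with measure="ZCOR" requesting Fisher's r-to-z transformed correlations, one of the common corrections mentioned above:

cor_meta <- escalc(measure="ZCOR", ri=r, ni=n, data=cor_meta)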

Standardized Mean Difference

If you're computing a standardized mean difference, abbreviated SMD in metafor, your study data file needs to include: the mean for each group, the standard deviation for each group, and the sample size for each group. In grad school, I did a meta-analysis on pretrial publicity. Though most studies used guilty/not guilty verdicts as their study outcome, a handful used guilt ratings. Just for fun, I pulled some of those studies out of my study dataset. Here's the code to create a data frame we can analyze in metafor:

smd_meta <- data.frame(
  id = c("005","005","029","031","038","041","041","058","058","067","067"),  # source ID
  study = c(1,2,3,1,1,1,2,1,2,1,2),                                           # study number within source
  author_year = c("Ruva 2007","Ruva 2007","Chrzanowski 2006","Studebaker 2000",
                  "Ruva 2008","Bradshaw 2007","Bradshaw 2007","Wilson 1998",
                  "Wilson 1998","Locatelli 2011","Locatelli 2011"),
  n1 = c(138,140,144,21,54,78,92,31,29,90,181),                     # sample size, treatment group
  n2 = c(138,142,234,21,52,20,18,15,13,29,53),                      # sample size, control group
  m1 = c(5.29,5.05,1.97,5.95,5.07,6.22,5.47,6.13,5.69,4.81,4.83),   # mean guilt rating, treatment
  m2 = c(4.08,3.89,2.45,3.67,3.96,5.75,4.89,3.80,3.61,4.61,4.51),   # mean guilt rating, control
  sd1 = c(1.65,1.50,1.08,1.02,1.65,2.53,2.31,2.51,2.51,1.20,1.19),  # SD, treatment
  sd2 = c(1.67,1.61,1.22,1.20,1.76,2.17,2.59,2.68,2.78,1.39,1.34)   # SD, control
)

ID is a number I assigned to keep track of every source I examined for the meta-analysis, including ones I didn't end up using. Study refers to the study number within the source, since some sources had multiple studies. I used the study number the authors gave, so I could easily refer back to the source if necessary. Not all studies qualified for the meta-analysis, so you'll notice some numbers are skipped.

You may also notice that study ID 067 is mine. The meta-analysis, originally conducted in 2009, was on the same topic as my dissertation, which I completed in 2011, so I reran it for one of my dissertation chapters with my own data added in.

In the data above, the 1 columns refer to the treatment group, who saw pretrial publicity, and the 2 columns to the control group, who did not. This is important to remember when it comes time to interpret the direction of the effect sizes: a positive value means the treatment group gave a higher guilt rating. And while each study may use a different guilt-rating scale, standardizing removes those differences - the mean differences are expressed in standard deviation units.

We now apply the escalc function for standardized mean difference, referencing the smd_meta data frame, so that it appends the effect sizes (a column called "yi") and their sampling variances (a column called "vi") to the data:

smd_meta <- escalc(measure="SMD", m1i=m1, m2i=m2, sd1i=sd1, sd2i=sd2, n1i=n1, n2i=n2,
                   data=smd_meta)
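If you want to see what's happening behind the scenes: for SMD, escalc computes Cohen's d (the mean difference divided by the pooled standard deviation) and applies a small-sample bias correction to give Hedges' g. Here's a rough by-hand check for the first study, using the standard approximate correction factor (metafor uses a slightly more exact one, so the values should agree closely but not perfectly):

# By-hand check of the first effect size (Ruva 2007, study 1)
sd_pooled <- sqrt(((138 - 1)*1.65^2 + (138 - 1)*1.67^2) / (138 + 138 - 2))
d <- (5.29 - 4.08) / sd_pooled            # Cohen's d: about 0.73
g <- d * (1 - 3/(4*(138 + 138 - 2) - 1))  # approximate small-sample correction
g                                         # compare to smd_meta$yi[1]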

Odds Ratio

Next, we can conduct the same kind of effect size calculation for the odds ratio, which is what I used for most of the pretrial publicity studies. As I said, most used verdicts, so each study had a 2x2 table of results like this:

                   Guilty     Not Guilty
Treatment (PTP)      tg        tn - tg
Control              cg        cn - cg
Once again, here's a data frame pulled from my original meta-analysis dataset:

or_meta <- data.frame(
  id = c("001","003","005","005","011","016","025","025","035","039","045","064","064"),  # source ID
  study = c(1,5,1,2,1,1,1,2,1,1,1,1,2),                  # study number within source
  author_year = c("Bruschke 1999","Finkelstein 1995","Ruva 2007","Ruva 2007",
                  "Freedman 1996","Keelen 1979","Davis 1986","Davis 1986",
                  "Padawer-Singer 1974","Eimermann 1971","Jacquin 2001",
                  "Ruva 2006","Ruva 2006"),
  tg = c(58,26,67,90,36,37,17,17,47,15,133,68,53),       # guilty verdicts, treatment group
  cg = c(49,39,22,50,12,33,19,17,33,11,207,29,44),       # guilty verdicts, control group
  tn = c(72,60,138,140,99,120,60,55,60,40,136,87,74),    # sample size, treatment group
  cn = c(62,90,138,142,54,120,52,57,60,44,228,83,73)     # sample size, control group
)

I provided guilty counts by group (tg = treatment guilty verdicts, cg = control guilty verdicts), as well as sample sizes per group (tn and cn), which give the not guilty counts by subtraction. We can now request odds ratios (note: metafor returns log odds ratios) with the escalc function:

or_meta <- escalc(measure="OR", ai=tg, bi=(tn-tg), ci=cg, di=(cn-cg), data=or_meta)
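Each value in yi is the natural log of the odds ratio: the odds of a guilty verdict in the treatment group divided by the odds in the control group. As a quick sanity check, here's the first study computed by hand:

# Study 001 (Bruschke 1999): odds of a guilty verdict in each group
odds_t <- 58 / (72 - 58)     # treatment: guilty / not guilty
odds_c <- 49 / (62 - 49)     # control: guilty / not guilty
log(odds_t / odds_c)         # about 0.09; compare to or_meta$yi[1]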

But I could request other types of binary effect sizes, such as risk ratios (once again, log-transformed automatically) or the risk difference. The metafor documentation (help(escalc)) gives more information on the different measures you can request, how they're interpreted, and some sample datasets using the different measures.
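For instance, swapping out the measure argument is all it takes. The calls below are a sketch (rr_meta and rd_meta are just illustrative names) producing log risk ratios and risk differences from the same 2x2 counts:

rr_meta <- escalc(measure="RR", ai=tg, bi=(tn-tg), ci=cg, di=(cn-cg), data=or_meta)
rd_meta <- escalc(measure="RD", ai=tg, bi=(tn-tg), ci=cg, di=(cn-cg), data=or_meta)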

In part 2, we'll talk about variances, and in part 3, weights, followed by a Statistics Sunday post on using the results from escalc to generate an aggregate effect size - one of the main goals of meta-analysis. Check back later this month for those posts! And tomorrow, look for a post on conducting confirmatory factor analysis in R, once again using the Facebook dataset.
