One place errors can creep in is when extracting information from the studies themselves, which is why having more than one person extract data and calculating some form of interrater reliability is a good idea. Another place is at data entry. You should always double-check those numbers, either with a visual examination of the dataset or, my preference especially for large meta-analyses, through data visualization, as I did yesterday with ggplot. Values that don't show an inverse relationship between sample size and variance should be checked. This was the case with study 045 in the or_meta file. A visual check, and knowledge of the formula for variance, confirmed that this variance was correct; it was large because there were far more people in the control group than in the treatment group.
Once we're satisfied that our numbers are correct, we can move forward with the meta-analysis. Part of that calculation involves study weights. Weights and variance have an inverse relationship; in fact, one value can be used to compute the other:
w_i = 1 / v_i
v_i = 1 / w_i
When you take the inverse of a small number (a small variance), you get a large number (a large weight), and when you take the inverse of a large number, you get a small number; in other words, more precisely estimated studies get more say in the overall result. metafor calculates those weights for you once you conduct the actual meta-analysis, using the rma function. So let's do that. First, I'll load my data frames and calculate effect sizes and variances.
smd_meta <- data.frame(
  id = c("005","005","029","031","038","041","041","058","058","067","067"),
  study = c(1,2,3,1,1,1,2,1,2,1,2),
  author_year = c("Ruva 2007","Ruva 2007","Chrzanowski 2006","Studebaker 2000",
                  "Ruva 2008","Bradshaw 2007","Bradshaw 2007","Wilson 1998",
                  "Wilson 1998","Locatelli 2011","Locatelli 2011"),
  n1 = c(138,140,144,21,54,78,92,31,29,90,181),
  n2 = c(138,142,234,21,52,20,18,15,13,29,53),
  m1 = c(5.29,5.05,1.97,5.95,5.07,6.22,5.47,6.13,5.69,4.81,4.83),
  m2 = c(4.08,3.89,2.45,3.67,3.96,5.75,4.89,3.80,3.61,4.61,4.51),
  sd1 = c(1.65,1.50,1.08,1.02,1.65,2.53,2.31,2.51,2.51,1.20,1.19),
  sd2 = c(1.67,1.61,1.22,1.20,1.76,2.17,2.59,2.68,2.78,1.39,1.34)
)

or_meta <- data.frame(
  id = c("001","003","005","005","011","016","025","025","035","039","045","064","064"),
  study = c(1,5,1,2,1,1,1,2,1,1,1,1,2),
  author_year = c("Bruschke 1999","Finkelstein 1995","Ruva 2007","Ruva 2007",
                  "Freedman 1996","Keelen 1979","Davis 1986","Davis 1986",
                  "Padawer-Singer 1974","Eimermann 1971","Jacquin 2001",
                  "Ruva 2006","Ruva 2006"),
  tg = c(58,26,67,90,36,37,17,17,47,15,133,68,53),
  cg = c(49,39,22,50,12,33,19,17,33,11,207,29,44),
  tn = c(72,60,138,140,99,120,60,55,60,40,136,87,74),
  cn = c(62,90,138,142,54,120,52,57,60,44,228,83,73)
)

library(metafor)
smd_meta <- escalc(measure="SMD", m1i=m1, m2i=m2, sd1i=sd1, sd2i=sd2,
                   n1i=n1, n2i=n2, data=smd_meta)
or_meta <- escalc(measure="OR", ai=tg, bi=(tn-tg), ci=cg, di=(cn-cg),
                  data=or_meta)
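As an aside, now that escalc has added the yi and vi columns, here's roughly what the visual check I mentioned at the top might look like for the odds ratio data. This is just a sketch (it assumes you have ggplot2 installed); the idea is that variance should generally shrink as total sample size grows, so points that break that pattern, like study 045, deserve a second look.

library(ggplot2)

# Total sample size per study (treatment + control)
or_meta$total_n <- or_meta$tn + or_meta$cn

# Variance should generally fall as sample size rises;
# labeling points makes unusual cases like study 045 easy to spot
ggplot(or_meta, aes(x = total_n, y = vi, label = id)) +
  geom_point() +
  geom_text(vjust = -0.5, size = 3)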
These two datasets use different effect size metrics, so I can't combine them unless I convert the effect sizes in one dataset to the other metric. For now, we'll meta-analyze them separately. First up is the dataset using the standardized mean difference (Cohen's d).
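(If I did want to pool everything on one metric, one common option is to approximate a log odds ratio from each standardized mean difference by multiplying d by pi/sqrt(3), about 1.81, and scaling the variance by that factor squared. A quick sketch, purely for illustration and not used below:)

# Illustration only: put the SMD effect sizes (yi, vi) on the log odds ratio metric
smd_as_lor <- smd_meta
smd_as_lor$yi <- smd_meta$yi * pi / sqrt(3)
smd_as_lor$vi <- smd_meta$vi * pi^2 / 3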
There are two overall approaches to meta-analysis: fixed effects and random effects. A fixed-effects model assumes there is a single underlying effect size that all of your studies are estimating. You can request this model by setting method="FE" (for fixed effects) in the rma function.
smd.rma <- rma(yi, vi, method="FE", data=smd_meta)
summary(smd.rma)
##
## Fixed-Effects Model (k = 11)
##
##   logLik  deviance       AIC       BIC      AICc
## -41.4601   97.3274   84.9202   85.3181   85.3647
##
## Test for Heterogeneity:
## Q(df = 10) = 97.3274, p-val < .0001
##
## Model Results:
##
## estimate      se    zval    pval   ci.lb   ci.ub
##   0.3492  0.0526  6.6388  <.0001  0.2461  0.4523  ***
##
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
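Before digging into the output, a quick check on the weights we talked about earlier. The model computed them behind the scenes as the inverse of each study's variance; if I recall correctly, metafor's weights() function reports them as percentages for rma objects, so this sketch rescales the manual version the same way for comparison:

# Manual inverse-variance weights, rescaled to percentages
w <- 1 / smd_meta$vi
round(100 * w / sum(w), 2)

# Should match what metafor stored in the fitted model
round(weights(smd.rma), 2)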
I'll go into the various parts of this output on Sunday. For now, we'll look at the Model Results, which includes the estimate (the overall effect size).
smd.rma$beta
##              [,1]
## intrcpt 0.3491923
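That estimate is nothing mysterious: in a fixed-effects model it's just the inverse-variance-weighted average of the individual effect sizes. A quick sanity check using the yi and vi columns escalc created:

# Weighted average of the effect sizes, with weights = 1/vi;
# this should reproduce the 0.3491923 shown above
with(smd_meta, sum(yi / vi) / sum(1 / vi))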
The overall Cohen's d for this set of studies is 0.349, a small-to-moderate effect by Cohen's benchmarks. In plain English, the average guilt rating among people who saw pretrial publicity is about 0.349 standard deviations higher than among people in the control group. There's an associated Z-value and p-value, which test whether that estimate is significantly different from 0.
options(scipen=999)
smd.rma$zval
## [1] 6.638784
smd.rma$pval
## [1] 0.00000000003162821
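If you're wondering where those two numbers come from: the Z-value is just the estimate divided by its standard error, and the p-value is the two-tailed probability from the standard normal distribution. A quick sketch using fields stored on the model object:

# z = estimate / standard error
as.numeric(smd.rma$beta / smd.rma$se)

# two-tailed p-value for that z
2 * pnorm(abs(smd.rma$zval), lower.tail = FALSE)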
The Z-test confirms that this value is significantly different from 0. I'll go into conducting a random effects model (and how to tell if you should) on Sunday. Now, let's run the meta-analysis for the log odds ratio dataset.
or.rma <- rma(yi, vi, method="FE", data=or_meta)
summary(or.rma)
##
## Fixed-Effects Model (k = 13)
##
##   logLik  deviance       AIC       BIC      AICc
## -23.0842   47.8107   48.1684   48.7334   48.5321
##
## Test for Heterogeneity:
## Q(df = 12) = 47.8107, p-val < .0001
##
## Model Results:
##
## estimate      se    zval    pval   ci.lb   ci.ub
##   0.7456  0.0987  7.5566  <.0001  0.5522  0.9389  ***
##
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
or.rma$beta
##              [,1]
## intrcpt 0.7455659
or.rma$zval
## [1] 7.556647
or.rma$pval
## [1] 0.00000000000004135935
Our estimated log odds ratio for this dataset is 0.7456, which is significantly different from 0. That number is difficult to interpret on its own, so we'll convert it back to an odds ratio, which is much easier to interpret.
exp(or.rma$beta)
##             [,1]
## intrcpt 2.107634
Overall, the odds of voting to convict were a little over twice as high among people who saw pretrial publicity as among people in the control group.
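The same back-transformation applies to the confidence interval limits, which helps when reporting the result:

# Exponentiate the CI bounds from the log odds ratio model
# (roughly 1.74 to 2.56, given the estimates above)
exp(c(or.rma$ci.lb, or.rma$ci.ub))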
metafor will also let you supply user-defined weights, using the weights argument in the rma function. If you're applying a transformation to your effect sizes - such as converting Pearson's r coefficients to Fisher's Z - you'll probably want to supply the weights appropriate for that metric, because the weight formula is different (for Fisher's Z, for example, the usual weight is n - 3). If you leave that argument out, as I did, rma weights each study by the inverse of its variance by default.
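Here's a hypothetical sketch of what that looks like; the data frame and its values are made up purely for illustration. With measure "ZCOR" the n - 3 weights happen to match the inverse-variance default anyway, but the weights argument is there whenever you need something else:

# Hypothetical example: correlations converted to Fisher's Z,
# then weighted explicitly by n - 3 (the inverse of the Fisher's Z variance)
cor_meta <- data.frame(ri = c(.25, .40, .32), ni = c(60, 120, 85))
cor_meta <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = cor_meta)
cor.rma <- rma(yi, vi, weights = ni - 3, method = "FE", data = cor_meta)
summary(cor.rma)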
I should note that I picked only a handful of studies for this example, so these results don't match the ones from my actual meta-analysis, which included many more studies. (If you're conducting research on pretrial publicity and would like the actual results of my meta-analysis, please contact me; you can find contact info on my about page. I'm happy to share my real results - these numbers are for demonstration purposes only. I still hope to publish this work someday, which is why I'm not using the full dataset or actual results here, but at this point I need to collect data on the new pretrial publicity studies that have been conducted since 2011.)
More on meta-analysis Sunday! And tune in tomorrow for the letter X!