I've blogged before about meta-analysis (some examples here, here, and here), but haven't really gone into detail about what exactly it is. It straddles the line between method and analysis: meta-analysis is a set of procedures and analyses that let you take multiple studies on the same topic and statistically aggregate their results.
Meta-analysis draws on many of the concepts I've covered so far this month. Aggregating across studies increases your sample size, maximizing power and providing a better estimate of the true effect (or set of effects). It's a time-intensive process, but it's incredibly rewarding, and the results are valuable for understanding (and coming to a consensus on) an area of research and for guiding future research on the topic.
First, you gather every study you can find on a topic, including studies you ultimately might not include. And when I say every study, I mean every study: not just journal articles but conference presentations, doctoral dissertations, unpublished studies, and so on. Some of it you can find in article databases, but some you have to find by reaching out to people who are knowledgeable about an area or who have published research on that topic. You'd be surprised how many of them have another study on a topic they've been unable to publish (what we call the "file drawer problem" and, relatedly, "publication bias"). The search, and then the weeding through, is a pretty intensive process. It helps to have a really clear idea of what you're looking for, and which aspects of a study might result in it being dropped from the meta-analysis.
Next, you would "code" the studies on different characteristics you think might be important. That is, even if you have very narrow criteria for including a study in your meta-analysis, there are going to be differences in how the studies were conducted. Maybe the intervention used was slightly different across studies. Maybe the samples were drawn from college freshmen in some studies and community-dwelling adults in others. You decide which of these characteristics are important to examine, then create a coding scheme to pull that information from the articles. To make sure your coding scheme is clear, you'd want to have another person code independently with the same scheme and see if you get the same results. (Yes, this is one of the times I used Cohen's kappa in my research.)
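To make the inter-rater check concrete, here's a minimal sketch of Cohen's kappa in Python. The study codes (a made-up "design type" variable) and the two coders' labels are hypothetical, purely for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders on the same items."""
    n = len(coder_a)
    # Observed agreement: proportion of items the coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's marginal proportions.
    count_a, count_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical design-type codes assigned to six studies by two coders
coder_1 = ["RCT", "RCT", "quasi", "RCT", "quasi", "RCT"]
coder_2 = ["RCT", "quasi", "quasi", "RCT", "quasi", "RCT"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # 0.667
```

Unlike raw percent agreement (5 of 6 here), kappa discounts the agreement you'd expect if both coders were guessing from their own label frequencies, which is why it's the usual reliability check for coding schemes.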
You would then use the results of each study (the means and standard deviations, statistical analyses, etc.) to generate an effect size (or effect sizes) for that study. I'll talk more about this later, but basically an effect size lets you take the results of a study and convert them to a standard metric. Even if the different studies you included in the meta-analysis examined the data in different ways, you can find a common metric so you can compare across studies. At this point, you might average these effect sizes together (using a weighted average, so studies with more people have more impact on the average than studies with fewer people), or you might use some of the characteristics you coded to see if they have any impact on the effect size.
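As a sketch of that step, the code below computes Cohen's d (one common standardized effect size) from each study's means and standard deviations, then pools the studies with inverse-variance weights, one standard way of implementing "bigger studies count more." The three studies' summary numbers are invented for illustration:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups (e.g., treatment vs. control)."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

def d_variance(d, n1, n2):
    """Approximate sampling variance of d; shrinks as sample sizes grow."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Hypothetical summary data: (mean1, sd1, n1, mean2, sd2, n2) per study
studies = [(105, 15, 40, 100, 15, 40),
           (108, 14, 120, 100, 16, 120),
           (103, 15, 25, 101, 15, 25)]

ds = [cohens_d(*s) for s in studies]
weights = [1 / d_variance(d, s[2], s[5]) for d, s in zip(ds, studies)]
pooled = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
print(round(pooled, 3))
```

Weighting by the inverse of each effect's variance gives the most precise (largest) studies the most influence on the pooled estimate, which is the same idea as the sample-size weighting described above.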
This is just an overview, of course. I could probably teach a full semester course on meta-analysis. (In fact, that's something I would love to do, since meta-analysis is one of my areas of expertise.) Meta-analyses are a lot of work, but also lots of fun: you get to read and code studies (don't ask me why, but this is something I really enjoy doing), and you end up with tons of data to analyze (ditto). If you're interested in learning more about meta-analysis, I recommend starting with this incredible book:
It's a really straightforward, step-by-step approach to conducting a meta-analysis (giving attention to the statistical aspect but mostly focusing on the methods). For a more thorough introduction to the different statistical analyses you can conduct for meta-analysis, I highly recommend the work of Michael Borenstein.