Jerzy Neyman, photographed while he was at UC Berkeley. By Konrad Jacobs, Erlangen. Copyright: MFO - Mathematisches Forschungsinstitut Oberwolfach, http://owpdb.mfo.de/detail?photo_id=3044, CC BY-SA 2.0 de
Why? Because the standard deviation tells us the typical spread of individual scores (the units that make up the mean), but it doesn't tell us the typical (expected) spread of means from sample to sample. Confidence intervals let you express that, not just for means but for a variety of aggregate statistics.
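To make that distinction concrete, here's a quick simulation sketch in Python (a made-up normal population, nothing from Neyman's paper, and assuming numpy is available): the standard deviation of individual scores stays around 15, while the standard deviation of means from samples of size 25 comes out near 15/√25 = 3.

```python
# Contrast the spread of individual scores with the spread of sample means,
# using a purely hypothetical population of test scores.
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=100, scale=15, size=100_000)  # made-up scores

n = 25
sample_means = [rng.choice(population, size=n).mean() for _ in range(5_000)]

print("SD of individual scores:", population.std())       # roughly 15
print("SD of sample means:     ", np.std(sample_means))   # roughly 15 / sqrt(25) = 3
```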
But I'm perhaps getting ahead of myself. I dug into Neyman's paper to try to summarize it for you. In the past, I'd only read Neyman's work summarized by others. In Fisher, Neyman, and the Creation of Classical Statistics, I read some of his early correspondence with E. Pearson, from when Neyman was still learning English. So I wasn't completely sure what to expect from his 1937 confidence interval article. I highly recommend reading it: Neyman summarizes complicated mathematical concepts in plain language, his work is highly approachable, and he uses lots of examples to drive home his points.
Neyman argued that, unless we have access to the full population we are studying and can measure every individual in it, there will be probability involved in our work; both the estimation process and the estimate itself should be expressed in terms of probability. In fact, his classical approach to statistics builds probability into the estimation process through the use of significance testing. He acknowledges that there are many different approaches to estimating values, and that while some are better than others, none are likely to give you the exact population value, θ. They will all be estimates within a certain margin of error. Confidence intervals communicate that margin of error.
That is, he essentially says there is disagreement about the process of estimating population values from samples, and about the use of different estimation techniques (such as maximum likelihood, developed by R.A. Fisher). Though some approaches may be superior - and some of Neyman's footnotes feel very much directed at Fisher - we are still trying to estimate an unknown parameter, so there is really no way to prove which approach has come closest. But we can perhaps identify an interval surrounding the true population value.
That interval - the confidence interval - will be based on a probability - the confidence coefficient - which is greater than 0 and less than 1. The usual convention is 0.95 (95%), though he uses a variety of confidence coefficients in his paper and doesn't really settle on one as the gold standard. The 95% convention came later, probably because of its connection to the probability we use in significance testing, where the convention for alpha is 0.05.
Without knowing the precise shape of the distribution of population values, we would instead use distributions with "intuitive" (his word) appeal, such as the normal distribution (either the standard normal or the t-distribution). He offers a variety of equations for different scenarios, but this one - the one that works with the normal distribution, which comes at the end of the paper - is probably the approach most statistics students are familiar with. That is, we can use the values associated with different proportions of the curve around the mean to generate our confidence interval. We take these values from the z or t distribution (Neyman recommends t) and use them as the multiplier for the standard error of the estimate - for a mean, the standard deviation divided by the square root of the sample size. The exact procedure varies depending on what type of estimate you're working with, but the basic recipe - choose a confidence coefficient, take the corresponding value from a known distribution, and combine it with the results of your analysis - stays the same.
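As an illustration of that recipe - a minimal sketch with hypothetical data, not Neyman's own notation, and assuming numpy and scipy are available - here is the familiar t-based interval for a mean:

```python
# 95% t-based confidence interval for a mean: estimate +/- t * standard error.
import numpy as np
from scipy import stats

sample = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7])  # hypothetical scores
confidence = 0.95

mean = sample.mean()
sem = stats.sem(sample)                       # standard error: sd / sqrt(n)
t_crit = stats.t.ppf((1 + confidence) / 2,    # critical t value, two-sided
                     df=len(sample) - 1)

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"{confidence:.0%} CI: ({lower:.2f}, {upper:.2f})")
```

Swap in the z value from the standard normal distribution and you get the large-sample version of the same interval.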
As the size of the sample used to estimate the population value increases, the bias (the difference between the estimate and the actual value) shrinks toward 0. So estimates based on larger samples are more likely to be close to the true population value, and confidence intervals generated from those estimates will be narrower while still being likely to contain the actual value.
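A small sketch of that narrowing (again with made-up data and scipy's t distribution): the half-width of a 95% interval for a mean shrinks roughly in proportion to 1/√n.

```python
# How the half-width of a 95% t-interval for a mean shrinks as n grows,
# using a hypothetical data-generating process.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (10, 100, 1_000, 10_000):
    sample = rng.normal(loc=50, scale=10, size=n)   # made-up measurements
    half_width = stats.t.ppf(0.975, df=n - 1) * stats.sem(sample)
    print(f"n = {n:>6}: 95% CI half-width ~ {half_width:.3f}")
```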
How likely? We don't actually know. As Neyman points out repeatedly in his paper, the probability that a given range actually contains the population value is either 0 or 1: the value is either in the interval or it isn't. But when we generate a confidence interval around our estimate, we don't know which of those is true. So we draw on the long-run logic of the law of large numbers: over repeated estimates of the population value, our ranges will contain the real value a certain proportion of the time, and that proportion equals the confidence coefficient we choose.
Say we always use 95% confidence intervals in a certain area of study. (That's the convention, anyway.) With repeated research (conducted in an unbiased way), our ranges will contain the real population value much of the time, with the actual percentage approaching 95% as the number of studies approaches infinity. As has been shown repeatedly, chance is lumpy, and a 95% chance of something doesn't mean it will happen exactly 95% of the time, just as you won't get a perfect 50% heads in your coin flips.
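Here's a simulation sketch of that long-run interpretation (a hypothetical setup, not anything from the paper): build a 95% interval from each of many simulated studies and count how often the true mean lands inside.

```python
# Long-run coverage of 95% t-intervals: simulate many studies from a known
# (made-up) population and see what fraction of intervals capture the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1937)
true_mean, true_sd, n, studies = 100, 15, 30, 10_000

hits = 0
for _ in range(studies):
    sample = rng.normal(true_mean, true_sd, size=n)
    margin = stats.t.ppf(0.975, df=n - 1) * stats.sem(sample)
    if sample.mean() - margin <= true_mean <= sample.mean() + margin:
        hits += 1

print(f"Coverage over {studies} simulated studies: {hits / studies:.3f}")  # near 0.95
```

The observed coverage hovers near 0.95 but won't hit it exactly - chance is lumpy, as noted above.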
A glance at Neyman's reference section shows many of the greats of statistics: Fisher, Hotelling, Kolmogorov, Lévy, Markoff... Many names you'll hear again and again in these GMIS posts.