This election cycle, I followed election forecasts
pretty closely. Since everything I was reading had Clinton defeating Trump, I, like many others, thought the election was already decided. And I don't have to tell you that I, like many others, was completely wrong.
Yesterday, a fellow psychologist I went to college with asked for some thoughts on what happened with the polls, and why they were so wrong. I offered a few thoughts, but did some more thinking about it. So what I'm writing today is a mixture of what I shared then, and what I've added since.
First of all, the election forecast I followed most closely was FiveThirtyEight's. And in his posts, Nate Silver pointed out that the difference in the proportions supporting Clinton versus Trump was approximately equal to typical polling error (how much difference we usually see between polling data and the actual results).

To explain: when we poll people, we randomly select a sample, and we often also stratify it to make sure the sample represents the population on key demographic characteristics (things like race, ethnicity, or age group). (For polls where representing the population is very important, such as this one, that definitely occurred.) We do this so that, if the people we sample are similar to the population on these characteristics, they will also (hopefully) be similar to the population on the thing we are actually measuring. But we have no way of knowing whether that's true, and it is always possible that we'll end up with a biased sample just by random chance. Just as you could flip a coin and get 5 heads in a row - it doesn't represent the most probable distribution of coin flips, but it can happen by chance - you could have all the right controls in place and still end up with a sample that doesn't represent the population. Sadly, that's how probability works.
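To get a feel for how big chance error alone can be, here's a minimal simulation. All the numbers are made up for illustration: a hypothetical 50.5% split, polls of 1,000 respondents, and 20 repeated polls.

```python
import random

random.seed(42)

# Hypothetical two-candidate race: 50.5% of the population supports candidate A.
TRUE_SUPPORT = 0.505
SAMPLE_SIZE = 1000   # a typical national poll size
N_POLLS = 20

estimates = []
for _ in range(N_POLLS):
    # Each "poll" randomly samples 1,000 voters from the population.
    sample = [random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE)]
    estimates.append(sum(sample) / SAMPLE_SIZE)

# The standard error of a proportion is sqrt(p*(1-p)/n), about 1.6 points here,
# so individual polls routinely land 2-3 points away from the truth, which is
# enough to flip the apparent leader in a race this close.
errors = [abs(e - TRUE_SUPPORT) for e in estimates]
print(f"poll estimates range: {min(estimates):.3f} to {max(estimates):.3f}")
print(f"largest error: {max(errors):.3f}")
```

Even with perfectly random sampling and no other problems, some of these simulated polls show the trailing candidate ahead.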
Another possibility has to do with the tenuous connection between behaviors and behavioral intentions. A behavioral intention is what you plan to do, and the behavior is what you actually do. Though there's a correlation between the two, it isn't one-to-one. Anyone who has intended to go to the gym one day, only to end up binge-watching Game of Thrones
instead, knows all too well about the imperfect connection between intentions and behaviors. You may have even convinced yourself that you'll be going to the gym, right up to the point that you realize it's too late for that. The problem is, when we do opinion polling, we are measuring intention, rather than behavior. We're asking people how they will
vote. They may intend to vote one way, but they can (and do) change their minds right up to the moment they actually carry out the behavior. It's possible that for a proportion of respondents there was a mismatch between the intention (vote for Clinton) and the actual behavior (vote for Trump). This usually happens when the intention wasn't that strong to begin with, something that forced-choice polling doesn't fully capture.
So to recap, the problem could be sampling error, or people changed their minds. These are issues we have to deal with all the time in research. The next two possibilities I have are also likely, and occur when the poll itself
influences how people respond.
First of all, people who respond to surveys and polls tend to be different from people who do not. This is why we include things like incentives and reminders, to try to maximize the chance that people who don't usually respond to these sorts of things will decide to respond to this particular one. When the people who respond differ systematically from the people who don't, we call that selection bias. Conservatives, and Trump himself, have both offered many criticisms of the "liberal media."
Some of the big polls were performed by or in collaboration with large media groups, and people may have perceived other polls as being performed by "the media." If a group has already been programmed to distrust the media, its members will probably be much less likely to respond when a media organization invites them to participate in a poll. The people who did self-select to participate may have been those who didn't perceive the polling organization as part of the "liberal media," or who trusted those organizations. So the nature of the poll itself could have influenced who responded.
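Here's a toy illustration of how differential nonresponse skews an estimate, using made-up numbers: suppose candidate B actually leads 52-48, but B's supporters distrust the pollster and respond at half the rate of A's supporters.

```python
# Illustrative (hypothetical) numbers, not real polling data.
true_support = {"A": 0.48, "B": 0.52}    # B actually leads the electorate
response_rate = {"A": 0.10, "B": 0.05}   # B's supporters respond half as often

# Expected composition of the respondent pool: support share times
# response rate, renormalized over everyone who actually responds.
raw = {c: true_support[c] * response_rate[c] for c in true_support}
total = sum(raw.values())
observed = {c: raw[c] / total for c in raw}

print(observed)
# A now appears to lead roughly 65-35, even though B leads the electorate.
```

The poll isn't measuring the electorate anymore; it's measuring the subset willing to pick up the phone, and no amount of sample size fixes that.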
Finally, we have the possibility that people did not respond to the poll the way they actually feel. This usually happens because of social desirability: people want to be liked, and they want to answer a question the way they think the interviewer wants it answered. Maybe the respondents who really did want to vote for Trump were embarrassed and didn't want to admit that to the interviewer, especially if they thought the interviewer was likely to be liberal (see just above).
People often hold public attitudes that differ very much from their private attitudes. This occurs when they think their private attitude is undesirable and differs from the majority (a concept known as pluralistic ignorance), so they insist they believe the opposite in order to feel a sense of belonging in the group. In fact, they may become very vocal about their public attitude, overcompensating to keep others from figuring out that their private attitude is completely different.
What this means is that people may have publicly said they were voting for Clinton while privately knowing they would vote for Trump. And this effect may have been stronger in this particular election. Every election cycle, there will be people who strongly criticize one of the candidates. But it felt like many more people and organizations were joining in than usual. I remember at one point looking at all the negative attention Trump was getting, and worrying that it could actually help him in some way, making him seem more like the underdog (and everyone loves an underdog). But it didn't occur to me that the more likely effect was that the negative attention might cause people to mask their true feelings about him, leading to inaccurate polls.
There are ways to minimize socially desirable responding, through question wording and/or by reminding participants that their answers will be kept confidential. But socially desirable responding is always possible, despite these controls.
Obviously, the items on this list are not mutually exclusive. It could have been a combination of any or all of the above.