Thursday, January 19, 2017

A Post-Mortem on 2016 Election Coverage

Today, Nate Silver of FiveThirtyEight published the first in what will be a series of articles about the 2016 election. Data scientists, stats junkies, psychologists, and haters of bad journalism, rejoice: these pieces will offer a lot of analysis of the polling data and how it was (mis)used, of cognitive biases, and of journalistic errors:
At this point, I don’t expect to convince anyone about the rightness or wrongness of FiveThirtyEight’s general election forecast. To some of you, a forecast that showed Trump with about a 30 percent chance of winning when the consensus view was that his chances were around 15 percent will self-evidently seem smart. To others, it will seem foolish. But for better or worse, what we’re saying here isn’t just hindsight bias. If you go back and check our coverage, you’ll see that most of these points are things that FiveThirtyEight (and sometimes also other data-friendly news sites) raised throughout the campaign.

With that in mind, here’s ground rule No. 1: These articles will focus on the general election.

Ground rule No. 2: These articles will mostly critique how conventional horse-race journalism assessed the election, although with several exceptions. The focus on conventional journalism in this article is not meant to imply that data journalists got everything right, however. There’s obviously a lot to criticize in how certain statistical models were designed, for instance.

Interestingly enough, the analytical errors made by reporters covering the campaign often mirrored those made by the modelers. I’d also argue that data journalists are increasingly making some of the same non-analytical errors as traditional journalists, such as using social media in a way that tends to suppress reasonable dissenting opinion.
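One point worth dwelling on from the excerpt above: a 30 percent chance is not a small chance. A quick simulation (my own sketch, not something from Silver's article) makes this concrete, by counting how often an "underdog" with a 30 percent win probability actually wins across many repeated trials:

```python
import random

random.seed(42)

# Simulate 10,000 hypothetical elections in which the underdog
# is given a 30% chance of winning, as in the forecast quoted above.
trials = 10_000
upsets = sum(random.random() < 0.30 for _ in range(trials))

print(f"Upset rate: {upsets / trials:.2%}")  # close to 30%, i.e., far from rare
```

Roughly three of every ten such "unlikely" outcomes occur. The forecast saying Trump had about a 30 percent chance was not saying he would lose; treating it that way was exactly the kind of misreading these articles examine.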

I'll admit, I made some of the same mistakes he alludes to in his articles, and I was following his model. Sometimes it's difficult to separate the data from our opinions, especially opinions we really want to be right; this tension is one reason different philosophies of science exist. In a perfect world, researchers would study issues they have no strong opinions about, to guard against bias, but in reality that's really difficult. People don't study things they care nothing about: research is hard work, and if you're not passionate about the issue, it's far too easy to throw up your hands when things get difficult. In fact, even when studying something you truly love, you'll get fatigued and will probably end up hating it by the end - this is definitely true of thesis and dissertation topics.

In 2017, I resolve to be a better data scientist, and I plan to keep building my skills in this vein. Stay tuned!
