"Bring us their heads!" cried the mob as soon as the outcome of the general election was finally known. "Their heads!"
Whose heads? Are people baying for the blood of Hillary Clinton's campaign managers? Her consultants? The pundits, maybe? Wait, the media, right? Nope, they all get a free pass. But let's excoriate the pollsters, shall we? They're the ones who got it wrong. Right?
It's a brilliant job doing what I do, often high-profile, always interesting, but it can be a thankless one sometimes. Pollsters everywhere are in the firing line in the wake of last Tuesday, and I wanted to step up to provide a defense of my business.
The truth is, all polls are inaccurate to some degree. If, for example, you randomly select 1,000 people from a population, you'll get results with a maximum margin of error of plus or minus 3 percent at 95 percent confidence.
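For the numerically inclined, that plus-or-minus 3 percent isn't pulled out of a hat; it falls out of the standard margin-of-error formula. Here's a quick sketch in Python, assuming the worst case of a 50/50 split (which is where the margin is largest):

```python
import math

n = 1000   # sample size
p = 0.5    # worst-case proportion (a 50/50 split maximizes the error)
z = 1.96   # z-score for 95 percent confidence

# Margin of error for a simple random sample of n respondents.
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: plus or minus {100 * moe:.1f} percent")
```

Run it and you get about plus or minus 3.1 percent, which is where the "3 percent on 1,000 respondents" rule of thumb comes from.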
Here's an exercise for you to try at home to illustrate exactly what that means. Get a coin and flip it 1,000 times. No, seriously. What percent heads did you get? I just did it and got 47.9 percent. Well, I didn't actually flip a coin; I made an Excel spreadsheet do essentially the same thing and it came up heads 479 times. Did it again, 523. Again, 504. Nineteen times out of 20 (95 percent) you'll get between 47 percent and 53 percent heads. It's error that's no one's fault. It just happens, randomly. The reality is, if you get it bang on 50 percent, you got lucky.
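If you don't have a coin (or the patience), the spreadsheet trick above is easy to reproduce in a few lines of Python. This is a sketch of the same experiment, not the exact spreadsheet I used: flip a simulated fair coin 1,000 times, record the percent heads, and repeat the whole thing 20 times. You should see roughly 19 of the 20 runs land between 47 percent and 53 percent.

```python
import random

random.seed(1)  # fixed seed so the runs are reproducible

def flip_heads_pct(n=1000):
    """Flip a fair coin n times and return the percent that came up heads."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return 100 * heads / n

# Repeat the 1,000-flip experiment 20 times.
results = [flip_heads_pct() for _ in range(20)]
within = sum(47 <= r <= 53 for r in results)
print(f"{within} of 20 runs fell in the 47-53 percent band")
```

Every run is a little off 50 percent, through nobody's fault at all. That's random sampling error in miniature.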
So the first thing to acknowledge is that the polls weren't all that wrong. The final Real Clear Politics average for the Trump-Clinton race was 3.2 points to Clinton. As I write this, the popular vote stands at about a half-point lead for Clinton, a figure that will probably rise to nearer one point as the last votes are counted. So as things stand, the average of the polls was 2.7 points off.
But I'm pretty sure random error didn't cause this discrepancy. Aggregation of lots of polls (that websites like RCP do) tends to cancel out all the random errors anyway, leaving more regular patterns to show themselves. For example, in the final few days of the election, CBS News had the race at plus 4 to Clinton, Fox News had plus 4, ABC/Washington Post had plus 3, The Economist had plus 4 and Bloomberg had plus 3. Were they all miraculously suffering the same type of error? Well, as it happens, they were but it was of a more systematic variety.
Imagine that in reality you've got exactly equal numbers of Trump supporters and Clinton supporters in the population. Now let's suppose the enthusiasm of the Trumpets is a little higher than that of the Clintonistas, let's say 95 percent enthusiasm versus 90 percent, not hard to imagine if you checked out any of The Donald's rallies. If that "enthusiasm gap" translates into an equivalent difference in voter turnout, all of a sudden you haven't got a tie anymore; you've got, coincidentally enough, a 2.7 percent Trump lead.
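The arithmetic behind that 2.7 percent is worth seeing on paper. Using the hypothetical numbers above (a 50/50 electorate, 95 percent Trump turnout, 90 percent Clinton turnout):

```python
# Hypothetical electorate: exactly equal support for each candidate.
trump_support, clinton_support = 50.0, 50.0

# Apply the assumed turnout rates from the enthusiasm gap.
trump_votes = trump_support * 0.95      # 47.5
clinton_votes = clinton_support * 0.90  # 45.0

# Shares of the actual vote cast.
total = trump_votes + clinton_votes
trump_share = 100 * trump_votes / total
clinton_share = 100 * clinton_votes / total

margin = trump_share - clinton_share
print(f"Trump lead: {margin:.1f} points")  # prints "Trump lead: 2.7 points"
```

A tie in support becomes a 51.4-48.6 split in votes cast, and no likely-voter screen ever saw it coming.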
Are you going to be able to detect this difference in enthusiasm in your likely voter screening questions? Do you think a 90 percent Clinton enthusiast will give any less indication of their voting likelihood than a 95 percent Trump supporter? Both of them are highly enthusiastic. Both of them are going to say they'll vote. Even using more creative ways of assessing voter intent, like asking people where their polling place is, is well-informed guesswork at best.
And this 2.7 percent shift in the vote, caused by our unmeasurable difference in turnout, would have been enough of a shift to give Trump Florida, Michigan, Pennsylvania and Wisconsin when he wouldn't have had them otherwise, along with their 75 electoral votes, completely turning the result of the election on its head. Which is what happened. The Electoral College can be a mighty sensitive device.
Before Tuesday, I wondered if there was anything to the "Clinton enthusiasm gap" that would lead to her turnout being depressed, or whether Trump's voters, drawn as they are from a less traditional, possibly less reliable base, would be the ones less likely to get up off the couch and get on down to the polling station to vote. As it happened, it was Trump's wave of people who were more fired up.
But short of having a crystal ball, there's no way to quantify this meaningfully and accurately — to predict human behavior that hasn't happened yet. That's the reality. The most difficult bit of human behavior pollsters have to try to foresee isn't so much who people are going to vote for; that's the easy part. It's whether they're going to vote at all.
And the possibilities for error don't end there. If only random error and predicting voter turnout were all we had to deal with. We also have to figure out ways to draw good random samples, we have to deal with non-response bias because people are less likely to want to do surveys than they used to be, and of course we have to ask good questions and ask them well to get good data.
So, please, do us a favor and give pollsters a break. Yes, we seemingly always call right as you're sitting down for dinner; for a few years now we've been calling you on your cellphones too, and sometimes we don't come up with results that you agree with. But politics isn't all we do. We serve a pretty important function facilitating communication at the intersection of private industry, government, nonprofits, advocacy groups and the public. Finding out what people want, what they think and what they do. And, believe it or not, we all do our best to do good, accurate work.
Just don't ask us to predict the future, OK?
Ivan Moore is a longtime Alaska pollster and the owner of Alaska Survey Research in Anchorage.
The views expressed here are the writer's and are not necessarily endorsed by Alaska Dispatch News, which welcomes a broad range of viewpoints. To submit a piece for consideration, email to commentary@alaskadispatch.com. Send submissions shorter than 200 words to letters@alaskadispatch.com.