Survey Says ...
October 2016. No matter where you stood on the political spectrum or who you were planning on voting for, things were, well, interesting. If you were a Donald Trump supporter, the news was that your candidate didn’t have a chance against a politically experienced opponent. If, on the other hand, you were a Hillary Clinton supporter, while your candidate was expected to win, the race was a little too close for comfort.
In the end, though, on November 8th, things went topsy-turvy. The outsider with no political experience whatsoever triumphed over one of the most politically savvy people ever to run. For some it was a day of great rejoicing; for others, a day of sadness and despair. Both feelings, however, came from the same place: an unexpected outcome.
The question is, why were we taken by such surprise? Why did we not see this coming?
The easy answer can be summed up in one word: polling. The explanation for that answer, though, is a bit more complicated. To start with, by definition, a “poll” is based on opinion only and is not a scientific study, no matter how you dress it up. Conducting a survey to ask people about the toppings they prefer on a pizza won’t tell you any more (or less) about what any individual will order the next time they step up to the counter than not having the survey at all. At best, it might help you make sure you have enough pepperoni, mushroom, or pineapple on hand, but that’s about it.
As Martha Gill pointed out in a recent Guardian article, “using polls to work out which party is ahead in a neck-and-neck election race is like using Google Earth to measure your food portions, or Boris Johnson to run your Foreign Office: they’re simply the wrong tool for the job.” This is because, at a minimum, there’s a margin of error which, even when it’s as little as 2%, is huge when it comes to actually using the poll to make a prediction. She goes on to say that polls might be good at registering a national mood about something, but even that can be problematic.
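To get a sense of what that margin means in practice, the textbook 95% margin-of-error formula for a simple random sample, z · √(p(1−p)/n), can be sketched in a few lines. This is a minimal illustration; the sample sizes below are hypothetical, and real polls carry additional error from weighting, non-response, and question design:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample
    of size n, with p the estimated proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# hypothetical poll sizes
for n in (500, 1000, 2500):
    print(f"n={n}: ±{100 * margin_of_error(n):.1f} points")
```

Even a 2,500-person sample only gets the margin down to about ±2 points, so a 48–52 race sits entirely inside the noise.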
For example, pollsters can design a poll to get the results they expect, and they can do this in a number of ways. The easiest, of course, is to ask only people who think like you do. Or you can phrase questions so that they are confusing, misleading, or slanted toward a particular bias. Almost a month after his inauguration, President Trump’s team sent out a poll to people from his mailing list (a biased pool of respondents). The questions, quoted here from Danielle Kurtzleben’s February 17, 2017 NPR article because the original site is no longer available, included “Do you believe that the mainstream media does not do their due diligence fact-checking before publishing stories on the Trump administration?” This is a leading question, implying that the correct response is agreement. They also asked “Were you aware that a poll was released revealing that a majority of Americans actually supported President Trump’s temporary restriction executive order?” While true, this gives respondents only limited information, since considerably more surveys were published that concluded just the opposite. This is your basic “4 out of 5 dentists surveyed” scenario: survey 100, but give the public only the ones who agree with your position.
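The “only ask people who think like you do” trick is easy to see in a toy simulation. Here a hypothetical electorate splits 48–52, but the pollster samples from the candidate’s mailing list, where supporters are (by assumption in this sketch) heavily overrepresented:

```python
import random

random.seed(42)

# hypothetical electorate: 48% support the candidate (1), 52% do not (0)
population = [1] * 480 + [0] * 520

# unbiased poll: a simple random sample of the whole electorate
fair = random.sample(population, 200)

# biased poll: sample only people on the candidate's mailing list
# (assumption: all supporters are on it, but only ~10% of opponents)
mailing_list = [v for v in population if v == 1 or random.random() < 0.1]
biased = random.sample(mailing_list, 200)

print(sum(fair) / len(fair))      # close to the true 0.48
print(sum(biased) / len(biased))  # far above it
```

The questions on such a poll can be perfectly worded; the sampling frame alone guarantees a flattering result.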
So why do it? Why create a poll so obviously flawed? Kurtzleben points out that just because the poll itself is flawed doesn’t mean it’s useless. Biased questions can plant thoughts in respondents’ heads, changing an existing narrative and causing them to think about things in a different way. Or the makers of the survey could be looking specifically at the limited respondent pool to find out what a candidate’s followers already believe, and thereby adjust their message accordingly, or turn into the curve and avoid a crash if the answers aren’t what they expect to find.
So then, what’s the point of polling in general? Honestly? Not much. In fact, the House of Lords in the UK is thinking of implementing a ban. The Guardian article explains that the “committee on polling and digital media has called for the polling industry to ‘get its house in order’ or else the case for banning polling in the run-up to elections ‘will become stronger.’” But then again, they also point out that it really doesn’t matter.
Will Jennings and Christopher Wlezien looked at polling data going back to 1942 in an article for Nature this past March and came to the conclusion that nothing has changed. Polling has always been faulty and doesn’t provide any real insight into how people will actually vote. It seems to exist primarily for the news media, to give them something to report. If they don’t have the latest poll numbers, they’ll look at Twitter followers, or campaign bumper stickers, or maybe even go back to the old ways and cast bones or read tea leaves.
No matter what, people are going to vote the way they vote, and we’re never, especially in close races, going to be able to predict it with any certainty.