What Went Wrong With The Polls In The British Election?

Pollsters on both sides of the Atlantic have been trying to figure out why the polls released right up until the eve of the British General Election were so wrong. Here's one theory, and it's very compelling.


In the week since the British General Election returned David Cameron to No. 10 Downing Street thanks to a much stronger than anticipated Conservative Party victory, pollsters and political analysts have been trying to figure out what happened. As you will recall, the polling in the weeks leading up to the election, and indeed on the very eve of the vote itself, showed a far closer battle between Labour and the Tories than what we actually ended up with. On both sides of the Atlantic, pollsters and analysts have been openly wondering what they got wrong, with some going so far as to suggest that polling itself may not be as valuable as it used to be. As The New York Times put it, the pollsters were as much losers last Thursday as Ed Miliband and the Labour Party. At the very least, the miss has caused some to wonder what it might mean for polling here in the United States, and around the world. Nearly as soon as the election results were final, the major British polling groups announced that they would be reviewing the election to try to figure out what went wrong. That report should prove valuable, but so far the best explanation I’ve seen for why the polls failed so spectacularly comes from Mark Mellman, a Democratic political strategist here in the U.S.:

So where did our cousins go wrong?

First, I believe they were operating on the wrong level of analysis. Their data were on one level and what they were trying to predict was on another. The polls were looking at the percentage of the national vote each party was earning, while analysis and reporting emphasized the number of seats each would receive in Parliament.

The whole U.K. polling enterprise is akin to predicting the number of House seats each U.S. party will get using only the generic vote.

Much of Britain’s shock after the votes were tallied derived from the fact that a relatively even horse race in the polls produced a large Conservative advantage. Single-member, first-past-the-post districts are designed to magnify relatively slender pluralities of the national vote.

The Tories got less than 37 percent of the vote but 51 percent of Parliamentary seats, whereas Labour picked up more than 30 percent of the vote but less than 36 percent of the seats.

Multiple parties complicate the picture, underlining the problem of extrapolating from votes to seats. The Scottish National Party (SNP) garnered 1.45 million votes, while the UK Independence Party (UKIP) got a much larger 3.88 million votes. The SNP snapped up 56 seats and the UKIP just one.

A second problem is one I’ve addressed before: undecideds. Britain’s national polls show that not a single voter was undecided.

It wasn’t so. British poll reports simply eliminate undecideds.

Dig through Lord Ashcroft’s final national poll, which gave Labour a 1-point edge, and you’ll find 9 percent undecided, 9 percent refusing to say how they would vote and 9 percent saying they would not vote at all. In response to a different question, 21 percent said they might well “end up voting differently” on Election Day — more than enough voters to transform what looked like a tie into a 7-point margin.

At least British pollsters would be wiping less egg from their faces had they been reporting a 27-27 tie with 18 percent undecided.

Third is the leadership question. Apparently confident that Britons follow the socially acceptable path in their country and vote the party, not the person, British pollsters pay relatively little attention to attitudes toward those atop the ticket. Perhaps they deserve more focus.

Just two days out, one poll that did inquire found voters evenly divided on Prime Minister David Cameron’s performance, while they were 12 points more negative than positive about Labour Leader Ed Miliband.

Among undecideds, the results were even more lopsided, with Cameron -3 and Miliband -26.

These strong preferences for Cameron should have entered into pollsters’ projections and should certainly have tipped them off that undecideds had opinions, and opinions quite hostile to Miliband.

Mellman’s first point is perhaps the most interesting one. For reasons that I think are rather obvious, British pollsters seem to limit their surveys to those conducted on a national level, basically asking their respondents which party’s candidate they intend to vote for in the General Election. This is a good way to determine what the sentiment of the country as a whole might be, and it would be a far stronger predictor of the outcome of an election if Great Britain had a political system where party control of Parliament were decided on a proportional basis, based on the percentage of the vote that each party received nationally. That’s not the way things work in the U.K., though, and because of that the value of national polling in helping to predict how an election will go is reduced. Like the House of Representatives here in the United States, membership in the House of Commons is determined by elections in individual constituencies and, unless each constituency is roughly representative of the nation as a whole, nationwide polling isn’t going to tell you very much about how individual races will turn out. As Mellman notes, we have the same issue here in the U.S. with the Generic Congressional Ballot, which does a fine job of telling us where voter sentiment as a whole lies when it comes to who voters would like to see control the House, but does next to nothing to tell us what’s going on in each of the 435 individual Congressional races, many of which aren’t really competitive at all.
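
To make that votes-to-seats disconnect concrete, here is a minimal sketch in Python, using made-up numbers rather than the actual 2015 returns, of how first-past-the-post rewards geographically concentrated support (the SNP pattern) and punishes support spread thinly everywhere (the UKIP pattern):

    # Toy illustration (made-up numbers, not actual 2015 results) of why
    # national vote share maps so poorly onto seats under
    # first-past-the-post.

    def seats_won(constituencies, party):
        """Count constituencies where `party` has the largest vote share."""
        return sum(1 for c in constituencies if max(c, key=c.get) == party)

    # Ten hypothetical constituencies. "Concentrated" wins big in three
    # seats and is negligible elsewhere; "Dispersed" polls 15% everywhere.
    constituencies = []
    for i in range(10):
        if i < 3:
            constituencies.append({"Concentrated": 0.55, "Dispersed": 0.15, "Others": 0.30})
        else:
            constituencies.append({"Concentrated": 0.02, "Dispersed": 0.15, "Others": 0.83})

    for party in ("Concentrated", "Dispersed"):
        national = sum(c[party] for c in constituencies) / len(constituencies)
        print(f"{party}: {national:.0%} of the national vote, "
              f"{seats_won(constituencies, party)} of 10 seats")

    # Concentrated: 18% of the national vote, 3 of 10 seats
    # Dispersed:    15% of the national vote, 0 of 10 seats

A national poll would show these two hypothetical parties within a few points of each other; the seat count tells a completely different story.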

Mellman’s other two points, about how British pollsters were apparently ignoring both the undecided vote and other polling information showing the public to be far more favorably inclined toward David Cameron than Ed Miliband, also seem quite significant. Failing to take into account the fact that there were large numbers of voters who were either undecided or likely to change their minds is somewhat baffling, for example. The media would also seem to deserve some of the blame here for only reporting the top line poll numbers and not digging deeper into the results for information that, at the very least, would have alerted people to take those top line numbers with a grain of salt. This is a problem that we’ve had in the United States as well, of course, with the political media focusing almost exclusively on which candidate is up or down by a few, largely meaningless, percentage points in a poll rather than taking a broader approach.
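
As a rough illustration of the undecided problem, here is another small sketch, again with hypothetical numbers loosely patterned on the Ashcroft figures Mellman cites, showing how dropping undecideds and renormalizing can turn a lopsided electorate into a reported dead heat:

    # Hypothetical raw poll: a 27-27 tie with 18% undecided, as in
    # Mellman's example.
    raw = {"Conservative": 0.27, "Labour": 0.27, "Other": 0.28, "Undecided": 0.18}

    # Reported topline: eliminate undecideds and renormalize to 100%.
    decided = {p: v for p, v in raw.items() if p != "Undecided"}
    total = sum(decided.values())
    reported = {p: v / total for p, v in decided.items()}
    print({p: f"{v:.0%}" for p, v in reported.items()})
    # Conservative and Labour both show at roughly 33%: an apparent tie.

    # But suppose the undecideds actually break two-to-one for the
    # Conservatives, consistent with their far harsher views of Miliband:
    actual = dict(raw)
    actual["Conservative"] += raw["Undecided"] * 2 / 3
    actual["Labour"] += raw["Undecided"] * 1 / 3
    del actual["Undecided"]
    print({p: f"{v:.0%}" for p, v in actual.items()})
    # Conservative 39%, Labour 33%: a 6-point margin hiding inside the
    # same poll that was reported as a dead heat.

Nothing in the reported topline is false, exactly, but renormalizing away nearly a fifth of the sample buries the information that mattered most.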

There are also other factors at play in Britain that helped to make the pre-election polling less reliable. As in the United States, polling companies over there seem to struggle at times with exactly how to factor the declining use of landline phones among younger voters into their analysis, for example. The phenomenon of polling companies not wanting to stick their necks out with results that deviate from what other companies are showing also seems to be darn near universal. The upcoming review of the pre-election polling will likely reveal other areas in need of change. What this episode does, though, is remind us yet again that, while polls are useful tools for understanding what’s going on in a race or how the public feels about a specific issue, they ought to be viewed as just that, tools, rather than the Oracle at Delphi. Unfortunately, it seems as though far too many people who observe and write about politics, including at times yours truly, tend to treat them as the latter.

FILED UNDER: Europe, Public Opinion Polls
About Doug Mataconis
Doug Mataconis held a B.A. in Political Science from Rutgers University and J.D. from George Mason University School of Law. He joined the staff of OTB in May 2010 and contributed a staggering 16,483 posts before his retirement in January 2020. He passed far too young in July 2021.

Comments

  1. Trumwill says:

    It’s nice that we’ve moved on from “believing the polls might not be right is scientific denialism.”

    I’m leaning towards a combination of the Shy Tory Effect and undecided-elimination. Survey Monkey got it right even though they weren’t really trying all that hard, and they did online polls. My guess is that phone respondents were saying they were undecided when they were actually leaning toward or decided on the Conservatives, and were thus eliminated from consideration.

  2. steve says:

    I had just assumed that Romney’s pollsters couldn’t find work here after their performance, and they had moved across the pond.

  3. DrDaveT says:

    British poll reports simply eliminate undecideds.

    Ah, so it was malpractice.

    Did these polls report margins of error, along with point estimates? If they didn’t, that’s malpractice. If they did, but the margin of error didn’t factor in the various flavors of undecided, that’s malpractice.

    If they did report margins of error that correctly incorporated the number of undecideds observed, it’s still malpractice — by the journalists who just reported the point estimates. Given that being terrified by math seems to be a prerequisite for pursuing a career in journalism, this isn’t all that unlikely either.

  4. Hal_10000 says:

    @DrDaveT:

    This is why Silver always talks in probabilities, which includes the possibility that the polls are off. In this case, a Tory majority was just outside his probability distribution.

  5. DrDaveT says:

    @Hal_10000:

    This is why Silver always talks in probabilities

    Sure. But there are degrees of malpractice. I suspect that Silver explicitly accounted for the undecideds, but not for the possible correlation between being undecided and leaning Tory. Or at least underestimated the correlation.

    (I remember Nate from before he got into elections, when we only argued about baseball…)

  6. I think part of the solution is to pay people a nominal fee to take the poll. It would help response rates, and the results are likely to be more accurate due to the psychology behind how most people react to reciprocity.

  7. Andre Kenji says:

    1-) Predicting the results of individual districts is much more difficult than predicting the Electoral College. Nate Silver does not work with House elections in the US.

    2-) Ed Miliband had negative favorability ratings, much worse than David Cameron’s. That’s the poll number that everyone ignored.