2012 Pollster Rankings: Rasmussen And Gallup Among The Least Accurate
Nate Silver is out with his traditional review of the performance of prominent pollsters and, as with the survey I cited in the aftermath of Election Day, Rasmussen Reports and Gallup ended up faring the worst:
Among telephone-based polling firms that conducted a significant number of state-by-state surveys, the best results came from CNN, Mellman and Grove Insight. The latter two conducted most of their polls on behalf of liberal-leaning organizations. However, as I mentioned, since the polling consensus underestimated Mr. Obama’s performance somewhat, the polls that seemed to be Democratic-leaning often came closest to the mark.
Several polling firms got notably poor results, on the other hand. For the second consecutive election — the same was true in 2010 — Rasmussen Reports polls had a statistical bias toward Republicans, overestimating Mr. Romney’s performance by about four percentage points, on average. Polls by American Research Group and Mason-Dixon also largely missed the mark. Mason-Dixon might be given a pass since it has a decent track record over the longer term, while American Research Group has long been unreliable.
It was one of the best-known polling firms, however, that had among the worst results. In late October, Gallup consistently showed Mr. Romney ahead by about six percentage points among likely voters, far different from the average of other surveys. Gallup’s final poll of the election, which had Mr. Romney up by one point, was slightly better, but still identified the wrong winner in the election. Gallup has now had three poor elections in a row. In 2008, their polls overestimated Mr. Obama’s performance, while in 2010, they overestimated how well Republicans would do in the race for the United States House.
Among the other pollsters that ranked low in accuracy this cycle were American Research Group, which has had a spotty reputation for years now, and Mason-Dixon, which has generally been well regarded but seemed to always come back as an outlier in 2012 for some reason. The fact that Rasmussen is on the list is no surprise, of course. The company had generally done well in 2004, and in 2008 it was among the most accurate pollsters out there. By 2010, though, its Republican “house effect” became far more pronounced, to the point where it was at the bottom of Silver’s accuracy ratings. The same thing happened this year, as anyone observing the polls could have told you. The main reason for this, of course, is Rasmussen’s decision to weight its polls for party ID based on a model that assumes, contrary to the findings in most polls, that there are more Republicans in the country than Democrats. As long as the polls operate with that assumption, they are going to be consistently unreliable.
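To see why the party-ID assumption matters so much, here's a minimal sketch of how reweighting a poll's topline to an assumed party mix works. This is not Rasmussen's actual model; all of the within-party vote shares and party mixes below are invented for illustration.

```python
# Illustrative sketch (NOT Rasmussen's actual methodology): a poll's
# topline is the within-party candidate shares weighted by an assumed
# party-ID distribution of the electorate. All numbers are invented.

def reweight_topline(raw_shares, assumed_party_mix):
    """Topline share for each candidate: sum over parties of
    (candidate's share within that party) * (party's assumed share
    of the electorate)."""
    return {
        cand: sum(raw_shares[party][cand] * assumed_party_mix[party]
                  for party in assumed_party_mix)
        for cand in ("obama", "romney")
    }

# Hypothetical within-party vote shares from a raw sample.
raw_shares = {
    "dem": {"obama": 0.92, "romney": 0.08},
    "rep": {"obama": 0.07, "romney": 0.93},
    "ind": {"obama": 0.50, "romney": 0.50},
}

# Two assumed electorates: one with more Democrats, one assuming
# (as Rasmussen's model reportedly did) more Republicans.
dem_leaning_mix = {"dem": 0.38, "rep": 0.32, "ind": 0.30}
gop_leaning_mix = {"dem": 0.32, "rep": 0.38, "ind": 0.30}

print(reweight_topline(raw_shares, dem_leaning_mix))
print(reweight_topline(raw_shares, gop_leaning_mix))
```

With these made-up numbers, the same raw interviews produce an Obama lead under the first mix and a Romney lead under the second; swinging six points of party ID from one side to the other moves the margin by roughly five points, which is how a baked-in assumption can produce a persistent house effect.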
Interestingly, Silver’s review of the polls found a surprising degree of accuracy from a form of polling that most of us have been dismissing for years, online polling:
[S]ome of the most accurate firms were those that conducted their polls online.
The final poll conducted by Google Consumer Surveys had Mr. Obama ahead in the national popular vote by 2.3 percentage points – very close to his actual margin, which was 2.6 percentage points based on ballots counted through Saturday morning.
Ipsos, which conducted online polls for Reuters, came close to the actual results in most places that it surveyed, as did the Canadian online polling firm Angus Reid. Another online polling firm, YouGov, got reasonably good results.
Among the nine polling firms that conducted their polls wholly or partially online, the average error in calling the election result was 2.1 percentage points. That compares with a 3.5-point error for polling firms that used live telephone interviewers, and 5.0 points for “robopolls” that conducted their surveys by automated script. The traditional telephone polls had a slight Republican bias on the whole, while the robopolls often had a significant Republican bias. (Even the automated polling firm Public Policy Polling, which often polls for liberal and Democratic clients, projected results that were slightly more favorable for Mr. Romney than what he actually achieved.) The online polls had little overall bias, however.
The difference between the performance of live telephone polls and the automated polls may partly reflect the fact that many of the live telephone polls call cellphones along with landlines, while few of the automated surveys do. (Legal restrictions prohibit automated calls to cellphones under many circumstances.)
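The two yardsticks in Silver's comparison are worth keeping distinct: average error (how far off the polls were, in either direction) and bias (which direction they consistently missed in). A quick sketch of both, using invented final margins against the 2.6-point actual margin mentioned in the quoted passage:

```python
# Illustrative sketch of the two accuracy measures Silver reports.
# Margins are Obama-minus-Romney in percentage points. The three
# hypothetical poll margins are invented for illustration; the 2.6
# actual margin is from the quoted passage.

def error_and_bias(predicted_margins, actual_margin):
    """Return (average absolute error, average signed bias).
    A positive bias means the polls were too favorable to Romney."""
    n = len(predicted_margins)
    avg_error = sum(abs(p - actual_margin) for p in predicted_margins) / n
    avg_bias = sum(actual_margin - p for p in predicted_margins) / n
    return avg_error, avg_bias

# Three hypothetical firms' final margins vs. the +2.6 actual result.
avg_error, avg_bias = error_and_bias([2.3, 0.0, -1.0], 2.6)
print(round(avg_error, 1), round(avg_bias, 1))
```

Note that a set of polls can have a large average error but little bias if the misses cancel out in both directions; Silver's point about the robopolls is that their misses were both large and mostly in the same (Republican) direction.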
It’s worth noting that the type of online polling we’re talking about here is far different from the “polls” that you often see thrown up on news sites and blogs, where anyone who happens by can respond, sometimes more than once if the site isn’t thorough enough about preventing multiple responses from a single person. These are polls conducted online where the pollsters are trying to come up with the same kind of representative samples that telephone pollsters have worked at putting together. The fact that they ended up being more accurate than telephone pollsters suggests strongly that the skepticism that I and others have expressed about this form of polling may no longer be warranted, and that these types of polls deserve to be taken more seriously in the future.
One implication of these findings regarding online polls is that they may offer a future solution to the problem many pollsters have reported in recent years of declining response rates. People may be more willing to participate in an online polling panel than to spend 15 or 20 minutes answering questions about politics while they’re trying to get dinner ready in the evening. That doesn’t mean that we’ll see an end to telephone polling altogether. As Silver notes, such polls continue to serve a purpose and will continue to do so in the future, at least as far as live telephone polls that call both landlines and cell phones are concerned. The numbers above make clear, though, that automated polls such as those conducted by Rasmussen and Public Policy Polling tend to be among the least reliable polling methods. While it’s unlikely that pollsters will abandon robo-dialing entirely, since it’s far cheaper than live interviews, it’s possible that we’ll see some of these robo-pollsters make a transition to online polling as that field becomes more refined. Until then, it seems rather clear that we need to discount the reliability of such polls, and to start taking scientifically conducted online polls far more seriously.