Grading the Pollsters for the 2006 Elections

There were some interesting trends: Phone polls tended to be better than online surveys, and companies that used recorded voices rather than live humans in their surveys were standouts. Nearly everyone had some big misses, though, such as predicting that races would be too close to call when in fact they were won by healthy margins. Also, I found that being loyal to a particular polling outfit may not be wise. Taking an average of the five most recent polls for a given state, regardless of the author — a measure compiled by Pollster.com — yielded a higher accuracy rate than most individual pollsters.
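For the curious, the mechanics of that consensus measure are simple: sort a race's polls by date, keep the five newest, and average their predicted margins. Here's a minimal sketch in Python; the poll records are made up, and this isn't necessarily Pollster.com's exact method:

    from statistics import mean

    # Hypothetical poll records for one race: (date, predicted margin in points).
    polls = [
        ("2006-10-12", 5.0),
        ("2006-10-25", 3.0),
        ("2006-10-30", 6.0),
        ("2006-11-01", 4.0),
        ("2006-11-03", 2.0),
        ("2006-11-05", 4.0),
    ]

    # Average the five most recent polls, regardless of which firm conducted them.
    latest_five = sorted(polls)[-5:]   # ISO dates sort chronologically
    consensus = mean(margin for _, margin in latest_five)
    print(consensus)                   # 3.8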

The only thing surprising here is that recorded voices did better than live humans. Given the small number of pollsters considered here, though, I'm not sure there's enough data to make a judgment on this issue. Smart pollsters already locate their call centers in places where the voices people hear are more or less accent-free. It's hard to believe that robot callers, on whom I immediately hang up, are more likely to get a response.

Otherwise, the results are as expected. Indeed, it's amazing that Internet polls come even close to being accurate, since they are by definition self-selected samples. Telephone surveys, with their high hang-up rates, could theoretically wind up the same way, but careful replacement of hang-ups with other respondents in the same demographic group should cancel that out.

On to the results: In the Senate races, the average error on the margin of victory was tightly bunched for all the phone polls. Rasmussen (25 races) and Mason-Dixon (15) each missed by an average of less than four points on the margin. Zogby's phone polls (10) and SurveyUSA (18) each missed by slightly more than four points. Just four of the 68 phone polls missed by 10 points or more, with the widest miss at 18 points.

But the performance of Zogby Interactive, the unit that conducts surveys online, demonstrates the dubious value of judging polls only by whether they pick winners correctly. As Zogby noted in a press release, its online polls identified 18 of 19 Senate winners correctly. But its predictions missed by an average of 8.6 percentage points in those polls — at least twice the average miss of four other polling operations I examined. Zogby predicted a nine-point win for Democrat Herb Kohl in Wisconsin; he won by 37 points. Democrat Maria Cantwell was expected to win by four points in Washington; she won by 17. (Zogby cooperated with WSJ.com on an online polling project that tracked some Senate and gubernatorial races.)
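To make that metric concrete, the average error here is just the mean absolute difference between a poll's predicted margin and the actual margin. A minimal sketch in Python, using only the two Zogby races cited above (the firm's full 8.6-point figure covers all of its Senate polls, not just these two):

    # Average error on the margin of victory: mean absolute difference
    # between predicted and actual margins, in percentage points.
    # Data: the two Zogby Interactive races cited above (predicted, actual).
    races = [
        (9.0, 37.0),   # Kohl, Wisconsin: predicted +9, won by 37
        (4.0, 17.0),   # Cantwell, Washington: predicted +4, won by 17
    ]

    avg_error = sum(abs(pred - actual) for pred, actual in races) / len(races)
    print(avg_error)   # (28 + 13) / 2 = 20.5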

The picture was similar in the gubernatorial races (where Zogby polled only online, not by phone). Mason-Dixon’s average error was under 3.4 points in 14 races. Rasmussen missed by an average of 3.8 points in 30 races; SurveyUSA was off by 4.4 points, on average, in 18 races. But Zogby’s online poll missed by an average of 8.3 points, erring on six races by more than 15 points.

Zogby’s online polls “just blew it” in Colorado and Arkansas governor races, Chief Executive John Zogby told me. (See Zogby’s scorecard.) In other races, such as the two Senate races I mentioned, “we had the right direction but a closer race than the final.” One explanation, he said, may be that Zogby’s final online polls collected responses one to two weeks before the election, whereas other polling firms were active until the final week. “We have more work to do” to improve online polling, Mr. Zogby said, but he added, “we believe it’s not only the wave of the future, but the future is very close to now.”

So much for Zogby’s bragging.

FILED UNDER: 2006 Election, Public Opinion Polls
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. madmatt says:

    But they all did better than Rove, who had the #’s …

  2. Michael says:

    It’s hard to believe that robot callers, on whom I immediately hang up, are more likely to get a response.

    The article is stating that robot-calls were more accurate, not more plentiful, than human-conducted ones. The common wisdom in the industry is that respondents are more open and honest with a computer than they are with a human, and this seems to reinforce that idea.