UnSkewed No More

To his credit, Dean Chambers has published a column that flatly states that he was wrong and in which he congratulates (although still does not apologize to) Nate Silver.

This quote struck me, however, for a couple of reasons:

UnSkewedPolls.com is just one web site and one project I did, and depending on your point of view it was proven wrong, it has run its course or it will fade away. It is merely a web site and only part of what I do and will do. There will be other web sites, and the more important web sites I work with will remain in place.

First:  “depending on your point of view it was proven wrong”?

Second:  while he is certainly right that the site is toast, I think he is downplaying the whole situation.  It is clear that he was poised to be crowned a right-wing pundit demigod had he been right.  He was getting attention from people like Rush Limbaugh (and didn’t the site lead to the Examiner gig?).  I think he is radically underplaying where he thought he was going and where he now is.

About Steven L. Taylor
Steven L. Taylor is Professor and Chair of Political Science at Troy University. His main areas of expertise include parties, elections, and the institutional design of democracies. He is the author of Voting Amid Violence: Electoral Democracy in Colombia and is currently working on a comparative study of the US and 29 other democracies. He earned his Ph.D. from the University of Texas and his BA from the University of California, Irvine. He has been blogging at PoliBlog since 2003. Follow Steven on Twitter

Comments

  1. john personna says:

    At a Scientific American blog:

    Why Math is Like the Honey Badger: Nate Silver Ascendant

  2. Anon says:

    He writes:

    What I was wrong on was the concept that the voter turnout would show something close to even in percentages of Democrats and Republicans who actually voted in the election.

    Well, actually he was wrong in failing to understand that party ID changes, and thus cannot be used as a basis for poststratification sampling.
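    Anon’s point about re-weighting can be illustrated with a toy calculation. All numbers below are invented for illustration; the point is only that forcing a poll onto an assumed party-ID mix changes the topline, which is valid only if party ID is a fixed trait rather than something that shifts with the political climate:

```python
# Toy illustration (invented numbers) of "unskewing": recomputing a
# poll's topline under an assumed party-ID mix instead of the observed one.
def reweight(topline_by_party, mix):
    """Candidate's overall share given per-party support and a party-ID mix."""
    return sum(topline_by_party[p] * mix[p] for p in mix)

# Hypothetical poll: Obama's share within each party-ID group.
obama_by_party = {"D": 0.92, "R": 0.06, "I": 0.45}
observed = {"D": 0.38, "R": 0.32, "I": 0.30}      # sample leans Democratic
assumed_even = {"D": 0.35, "R": 0.35, "I": 0.30}  # the "even turnout" assumption

as_polled = reweight(obama_by_party, observed)     # 0.504
unskewed = reweight(obama_by_party, assumed_even)  # 0.478
print(round(as_polled, 3), round(unskewed, 3))
```

    The mechanical shift is real, but the conclusion only follows if the assumed mix is right; 2012 showed it was not.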

    More fundamentally, this is symptomatic of the anti-intellectualism and anti-expertise that run through the current Republican base. No need to be an expert or to have taken a stats class to do polling. Any schmoe who knows how to work with percentages can do a better job than those so-called “experts”.

    It is remarkable how many Republicans believed Chambers uncritically. That doesn’t mean that someone must have a fancy degree in order to challenge the “experts”. But if you are going to do so, you had better make damn sure that you have your ducks in a row, that you’ve taken the time to learn the subject, and that you listen very carefully when they tell you why you are wrong.

    I was actually a bit surprised that no established Republican pollster stepped up and explained why Chambers was full of it. I’m sure that many Republican pollsters were aware of this. Were they afraid of the backlash? Better to stay silent, I guess.

  3. Anon says:

    Regarding Silver, as Joyner has mentioned, getting the predictions right the night before the election is not really anything special. If I understand correctly what Silver’s percentages mean, to really show that Silver’s model is good, someone would have to demonstrate that the percentages issued some time prior to the election match the eventual outcomes.

    In other words, say that one week before the election, Silver predicts that a candidate has a 75% chance of winning. For that probability to be correct, the candidate must also lose in some counterfactual histories. Obviously, there is no way to confirm this for a single candidate, but it should be possible given enough candidates.

    Say that there were 100 candidates and, one week before their (independent) elections, Silver predicted that each had an 80% chance of winning. If Silver’s model is good, about 20 of those candidates (on average) should actually lose.
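    The check Anon describes can be sketched with a quick simulation. The numbers (100 races, an 80% forecast) are the hypotheticals from the comment, not real forecasts:

```python
import random

random.seed(42)

def simulate_losses(n_races=100, p_win=0.80, trials=10_000):
    """Average number of favored candidates who lose per trial, when each
    truly has probability p_win of winning independently."""
    total_losses = 0
    for _ in range(trials):
        total_losses += sum(random.random() >= p_win for _ in range(n_races))
    return total_losses / trials

# If the 80% forecasts are well calibrated, close to 20 of the
# 100 favored candidates should lose on average.
print(simulate_losses())
```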

  4. Console says:

    @Anon:

    That’s what bugs me the most: the complete lack of anything resembling a scientific mind with regard to this stuff. Even the media got in on it, and the polls turned into “one side is making one set of assumptions and the other side is making another.” But that wasn’t the case: one side was extrapolating from the polling data, and the other side was using pre-made conclusions to manipulate the polling data. The latter is fundamentally unscientific. The failure to grasp this concept simply drove me nuts. But the crazy thing is, I can’t tell if it’s conservatives being irrational, or if conservatism has truly become a postmodern movement.

    As for Silver, I think he’s more than earned his praise because his models don’t stop at who’s going to win. He gives odds for states, a popular vote number, and an electoral count. Plus his call on Florida shows why it isn’t enough to just average polls together. RealClearPolitics had Florida at +1.5 for Romney. Silver’s model had it as a tossup but just barely favoring Obama.

  5. Anon says:

    @Console: Silver is definitely doing more than simple averaging, but he himself is humble about the difficulty of the final predictions. See his appearance on Colbert. Also, note that Pollster also called each state correctly. I like Silver, but I think people miss his most interesting and substantive contribution, which is the predictions he makes substantially before the outcomes. Unfortunately, I don’t know of any public information about how accurate those turned out to be. In other words, when he predicted probability P of winning N weeks before the outcome, how accurate did that turn out to be?

  6. Murray says:

    ” getting the predictions right the night before the election is not really anything special.”

    And that is why some got it terribly wrong?

    Only good models get it right.

  7. john personna says:

    @Anon:

    “Math don’t give a s$%.”

  8. Anon says:

    @Murray: If I can substitute “competent” for “good”, then I agree. Only competent models got it right. (The only incompetent one that I know of is Chambers’s model.) Note that even a simple average of polls such as RCP got it right except for FL, which even Silver admitted was a toss-up. In 2008, Silver also missed a close state. Simon Jackman (Pollster.com) got 51/51 this time. Linzer (Votamatic) also got 51/51.

    Is Silver competent? Of course. How many people in the world could produce predictions just as accurately the night before the election? I believe that a significant fraction of those with graduate training in statistics could do it. (According to this document there are thousands of those in the US alone.) On Colbert’s show, Silver himself seems to agree that it isn’t that hard. He says that you take an average (in many cases it is a weighted average) of the polls, and count to 270. You don’t need to be Galileo (his choice of historical figure, not mine).
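    The “average the polls and count to 270” recipe Silver describes on Colbert can be sketched in a few lines. The state subset, margins, weights, and baseline electoral-vote totals below are all made up for illustration:

```python
# Toy "weighted average, then count to 270" (all numbers invented).
ELECTORAL_VOTES = {"OH": 18, "FL": 29, "VA": 13, "CO": 9}  # hypothetical subset
SAFE_DEM_EV = 237  # illustrative baseline from uncontested states

def weighted_margin(polls):
    """polls: list of (dem_margin, weight) pairs; returns weighted mean margin."""
    total_w = sum(w for _, w in polls)
    return sum(m * w for m, w in polls) / total_w

state_polls = {
    "OH": [(+2.0, 1.0), (+3.0, 0.5)],
    "FL": [(-1.0, 1.0), (+0.5, 1.0)],
    "VA": [(+1.5, 1.0)],
    "CO": [(+2.5, 0.8)],
}

dem_ev = SAFE_DEM_EV + sum(
    ev for state, ev in ELECTORAL_VOTES.items()
    if weighted_margin(state_polls[state]) > 0
)
print(dem_ev, "EVs:", "over 270" if dem_ev >= 270 else "short of 270")  # 277 here
```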

    Note that among the competent models there are still differences, and it may turn out that Silver’s model is indeed better. However, just getting 50/51 in 2008 and 51/51 in 2012 isn’t evidence that his model is better than the other competent ones. For that, it’s going to require more analysis, which I haven’t seen yet.

    For example, I haven’t seen any analysis of the accuracy of the probabilities that he publishes. When he says one week before an election that a candidate has probability P of winning, how accurate is that P? That’s what would be really interesting to know.
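    The kind of calibration check Anon asks for could look like the following: bucket the published probabilities, then compare each bucket’s average forecast to the observed win rate. The forecast/outcome pairs below are invented:

```python
# Minimal calibration check: does a "P% chance" group win about P% of the time?
def calibration_table(records, n_bins=5):
    """records: list of (prob, won) with prob in [0, 1] and won in {0, 1}.
    Returns (mean forecast, observed win rate, count) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, won in records:
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, won))
    table = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            win_rate = sum(w for _, w in b) / len(b)
            table.append((round(mean_p, 2), round(win_rate, 2), len(b)))
    return table

# Invented example: forecasts near 0.8 where 2 of 3 candidates won.
sample = [(0.78, 1), (0.81, 1), (0.80, 0), (0.82, 1), (0.30, 0), (0.35, 1)]
for mean_p, win_rate, n in calibration_table(sample):
    print(f"forecast ~{mean_p}: won {win_rate} of the time (n={n})")
```

    With many races, a well-calibrated forecaster’s win rates track the forecast in each bucket; with only 51 states per cycle, the interesting test is the earlier, week-by-week probabilities across many races.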

  9. swearyanthony says:

    Sorry but until Chambers posts a thoughtful and heartfelt apology for his vicious and homophobic attack, he can die in a fire. He can make his own models all he likes but that piece of his was utterly vile.

  10. @Anon: I think an important point needs to be made: the real attacks were not on the models themselves, but on the actual data.

    Chambers’s problem was that he did not trust the polling. He was certain that the samples were wrong, so he re-weighted them.

    Really Silver was just the messenger that was attacked. The unskewed types were actually attacking pollsters.

  11. Geek, Esq. says:

    When will Scott Rasmussen and Neil Newhouse publish similar mea culpas for misleading Republicans and conservatives in general? In Newhouse’s case, he bamboozled the Republican party’s financial sponsors.

    Oops.

    Rasmussen has already dropped hints he’s leaving the telephone polling business. But, he was always known as a hack.

    Newhouse is a more spectacular fall from grace–from one of the most respected GOP pollsters to a gilded Dean Chambers, a Dick Morris-style con man.

  12. Geek, Esq. says:

    @Murray:

    Or common sense. Two observations would have helped someone pick the race:

    1) In a normally contested election (i.e., not a blowout like 1984), not every close race will break the same way. So, the question is which close races will break which way.

    2) Obama had a vastly superior information, data crunching, voter identification, voter registration, and voter mobilization organization. Vastly. Which makes for a useful tie-breaker.

    The state polls through Monday showed that North Carolina was the most problematic close state for Obama. Every other swing state but Florida was at worst a tie for Obama, and his ground game broke the tie in all of them with room to spare. In general, the movement over the weekend was towards Obama (I went from nervously picking 281 with upside to 303 EVs for Obama here to thinking it was going to be 332, especially given the national poll movement).

    In the case of Florida, the polls cut ever so slightly against Obama, with a few showing him ahead but more showing him behind. But his GOTV effort there helped him overperform the polls.

    Models are best used to supplement and inform, not replace, actual analysis.

  13. Rob says:

    @Anon:

    You really need to go back to stats class… that’s not how an 80%-20% probability of winning works…
