The old political journalism was based on the active pursuit of misinformation. The Gallup poll swings from +4% to +1% and back to +4%, and National Journal Hotline writes breathless stories about what happened. Rasmussen says +3% and Pew says -1%, and Politico writes earnestly about how savvy insiders disagree. All that is going on is that, given the small sample sizes of polls, especially tracking polls, sampling error and the house effects it shields have their day.
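The arithmetic behind that point is worth making explicit. A minimal sketch (sample size of 1,000 is an assumption for illustration; real tracking polls vary): the 95% margin of error on one candidate's share is about ±3 points, and the *lead* between two candidates, being a difference of two shares, can swing by roughly twice that on sampling noise alone.

```python
import math
import random

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Assume a typical tracking poll of ~1,000 respondents (illustrative).
n = 1000
moe = margin_of_error(0.5, n)        # about ±3.1 points on one share
lead_moe = 2 * moe                   # roughly ±6 points on the lead

print(f"MOE on one candidate's share: ±{moe:.1%}")
print(f"Approximate MOE on the lead:  ±{lead_moe:.1%}")

# Simulate five successive polls of a race whose true lead is a steady +4:
random.seed(0)
true_share = 0.52                    # a 52-48 race, i.e. a constant +4 lead
leads = []
for _ in range(5):
    sample = sum(random.random() < true_share for _ in range(n))
    leads.append(2 * sample / n - 1) # lead = own share minus opponent's
print(["%+.1f" % (100 * d) for d in leads])
```

Run that and the simulated "polls" wander over a several-point range even though nothing in the underlying race has moved — exactly the swings that get written up as news.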
Thus we have a world in which Peggy Noonan writes easily about how magic dolphins have told her that Romney will win; Joe Klein is held up as an authority on the effect of Sandy on Pennsylvania turnout and thus margin; Karen Tumulty seeks out Bill Galston and Mark Mackinnon as experts on the likelihood of a popular-electoral vote split; David Brooks denies the possibility of forecasting in a world in which anything can happen; Joe Scarborough dumps on the idea of an odds ratio; Ramesh Ponnuru asks "who are you going to trust: the voters as interviewed by the pollsters or the Romney campaign operatives who talk to me?"; George Will knows that he knows much less than Michael Barone but doesn't think he can say so, and so decides to break with Michael by calling Minnesota for Romney; and Michael Barone calls Pennsylvania and Wisconsin for Romney and afterwards claims that his prediction was "reasonable, just as other predictions that either Obama or Romney would win, or would win with more than 300 electoral votes, were reasonable; you could look at the polling data… and come up with pretty different conclusions".
Jeff Bercovici says that that day of mindless, incoherent blathering is over:
Three Lessons From The Nate Silver Controversy: Nate Silver has emerged from the election with his model and methods vindicated…. Here are a few takeaways from the whole 538 vs. the Pundits debate:
- The modelers are here to stay. Get used to it, pundits! The electoral modeling done by Silver and his fellow polling aggregators such as Drew Linzer and Sam Wang (who were also largely correct) provides a straightforward way to accurately assess the state of the horserace…. By 2016, everybody – media, campaigns, 15-year-old coders – is going to be doing this.
- Elections are less surprising than most of us think. I gave Silver’s numbers a lot of credence. But having covered campaigns in the past, I nursed some lingering skepticism, because close elections always have an element of surprise, right? Were Obama’s chances really over 90% going into election day? Could all those narrow leads in swing states consistently hold up and ultimately translate to victory? Well, yes: with lots of polling data, especially at the state level, you can produce an uncannily accurate prediction…. This suggests a basic truism of politics is simply wrong: we assume big uncertainties and volatility in elections that were never really there.
- Political expertise must – and will – be redefined. The silly debate over Silver’s model showed a political, pundit and media class… ignorance and fear masked by loud harumphing…. Peggy Noonan was seriously arguing that a mild proliferation of Romney yard signs she observed in liberal Northwest Washington, plus magical vibrations of true Americanness emanating from all over, were signs of imminent Romney victory. Other Silver critics seemed not to understand basic statistical concepts. Others tried to change the subject. Some political reporters simply ignored the existence of polling averages altogether, pumping up the uncertainties of divergent poll results into epistemological voids. I don’t have high hopes in the short run. But people want Silver’s kind of knowledge; it generates huge traffic to the New York Times site. Media organizations understand that, if nothing else. In the long run political journalists and campaign consultants are going to need to know rudimentary statistics; they will be referring to models as a baseline for the state of a given race. And, one hopes, fact-checking those random gut feelings about a candidate’s “momentum.”
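Bercovici's second point — that with lots of polling data the apparent volatility disappears — follows directly from averaging. A minimal sketch of the principle (my illustration, not Silver's actual model, which also weights pollsters and corrects for house effects): the standard error of an average of k independent polls shrinks like 1/sqrt(k).

```python
import math

def lead_standard_error(n, k=1, p=0.5):
    """Approximate SE of the estimated lead from averaging k polls of size n.

    Assumes independent polls with no house effects -- a simplification;
    real aggregators must model correlated pollster biases.
    """
    per_poll = 2 * math.sqrt(p * (1 - p) / n)  # SE of (share - opponent share)
    return per_poll / math.sqrt(k)

one_poll = lead_standard_error(n=800)            # a single state poll
twenty_polls = lead_standard_error(n=800, k=20)  # a state-level average

print(f"SE of the lead, one poll:  ±{one_poll:.1%}")
print(f"SE of the lead, 20 polls:  ±{twenty_polls:.1%}")
```

With the standard error pushed below a point, a steady two-to-three-point lead in a swing state is no longer "within the margin of error" — which is how the aggregators could put Obama's chances above 90% while any individual poll still looked like a toss-up.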