Robert Waldmann sends me to A.C. Thomas, who demonstrates that Nate Silver absolutely nailed it this election. In 84% of states, the presidential vote result fell inside what Nate Silver's model took to be the central 80% of the distribution. You don't get better than that. By contrast, A.C. Thomas says that the outcomes fell inside what Drew Linzer thought would be the central 80% of the distribution only 56% of the time. Linzer--like Silver--got the point estimates right, but his confidence intervals were too tight.
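The coverage check behind those numbers is simple: count how often the actual result lands inside the forecaster's central 80% predictive interval. A minimal sketch, using invented state-level means, standard errors, and outcomes (not Silver's or Linzer's actual figures) and assuming normal forecast distributions:

```python
# Hedged sketch: how often do actual vote shares fall inside a forecaster's
# central 80% predictive interval? All numbers below are made up for
# illustration; they are not the real 2012 forecasts or results.
from statistics import NormalDist

# (predicted mean share, forecast standard error, actual share) -- hypothetical
states = [
    (52.0, 2.0, 51.5),
    (48.5, 1.5, 51.0),
    (55.0, 2.5, 54.1),
    (45.0, 2.0, 44.0),
    (50.5, 1.0, 50.9),
]

# Central 80% interval is mean +/- z * se, with z = Phi^{-1}(0.90) ~ 1.2816.
z = NormalDist().inv_cdf(0.90)

covered = sum(
    1 for mean, se, actual in states
    if mean - z * se <= actual <= mean + z * se
)
coverage = covered / len(states)
print(f"empirical coverage of the 80% interval: {coverage:.0%}")
```

A well-calibrated forecaster's empirical coverage should sit near the nominal 80%; Linzer's 56% is the signature of intervals that are too narrow.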
538's Uncertainty Estimates Are As Good As They Get: Nate Silver will continue to be the world's most famous meta-analyst… even though several of his peers, such as the Princeton Election Consortium, Votamatic and Simon Jackman's projections for the Huffington Post, seemed to do equally well. The strength and depth of polling in swing states no doubt had a lot to do with all their successes.
How much of an accomplishment this is, of course, depends on context; the winner in most states was easily predicted ahead of time with the barest minimum of polling. Consider instead a related question: how close were the vote shares in each state to the prediction, as a function of the margin of error? The simplest way to check this is to calculate a p-value for each prediction…. I grabbed the estimates from FiveThirtyEight and Votamatic (at this time, I have only estimates, not uncertainties, for PEC or HuffPost) and calculated the respective p-values assuming a normal distribution…. [T]he 538 distribution is nearly uniform…. [F]or Votamatic… predictions turned out to be overly precise.
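The p-value check Thomas describes can be sketched as follows. For each state, compute p = Φ((actual − predicted) / se) under the assumed normal forecast distribution; if the stated uncertainties are honest, these p-values should be roughly uniform on (0, 1). The data here are invented for illustration, not the actual FiveThirtyEight or Votamatic estimates:

```python
# Hedged sketch of the p-value calibration check: under a normal forecast
# distribution, p = Phi((actual - predicted) / se). Calibrated forecasts
# yield roughly uniform p-values; overconfident ones (se too small) pile
# p-values up near 0 and 1. All numbers are hypothetical.
from statistics import NormalDist

phi = NormalDist().cdf

# (predicted mean, forecast standard error, actual result) -- hypothetical
forecasts = [
    (52.0, 2.0, 51.5),
    (48.5, 1.5, 51.0),
    (55.0, 2.5, 54.1),
    (45.0, 2.0, 44.0),
    (50.5, 1.0, 50.9),
]

p_values = [phi((actual - mean) / se) for mean, se, actual in forecasts]
print([round(p, 2) for p in p_values])
```

A histogram (or a quick eyeball) of these p-values is what distinguishes 538's nearly uniform distribution from Votamatic's overly precise one.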