Randall Munroe Explains Why There Are No Rich Frequentist Statisticians: New Yorker #fail Vulnerable to a Dutch Book Weblogging
Randall Munroe: [xkcd comic: "Frequentists vs. Bayesians"]
Gary Marcus and Ernest Davis:
Bayesian Statistics and What Nate Silver Gets Wrong: The Bayesian approach is much less helpful when there is no consensus about what the prior probabilities should be…. In actual practice, the method of evaluation most scientists use most of the time is a variant of a technique proposed by the statistician Ronald Fisher in the early 1900s. Roughly speaking, in this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed ninety-five or ninety-nine per cent of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
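For concreteness, here is a minimal sketch of the kind of test the quoted passage is describing: a permutation test that asks how often randomly relabeled data would produce a difference at least as large as the observed one. The groups, sample sizes, and the 0.05 cutoff are invented purely for illustration.

```python
# A Fisher-style significance test, sketched as a permutation test on made-up data.
import numpy as np

rng = np.random.default_rng(0)
treated = rng.normal(loc=0.5, scale=1.0, size=50)   # hypothetical treatment group
control = rng.normal(loc=0.0, scale=1.0, size=50)   # hypothetical control group

observed = treated.mean() - control.mean()
pooled = np.concatenate([treated, control])

# Re-label the data at random many times: how often does pure noise produce
# a difference at least as large as the one actually observed?
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[:50].mean() - pooled[50:].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"difference = {observed:.3f}, p = {p_value:.4f}")
print("reject the null at 0.05" if p_value < 0.05 else "fail to reject the null at 0.05")
```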
Why oh why can't we have a better press corps?
Those of us economists who grumble about Bayesian statistics don't want to move backward to frequentism but forward to (prior probabilities) × (value of being right) -- the decision-theory counterpart of the so-called risk-neutral valuation method in finance...
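A minimal sketch of what that forward move looks like, with the hypotheses, priors, and payoffs all invented for illustration: score each course of action by summing (prior probability) × (value of being right) over the possible states of the world, and take the action with the highest expected value.

```python
# Decision-theoretic weighting: prior probability times the value of being right.
priors = {"effect_is_real": 0.3, "effect_is_noise": 0.7}   # illustrative priors

# payoffs[action][state]: the value of taking `action` when `state` is true
payoffs = {
    "act_on_effect": {"effect_is_real": 100.0, "effect_is_noise": -40.0},
    "ignore_effect": {"effect_is_real": -60.0, "effect_is_noise":   5.0},
}

def expected_value(action: str) -> float:
    """Sum over states of (prior probability) x (payoff of this action in that state)."""
    return sum(priors[state] * payoffs[action][state] for state in priors)

best = max(payoffs, key=expected_value)
for action in payoffs:
    print(f"{action}: expected value = {expected_value(action):.1f}")
print(f"choose: {best}")
```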
Those of us economists who do use frequentist statistics know something very important that Marcus and Davis show no sign at all of knowing: that if you fail to reject the null hypothesis at 0.05, you do not conclude (even provisionally) that the null hypothesis is true -- instead, you go gather more data and test again until you can either reject the null against the alternative or reject (the interesting) alternative(s) against the null.
That's science!!
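A minimal sketch of that procedure, with the batch size, the "interesting" effect size of 0.5, and the 0.05 cutoff all assumed for illustration: keep gathering data until you can reject the null against the alternative, or reject the interesting alternative against the null -- and until then, conclude nothing.

```python
# Gather more data and test again, rather than treating "fail to reject" as "accept".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
interesting_effect = 0.5          # smallest alternative anyone would care about
data = np.array([])

for batch in range(1, 21):        # gather data in batches of 25
    data = np.concatenate([data, rng.normal(loc=0.1, scale=1.0, size=25)])

    # Test the null H0: mean = 0 against a positive effect.
    t_null, p_null = stats.ttest_1samp(data, popmean=0.0, alternative="greater")
    # Test the interesting alternative H1: mean = 0.5 against smaller effects.
    t_alt, p_alt = stats.ttest_1samp(data, popmean=interesting_effect, alternative="less")

    if p_null < 0.05:
        print(f"n = {data.size}: reject the null against the alternative")
        break
    if p_alt < 0.05:
        print(f"n = {data.size}: reject the interesting alternative against the null")
        break
else:
    print(f"n = {data.size}: still can't tell -- gather more data")
```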