**Over at Equitable Growth:** I have always understood expected-utility decision theory to be normative, not positive: it is how people ought to behave if they want to achieve their goals in risky environments, not how people do behave. One of the chief purposes of teaching expected-utility decision theory is in fact to make people aware that they really should be risk neutral over small gambles where they do know the probabilities--that they will be happier and achieve more of their goals in the long run if they in fact do so. **READ MOAR**

Thus the first three things to teach people are:

(1) That they are not risk-neutral over small gambles.

(2) That elementary considerations of rationality in the sense of finding means to achieve one's ends require risk-neutrality over small gambles.

(3) Hence they should be risk-neutral over small gambles.
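The "elementary considerations" here are just the arithmetic of concave utility: for gambles that are small relative to wealth, the risk premium is second-order in the stake, and so negligible. A minimal sketch, assuming (purely for illustration, these numbers are not from the post) log utility and $50,000 of wealth:

```python
import math

def certainty_equivalent(wealth, stake, u=math.log):
    """Certainty equivalent of a 50/50 gamble of +/- `stake` dollars
    for an agent with increasing utility `u` and current `wealth`."""
    eu = 0.5 * u(wealth + stake) + 0.5 * u(wealth - stake)
    lo, hi = wealth - stake, wealth + stake  # bracket for inverting u
    for _ in range(100):                     # bisection
        mid = (lo + hi) / 2
        if u(mid) < eu:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

wealth = 50_000.0
for stake in (1.0, 10.0, 100.0, 10_000.0):
    # the premium is what the agent would pay to avoid the gamble
    premium = wealth - certainty_equivalent(wealth, stake)
    print(f"stake ${stake:>9,.0f}: risk premium ${premium:,.5f}")
```

For a $1 stake the premium is about a thousandth of a cent; only when the stake becomes a substantial fraction of wealth (here $10,000) does curvature bite, with a premium over $1,000. That is the sense in which refusing small favorable gambles fails to serve one's own ends.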

Then, of course, there is the fourth thing to teach people:

(4) When you are betting against other human minds, you should not be risk-neutral over even small amounts--the fact that another mind is willing to take the opposite side of the bet tells you that your subjective probabilities are biased, and expected-utility decision theory based on your subjective probabilities does not incorporate that information about your biases, and so leads you astray.
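A toy simulation can make this adverse-selection point concrete. The setup below is my own illustration, not from the post: I hold a noisy estimate of the true probability, a better-calibrated counterparty takes the other side of an even-money bet only when that looks profitable to them, and conditioning on their acceptance reverses my apparent edge:

```python
import random

def simulate(n=1_000_000, noise=0.1, seed=0):
    """EV of my even-money bets, unconditionally and conditional on a
    better-calibrated counterparty agreeing to take the other side."""
    rng = random.Random(seed)
    unconditional, accepted = [], []
    for _ in range(n):
        p_true = rng.random()             # true chance the event happens
        # my subjective probability is the truth plus uniform noise
        my_p = min(1.0, max(0.0, p_true + rng.uniform(-noise, noise)))
        their_p = p_true                  # counterparty is well calibrated
        win = rng.random() < p_true
        if my_p > 0.5:                    # I want to bet on the event
            payoff = 1.0 if win else -1.0
            unconditional.append(payoff)
            if their_p < 0.5:             # they accept only when they expect profit
                accepted.append(payoff)
    return (sum(unconditional) / len(unconditional),
            sum(accepted) / len(accepted))

ev_all, ev_accepted = simulate()
print(f"EV of bets I want to make:       {ev_all:+.3f}")
print(f"EV of the bets that get taken:   {ev_accepted:+.3f}")
```

The bets I want to make look profitable on average; the subset a rational counterparty actually accepts has negative expected value, because acceptance itself is evidence against my estimate.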

Still open, however, is:

(5) Should you act as if you are risk-neutral for small gambles against nature if doing so makes you anxious and hence unhappy?

My view is that you owe it to yourself to *train yourself not to be anxious and unhappy with respect to small gambles against nature,* and thus train yourself to be risk-neutral with respect to small gambles against nature.

And then there is:

(6) Given that people aren't rational Bayesian expected-utility decision makers, what do economists think that they are doing modeling markets as if they are populated by agents who are? Here there are, I think, three answers:

(1) Most economists are clueless, and have not thought about these issues at all.

(2) Some economists think that we have developed cognitive institutions and routines in organizations that make organizations expected-utility-theory decision makers even though the individuals within them are not. (Yeah, right: I find this very amusing too.)

(3) Some economists admit that the failure of individuals to follow expected-utility decision theory, and our inability to build institutions that properly compensate for our cognitive biases (cough, actively-managed mutual funds, anyone?), are among the major sources of market failure in the world today--for one thing, they blow the efficient-market hypothesis in finance sky-high.

The fact that so few economists are in the third camp--and that any economists are in the second camp--makes me agree 100% with Andrew Gelman's strictures on economics as akin to Ptolemaic astronomy, in which the fundamentals of the model are "not [first-order] approximations to something real, they’re just fictions..."

**Andrew Gelman:** Differences between econometrics and statistics: "Economists seem to like their models...

...and then give after-the-fact justification. My favorite example is modeling uncertainty aversion using a nonlinear utility function for money; in fact, in many places risk aversion is *defined* as a nonlinear utility function for money. This makes no sense on any reasonable scale... but economists continue to use it as their default model. This bothers me... like... doing astronomy with Ptolemy's model and epicycles. The fundamentals of the model are not approximations to something real, they're just fictions...

**Andrew Gelman** (1998): Some Class-Participation Demonstrations for Decision Theory and Bayesian Statistics: "**5. Utility of Money and Risk-Aversion**...

...To introduce the concept of utility, we ask each student to write on a sheet of paper the probability p_{1} for which they are indifferent between (a) a certain gain of $1, and (b) a gain of $1,000,000 with probability p_{1} or $0 with probability (1-p_{1}).... The students are then asked to write down, in sequence, the probabilities p_{2}, p_{3}, p_{4}, and p_{5} for which $1 = p_{2}·$10 + (1-p_{2})·$0; $10 = p_{3}·$100 + (1-p_{3})·$1; $100 = p_{4}·$1000 + (1-p_{4})·$10; and $1000 = p_{5}·$1,000,000 + (1-p_{5})·$100. One of the students is then brought to the blackboard to give his or her answers to the questions. The probabilities are checked for coherence... The questions involving p_{2} and p_{3} combine to yield a comparison among $1, $100, and $0. For example, suppose p_{2}=0.1 and p_{3}=0.15.... Then... U($1) = 0.0164·U($100) + 0.9836·U($0). We then repeat this procedure using... p_{4} to determine the utility of $1 relative to $1000 and $0, and then once again using p_{5} to determine the utility of $1 relative to $1,000,000 and $0. Finally, this derived value is compared to the student's original value of p_{1}. The two will disagree, meaning that the student's preferences are incoherent. The students in the class then discuss with the student at the blackboard how to give coherent and reasonable answers.... A related demonstration.... a person is somewhat risk-averse and is indifferent between a certain gain of $10 and a 55% chance of $20 and a 45% chance of $0. Similarly, he or she is indifferent between a certain gain of $20 and a 55% chance of $30 and a 45% chance of $10; and, in general, indifferent between a certain gain of $x and a 55% chance of $(x+10) and a 45% chance of $(x-10).
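The coherence check in the quoted demonstration is simple arithmetic to script. Normalizing U($0) = 0 and U($1) = 1, each elicited indifference pins down the utility of the next larger amount, and chaining all four yields the p_{1} that the student's later answers imply. Here p_{2}=0.1 and p_{3}=0.15 come from the quote (giving U($1)/U($100) = 1/61 ≈ 0.0164); the values of p_{4} and p_{5} are made-up classroom answers of my own:

```python
def implied_p1(p2, p3, p4, p5):
    """Chain the four indifference probabilities into the implied p1.

    Each elicited indifference  $x = p·$y + (1-p)·$z  gives
    U($x) = p·U($y) + (1-p)·U($z); with U($0)=0 and U($1)=1
    we solve step by step for the utility of the larger amount.
    """
    u1 = 1.0
    u10 = u1 / p2                        # $1 = p2·$10 + (1-p2)·$0
    u100 = (u10 - (1 - p3) * u1) / p3    # $10 = p3·$100 + (1-p3)·$1
    u1000 = (u100 - (1 - p4) * u10) / p4
    u1m = (u1000 - (1 - p5) * u100) / p5
    return u1 / u1m                      # $1 = p1·$1,000,000 + (1-p1)·$0

# p2, p3 from the quoted example; p4, p5 hypothetical
p1 = implied_p1(p2=0.1, p3=0.15, p4=0.2, p5=0.25)
print(f"implied p1 = {p1:.5f}")
```

The implied p_{1} here is about 0.00114--far smaller than the direct answers students typically write down, which is exactly the incoherence the demonstration is built to expose.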

Is this reasonable? The students assent....

Then answer the following question: For what dollar value $y is this person indifferent between a certain gain of $y and a 50% chance of $1,000,000,000 and a 50% chance of $0? The answer, surprisingly, is that $y is between $30 and $40.... Setting U($0)=0 and U($10)=1... and evaluating... yields U($20)=1.818, U($30)=2.487.... U($40)=3.035.... U($1,000,000,000)=5.5.... $y must be between $30 and $40.
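The quoted numbers follow from iterating the indifference condition U($x) = 0.55·U($(x+10)) + 0.45·U($(x-10)), rearranged as U($(x+10)) = (U($x) - 0.45·U($(x-10)))/0.55. The increments shrink geometrically by a factor of 9/11, so U is bounded above by 1 + (9/11)/(1 - 9/11) = 5.5 utils. A sketch of the computation:

```python
def utility(x):
    """U($x) for x a positive multiple of 10, with U($0)=0 and U($10)=1,
    under the pattern U($x) = 0.55*U($(x+10)) + 0.45*U($(x-10))."""
    u_prev, u_curr = 0.0, 1.0            # U($0), U($10)
    for _ in range(x // 10 - 1):
        u_prev, u_curr = u_curr, (u_curr - 0.45 * u_prev) / 0.55
    return u_curr

# reproduces the quoted values (to three digits): 1.818, 2.487..., 3.035
for x in (20, 30, 40):
    print(f"U(${x}) = {utility(x):.4f}")

# Increments shrink by 9/11, so U never exceeds 5.5: even a billion
# dollars is worth at most 5.5 utils.  The certainty equivalent y of a
# 50/50 shot at $1,000,000,000 therefore solves U($y) = 0.5 * 5.5:
target = 0.5 * 5.5
y = 10
while utility(y) < target:
    y += 10
print(f"y lies between ${y - 10} and ${y}")   # between $30 and $40
```

The geometric bound is the whole trick: once each extra $10 is worth only 9/11 as much as the last, no prize, however large, can be worth more than 5.5 utils, so half a chance at a billion is worth less than $40 for certain.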

The student believes each step of the argument but is unhappy with the conclusion. Where is the mistake? It is that uncertainty aversion is not the same as "risk-aversion" in utility theory: the latter can be expressed as a concave utility function for money, whereas the former implies behavior that is not consistent with any utility function (see Kahneman and Tversky 1979). This is a good time to discuss cognitive illusions, many of which have been demonstrated in the context of monetary gains and losses.... Is decision theory descriptive? Is it normatively appropriate?