“It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of thinking he has some problem with his aim, you suppose he has a positive utility for getting his shoes wet”
Statistical Modeling, Causal Inference, and Social Science 2014-09-10
A couple months ago in a discussion of differences between econometrics and statistics, I alluded to the well-known fact that everyday uncertainty aversion can’t be explained by a declining marginal utility of money.
What really bothers me—it’s been bothering me for decades now—is that this is a simple fact that “everybody knows” (indeed, in comments some people asked why I was making such a big deal about this triviality), but, even so, it remains standard practice within economics to use this declining-marginal-utility explanation.
I don’t have any econ textbooks handy but here’s something from the Wikipedia entry for risk aversion:
Risk aversion is the reluctance of a person to accept a bargain with an uncertain payoff rather than another bargain with a more certain, but possibly lower, expected payoff.
OK so far. And now for their example:
A person is given the choice between two scenarios, one with a guaranteed payoff and one without. In the guaranteed scenario, the person receives $50. In the uncertain scenario, a coin is flipped to decide whether the person receives $100 or nothing. The expected payoff for both scenarios is $50, meaning that an individual who was insensitive to risk would not care whether they took the guaranteed payment or the gamble. However, individuals may have different risk attitudes. A person is said to be:
risk-averse (or risk-avoiding) – if he or she would accept a certain payment (certainty equivalent) of less than $50 (for example, $40), rather than taking the gamble and possibly receiving nothing. . . .
They follow up by defining risk aversion in terms of the utility of money:
The expected utility of the above bet (with a 50% chance of receiving 100 and a 50% chance of receiving 0) is, E(u)=(u(0)+u(100))/2, and if the person has the utility function with u(0)=0, u(40)=5, and u(100)=10 then the expected utility of the bet equals 5, which is the same as the known utility of the amount 40. Hence the certainty equivalent is 40.
But this is just wrong. It’s not mathematically wrong but it’s wrong in any practical sense, in that a utility function that curves this way between 0 and 100 can’t possibly make any real-world sense.
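To see how badly that curvature extrapolates, here's a minimal numeric sketch. I'm assuming an exponential (CARA) utility u(x) = -exp(-a*x) as one concrete family with enough curvature to match the Wikipedia calibration (a $40 certainty equivalent for the 50/50 $0-or-$100 bet); the choice of family and the bisection solver are my own illustration, not anything from the Wikipedia entry:

```python
import math

def cara_ce(a, lo, hi):
    """Certainty equivalent of a 50/50 bet paying lo or hi dollars
    under exponential (CARA) utility u(x) = -exp(-a*x)."""
    eu = 0.5 * (-math.exp(-a * lo)) + 0.5 * (-math.exp(-a * hi))
    return -math.log(-eu) / a

# Solve by bisection for the curvature a that reproduces the Wikipedia
# calibration: certainty equivalent of the 50/50 $0-or-$100 bet is $40.
lo_a, hi_a = 1e-6, 1.0
for _ in range(100):
    mid = 0.5 * (lo_a + hi_a)
    if cara_ce(mid, 0, 100) > 40:
        lo_a = mid  # not curved enough yet; increase risk aversion
    else:
        hi_a = mid
a = 0.5 * (lo_a + hi_a)

print(f"risk-aversion coefficient a = {a:.4f}")
print(f"CE of 50/50 $0 or $100:       ${cara_ce(a, 0, 100):.2f}")
print(f"CE of 50/50 $0 or $1,000,000: ${cara_ce(a, 0, 1_000_000):.2f}")
```

With the coefficient pinned down by the $0-or-$100 example, the same curve says this person would take roughly $85 for sure over a coin flip at a million dollars. That is the sense in which a utility function with this much curvature between 0 and 100 can't keep going in any reasonable way.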
Way down on the page there’s one paragraph saying that this model has “come under criticism from behavioral economics.”
But this completely misses the point!
It would be as if you went to the Wikipedia entry on planetary orbits and saw a long and involved discussion of the Ptolemaic model, with much discussion of the modern theory of epicycles (image above from Wikipedia, taken from the Astronomy article in the first edition of the Encyclopaedia Britannica), and then, way down on the page, a paragraph saying something like,
The notion of a geocentric universe has come under criticism from Copernican astronomy.
Again, this is frustrating because it’s so simple, it’s so obvious that any utility function that curves so much between 0 and 100 can’t keep going forward in any reasonable sense.
It’s an example I used to give as a class-participation activity in my undergraduate decision analysis class and which I wrote up a few years later in an article on classroom demonstrations.
I’m not claiming any special originality for this result. As I wrote in my recent post,
The general principle has been well-known forever, I’m sure.
Indeed, unbeknownst to me, Matt Rabin published a paper a couple years later with a more formal treatment of the same topic, and I don’t recall ever talking with him about the problem (nor was it covered in Mr. Cutlip’s economics class in 11th grade), so I assume he figured it out on his own. (It would be hard for me to imagine someone thinking hard about curving utility functions and not realizing they can’t explain everyday risk aversion.)
In response, commenter Megan agreed with me on the substance but wrote:
I am sure it has NOT been well-known forever. It’s only been known for 26 years and no one really understands it yet.
I’m pretty sure the Swedish philosopher who proved the mathematical phenomenon 10 years before you and 12 years before Matt Rabin was the first to identify it. The Hansson (1988)/Gelman (1998)/Rabin (2000) paradox is up there with Ellsberg (1961), Samuelson (1963) and Allais (1953).
Not so obvious after all?
Megan’s comment got me thinking: maybe this problem with using a nonlinear utility function for money is not so inherently obvious. Sure, it was obvious to me in 1992 or so when I was teaching decision analysis, but I was a product of my time. Had I taught the course in 1983, maybe the idea wouldn’t have come to me at all.
Let me retrace my thoughts, as best as I can now recall them. What I’d really like is a copy of my lecture notes from 1992 or 1994 or whenever it was that I first used the example, to see how it came up. But I can’t locate these notes right now. As I recall, I taught the first part of my decision analysis class using standard utility theory, first having students solve basic expected-monetary-value optimization problems and then going through the derivation of the utility function given the utility axioms. Then I talked about violations of the axioms and went on from there.
It was a fun course and I taught it several times, at Berkeley and at Columbia. Actually, the first time I taught the subject it was something of an accident. Berkeley had an undergraduate course on Bayesian statistics that David Blackwell had formerly taught. He had retired, so they asked me to teach it. But I wasn’t comfortable teaching Bayesian statistics at the undergraduate level—this was before Stan, and it seemed to me it would take the students all semester just to get up to speed on the math, with no time to do anything interesting—so I decided to teach decision analysis instead, using the same course number. One year in particular—I think it was 1994—we had a really fun group of undergrad stat majors, and a whole bunch of them were in the course. A truly charming bunch of students.
Anyway, when designing the course I read through a bunch of textbooks on decision analysis, and the nonlinear utility function for money always came up as the first step beyond “expected monetary value.” After that came utility of multidimensional assets (the famous example of the value of a washer and a dryer, compared to two washers or two dryers), but the nonlinear utility for money, used sometimes to define risk aversion, came first.
But the authors of many of these books were also aware of the Kahneman, Slovic, and Tversky revolution. There was a ferment, but it still seemed like utility theory was tweakable and that the “heuristics and biases” research merely reflected a difficulty in measuring the relevant subjective probabilities and utilities. It was only a few years later that a book came out with the beautifully on-target title, “The Construction of Preference.”
Anyway, here’s the point. Maybe the problem with utility theory in this context was obvious to Hansson, and to me, and to Yitzhak, because we’d been primed by reading the work by Kahneman, Slovic, Tversky, and others exploring the failures of the utility model in practice. In retrospect, that work too should not have been a surprise—after all, utility theory was at that time already a half-century old and it had been developed in the behavioristic tradition of psychology, predating the cognitive revolution of the 1950s.
I can’t really say, but it does seem that sometimes the time is ripe for an idea, and maybe this particular idea only seemed so trivial to me because it was already accepted that utility theory had problems modeling preferences. Once you accept the empirical problem, it’s not so hard to imagine there’s a theoretical problem too.
And, make no mistake about it, the problem is both empirical and theoretical. You don’t need any experimental data at all to see the problem here.
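One way to see the purely theoretical problem, with no data at all, is a numeric sketch of the calibration argument in the spirit of Rabin (2000). I again assume exponential (CARA) utility as a convenient concrete case (CARA preferences are wealth-independent, so "turns down the bet at every wealth level" comes for free); Rabin's actual theorem covers any concave utility, so this is an illustration, not his proof:

```python
import math

# Find the CARA curvature a at which an agent with u(x) = -exp(-a*x)
# is exactly indifferent to a trivial 50/50 lose-$10 / gain-$11 bet.
# Indifference condition: exp(10a) + exp(-11a) = 2.
def f(a):
    return math.exp(10 * a) + math.exp(-11 * a) - 2

lo, hi = 1e-9, 0.1  # f < 0 at lo, f > 0 at hi, so a root is bracketed
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
a = 0.5 * (lo + hi)

# With that same curvature, the disutility of losing $100 already exceeds
# the largest possible utility gain from ANY win (which is bounded by 1
# in these units), so the agent rejects a 50/50 lose-$100 bet no matter
# how large the upside is.
loss_util = math.exp(100 * a) - 1   # disutility of losing $100
max_gain_util = 1.0                  # sup of 1 - exp(-a*G) over all gains G
print(f"a = {a:.5f}")
print(f"rejects 50/50 lose-$100 bet for ANY gain: {loss_util > max_gain_util}")
```

The curvature needed merely to be indifferent to a lose-$10/gain-$11 coin flip is already so strong that losing $100 outweighs any conceivable gain. No experiment is required to find that implausible.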
Also, let me emphasize that the solution to the problem is not to say that people’s preferences are correct and so the utility model is wrong. Rather, in this example I find utility theory to be useful in demonstrating why the sort of everyday risk aversion exhibited by typical students (and survey respondents) does not make financial sense. Utility theory is an excellent normative model here.
Which is why it seems particularly silly to be defining these preferences in terms of a nonlinear utility curve that could never be.
It’s as if you went into a bathroom in a bar and saw a guy pissing on his shoes, and instead of thinking he has some problem with his aim, you suppose he has a positive utility for getting his shoes wet.