Would you want to be an expected utility maximizer?

I have finally got around to reading Richard Thaler's fantastically wonderful book Misbehaving. One thing that surprised me in the early chapters is how firmly Thaler backs expected utility theory as the right way to think. Deviations from expected utility are then interpreted as humans not behaving 'as they should'. While I am familiar with this basic argument, it still came as a surprise how wholeheartedly Thaler endorses it. And I'm not sure I buy it.

To appreciate the issue, consider a couple of thought experiments. Thaler gives the following example:

Stanley mows his lawn every weekend and it gives him terrible hay fever. I ask Stan why he doesn't hire a kid to mow his lawn. Stan says he doesn't want to pay the $10. I ask Stan whether he would mow his neighbor's lawn for $20 and Stan says no, of course not.

From the point of view of expected utility theory, Stan's behavior makes no sense. What we should do is calculate the compensation Stan needs to mow a lawn. That he will not pay the kid $10 shows that mowing his own lawn must cost him less than $10 in disutility. That he will not mow his neighbor's lawn for $20 shows that mowing a lawn must cost him more than $20 in disutility. But how can mowing a lawn be worth less than $10 and more than $20? It cannot! His choices make no sense.

Here is another example (which I have adapted a bit):

Morgan got free tickets to a high-profile NBA basketball game. Tickets are selling on the legal second-hand market for $300. Morgan says that he is definitely going to the game. Asked if he would pay $200 for a ticket, he says 'of course not'.

Again, from the point of view of expected utility theory, Morgan's choices are nonsense. What we should do here is ask how much he values going to the game. He explicitly says that he would not be willing to pay $200, so going must be worth less than $200 to him. Yet by going to the game he gives up the chance to sell his tickets for $300, so going must be worth more than $300 to him. But how can going to the game be worth less than $200 and more than $300? It cannot! His choices also make no sense.
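
To see the contradiction in one place, here is a minimal sketch (my own illustration, not anything from Thaler's book) that treats each stated choice as a bound on a single dollar valuation and checks whether any value can satisfy all of the bounds. The function name and the way the bounds are encoded are just assumptions made for the example.

```python
# A minimal sketch of the revealed-preference logic above. Each observed
# choice puts a bound on a single number v: the dollar value the person
# implicitly attaches to the activity. The numbers are those from the post.

def consistent(bounds):
    """Check whether some value v can satisfy every (kind, limit) bound.

    kind is 'below' (v < limit) or 'above' (v > limit).
    """
    upper = min((limit for kind, limit in bounds if kind == "below"), default=float("inf"))
    lower = max((limit for kind, limit in bounds if kind == "above"), default=float("-inf"))
    return lower < upper

# Stan: won't pay $10 to avoid mowing  => disutility of mowing < 10
#       won't mow the neighbor's for $20 => disutility of mowing > 20
print(consistent([("below", 10), ("above", 20)]))    # False: no single value works

# Morgan: won't pay $200 for a ticket   => value of going < 200
#         won't sell his tickets at $300 => value of going > 300
print(consistent([("below", 200), ("above", 300)]))  # False again
```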

What to do with these two examples? The key question is this: do you think Stanley and Morgan would change their choices if the 'irrationality' of those choices were explained to them? Personally, I think not. To them their behavior probably seems perfectly sensible, and who are we to argue against that? One response would be to 'add on' factors that influence preferences, such as 'I prefer mowing my own lawn' or 'I don't like giving away tickets'. This can rescue expected utility theory, but it is precisely the kind of ad-hocness that behavioral economics is trying to get away from. So, I think it is better to simply accept that expected utility is not necessarily the right way to make decisions.

Does this matter? Mainstream economics is built upon the premise that expected utility theory is a good representation of how people make decisions. The work of Thaler and others has blown that idea out of the water. So, whether or not expected utility is the right way to do things is rather a moot point. There is still merit in learning how a person should behave if their preferences satisfy certain basic axioms.

Things are more complicated when we look at the heuristics and biases approach led by Daniel Kahneman and Amos Tversky. Here the language of biases suggests that there is a right way to do things! It is worth clarifying, however, that much of this work relates to probabilistic reasoning, where there is a clearly defined right way of doing things. I suggest that we just have to be a bit careful extending the biases terminology to choice settings where there may not be a right way of doing things. For instance, is loss aversion a bias? Put another way, is it irrational to behave in one way when things are framed as losses and another way when the same choice is framed in terms of gains? It is certainly different to how economists have traditionally modeled things. But it still seems perfectly sensible to me (and may make evolutionary sense) that someone could be influenced by the frame. Maybe, therefore, we need to talk of a status quo effect rather than a status quo bias. This, though, is surely more a matter of semantics than anything particularly substantive.
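
To make the framing point a little more concrete, here is a small illustrative sketch of a loss-averse value function in the spirit of Kahneman and Tversky's prospect theory. The functional form and parameter values are the commonly cited estimates from Tversky and Kahneman (1992); the example itself is mine and nothing here comes from Thaler's book.

```python
# An illustrative loss-averse value function. Outcomes are measured as changes
# from a reference point, so the same objective outcome can be coded as a gain
# or a loss depending on the frame.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a change x from the reference point.

    Gains are valued as x**alpha; losses are scaled up by the loss-aversion
    coefficient lam, so a $50 loss hurts more than a $50 gain pleases.
    """
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(value(50))    # a $50 gain relative to the reference point (about 31.3)
print(value(-50))   # the same $50 framed as a loss (about -70.4)
```

Under a sketch like this, the same choice described relative to different reference points is evaluated differently, which is all the framing observation needs.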

Things are also a bit complicated when we come to policy interventions. For instance, the nudge idea kind of suggests there is a right way of doing things that we can nudge people towards. But there are so many situations where people make unambiguously bad choices (like not saving enough for retirement) that we can still be excited about nudge without getting too caught up in whether expected utility theory is always the best way to make decisions. And remember, the basic idea behind nudge is that we should never restrict choice anyway.

So, whether or not it is rational to maximize expected utility is probably not a big issue. To use Thaler's well-known terminology: Humans may not be quite as dumb as Thaler claims, but they are undeniably very different to the Econs you find in a standard textbook.
