Measuring risk aversion the Holt and Laury way

Attitudes to risk are a key ingredient in most economic decision making. It is vital, therefore, that we have some understanding of the distribution of risk preferences in the population. And ideally we need a simple way of eliciting risk preferences that can be used in the lab or field. Charles Holt and Susan Laury set out one way of doing this in their 2002 paper 'Risk aversion and incentive effects'. While plenty of other ways of measuring risk aversion have been devised over the years, I think it is safe to say that the Holt and Laury approach is the most commonly used (as the nearly 4,000 citations to their paper testify).
         The basic approach taken by Holt and Laury is to offer an individual 10 choices like those in the table below. For each of the 10 choices the individual has to go for option A or option B. Most people go for option A in choice 1. And everyone should go for option B in choice 10. At some point, therefore, we expect the individual to switch from choosing option A to option B. The point at which they switch can be used as a measure of risk aversion. Someone who switches early is risk loving while someone who switches later is risk averse.

Choice   Option A                          Option B
1        1/10 of $2.00, 9/10 of $1.60      1/10 of $3.85, 9/10 of $0.10
2        2/10 of $2.00, 8/10 of $1.60      2/10 of $3.85, 8/10 of $0.10
3        3/10 of $2.00, 7/10 of $1.60      3/10 of $3.85, 7/10 of $0.10
4        4/10 of $2.00, 6/10 of $1.60      4/10 of $3.85, 6/10 of $0.10
5        5/10 of $2.00, 5/10 of $1.60      5/10 of $3.85, 5/10 of $0.10
6        6/10 of $2.00, 4/10 of $1.60      6/10 of $3.85, 4/10 of $0.10
7        7/10 of $2.00, 3/10 of $1.60      7/10 of $3.85, 3/10 of $0.10
8        8/10 of $2.00, 2/10 of $1.60      8/10 of $3.85, 2/10 of $0.10
9        9/10 of $2.00, 1/10 of $1.60      9/10 of $3.85, 1/10 of $0.10
10       10/10 of $2.00                    10/10 of $3.85

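To make the structure of the choice list concrete, here is a minimal Python sketch, assuming the payoffs in the table above (the variable names are mine). It prints the expected value of each option for every choice and shows that a purely risk-neutral person would take option A for the first four choices and option B from choice 5 onwards.

```python
# Expected values for the ten paired choices, assuming the standard
# Holt and Laury payoffs: A pays $2.00 or $1.60, B pays $3.85 or $0.10.
A_HIGH, A_LOW = 2.00, 1.60
B_HIGH, B_LOW = 3.85, 0.10

for choice in range(1, 11):
    p = choice / 10                      # probability of the high payoff
    ev_a = p * A_HIGH + (1 - p) * A_LOW  # expected value of option A
    ev_b = p * B_HIGH + (1 - p) * B_LOW  # expected value of option B
    pick = "A" if ev_a > ev_b else "B"
    print(f"Choice {choice:2d}: EV(A) = {ev_a:.2f}, EV(B) = {ev_b:.2f} -> risk-neutral pick: {pick}")
```

Anyone who keeps choosing option A beyond choice 4, where option B has the higher expected value, is revealing some degree of risk aversion; switching before choice 5 suggests risk loving.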
To properly measure risk aversion we do, though, need to fit choices to a utility function. This is where things get a little tricky. In the simplest case we would be able to express preferences using a constant relative risk aversion (CRRA) utility function 

U(x) = x^(1-r)/(1-r)

where x is money and r is the coefficient of relative risk aversion. I can come back to why this is the simplest case shortly. First, let's have a quick look at how it works. Suppose that someone chooses option A for choice 4. Then we can infer that

0.4 × U($2.00) + 0.6 × U($1.60) ≥ 0.4 × U($3.85) + 0.6 × U($0.10)

It is then a case of finding the values of r for which this inequality holds. And it does for r greater than or equal to around -0.15. Suppose the person chooses option B for choice 5. Then we know that

0.5 × U($3.85) + 0.5 × U($0.10) ≥ 0.5 × U($2.00) + 0.5 × U($1.60)

This time we get r less than or equal to around 0.15. So, a person who switches from option A to option B between choices 4 and 5 has a coefficient of relative risk aversion between -0.15 and 0.15.
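If you want to check these cut-offs, here is a small Python sketch. It again assumes the standard payoffs above and the CRRA function U(x) = x^(1-r)/(1-r), and uses a simple bisection search (the function names are mine) to find, for each choice, the value of r at which someone is indifferent between A and B. For choices 4 and 5 the cut-offs come out at about -0.14 and 0.15, in line with the approximate -0.15 and 0.15 used above.

```python
import math

# Assumed standard Holt and Laury payoffs.
A_HIGH, A_LOW = 2.00, 1.60
B_HIGH, B_LOW = 3.85, 0.10

def u(x, r):
    """CRRA utility U(x) = x^(1-r)/(1-r), with log utility at r = 1."""
    return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def eu_gap(r, p):
    """Expected utility of option A minus that of option B."""
    eu_a = p * u(A_HIGH, r) + (1 - p) * u(A_LOW, r)
    eu_b = p * u(B_HIGH, r) + (1 - p) * u(B_LOW, r)
    return eu_a - eu_b

def indifference_r(p, lo=-3.0, hi=3.0):
    """Bisection search for the r at which A and B are equally attractive."""
    for _ in range(100):
        mid = (lo + hi) / 2
        # Keep the sub-interval over which the sign of the gap changes.
        if (eu_gap(lo, p) > 0) == (eu_gap(mid, p) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for choice in range(1, 10):   # choice 10 involves no uncertainty, so there is no cut-off
    print(f"Choice {choice}: indifferent at r = {indifference_r(choice / 10):+.2f}")
```

The full set of cut-offs is what lets us translate a switch point into an interval of r values.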
         Let us return now to the claim that a CRRA function keeps things simple. The dollar amounts in the choices above are small. What happens if we make them bigger? Holt and Laury tried multiplying them by 20, 50 and 90. Note that by the time we get to multiplying by 90 the safe option pays up to $180 and the risky option up to $346.50, which is quite a lot of money for an experiment. If the CRRA utility function accurately describes preferences then people should behave exactly the same no matter how big the stakes. This would be ideal. Holt and Laury found, however, that people were far more likely to choose option A when the stakes were larger, which means the CRRA function did not describe subjects' choices well.
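To see why the CRRA function makes this prediction, note that U(kx) = (kx)^(1-r)/(1-r) = k^(1-r)U(x), so multiplying every payoff by k scales the expected utility of option A and option B by the same factor and leaves the comparison between them unchanged. The hypothetical snippet below (same assumed payoffs and function names as before) checks that the predicted choices are identical at 1x and 90x stakes for a handful of values of r.

```python
import math

# Scale-invariance check for CRRA: multiplying every payoff by k rescales
# utility by the same factor for both options, so the predicted A/B choices
# are the same whatever the stakes.
A_HIGH, A_LOW, B_HIGH, B_LOW = 2.00, 1.60, 3.85, 0.10  # assumed standard payoffs

def u(x, r):
    return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def prefers_a(p, r, scale=1.0):
    eu_a = p * u(scale * A_HIGH, r) + (1 - p) * u(scale * A_LOW, r)
    eu_b = p * u(scale * B_HIGH, r) + (1 - p) * u(scale * B_LOW, r)
    return eu_a > eu_b

for r in (-0.5, -0.15, 0.0, 0.15, 0.5, 1.2):
    same = all(prefers_a(c / 10, r, 1.0) == prefers_a(c / 10, r, 90.0)
               for c in range(1, 11))
    print(f"r = {r:+.2f}: identical choices at 1x and 90x stakes? {same}")
```

The fact that real subjects did not behave this way is exactly why the higher-stakes treatments were so informative.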
           So, what does the rejection of CRRA mean? It tells us that just asking someone to make the 10 choices above is not enough to discern their preferences for risk. We learn what they would do for those magnitudes of money but cannot extrapolate from that to larger amounts. We cannot, for instance, say that someone is risk averse or risk loving because that person might appear risk loving for gambles over small amounts of money and risk averse for larger amounts. To fully estimate risk preferences we need to elicit choices over gambles with varying magnitudes of money.
       Despite all this, it is pretty standard to run the Holt and Laury approach at the end of experiments. The basic goal of doing so is to see if behavior in the experiment, say on public goods, correlates with attitudes to risk. Note that the simplicity of the Holt and Laury approach is a real draw here because you don't want to add something overly complicated to the end of an experiment. Care, though, is needed in interpreting the results. As we have seen, the Holt and Laury approach is not enough to fully parameterize risk preferences. All we can really infer, therefore, is that one subject is more or less risk averse than another. This, though, is informative as a rough measure of how attitudes to risk influence behavior. The key, therefore, is to focus on relative rather than absolute comparisons.
