Saturday, 25 February 2017

Measuring risk aversion the Holt and Laury way

Attitudes to risk are a key ingredient in most economic decision making. It is vital, therefore, that we have some understanding of the distribution of risk preferences in the population. And ideally we need a simple way of eliciting risk preferences that can be used in the lab or field. Charles Holt and Susan Laury set out one way of doing this in their 2002 paper 'Risk aversion and incentive effects'. While plenty of other ways of measuring risk aversion have been devised over the years, I think it is safe to say that the Holt and Laury approach is the most commonly used (as the nearly 4,000 citations to their paper testify).
         The basic approach taken by Holt and Laury is to offer an individual 10 choices like those in the table below. For each of the 10 choices the individual has to go for option A or option B. Most people go for option A in choice 1. And everyone should go for option B in choice 10. At some point, therefore, we expect the individual to switch from choosing option A to option B. The point at which they switch can be used as a measure of risk aversion. Someone who switches early is risk loving while someone who switches later is risk averse.
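The ten choices, using Holt and Laury's original low-stakes payoffs, are as follows: in choice k, option A pays $2.00 with probability k/10 and $1.60 otherwise, while option B pays $3.85 with probability k/10 and $0.10 otherwise.

Choice 1:  A = 1/10 of $2.00, 9/10 of $1.60;   B = 1/10 of $3.85, 9/10 of $0.10
Choice 2:  A = 2/10 of $2.00, 8/10 of $1.60;   B = 2/10 of $3.85, 8/10 of $0.10
Choice 3:  A = 3/10 of $2.00, 7/10 of $1.60;   B = 3/10 of $3.85, 7/10 of $0.10
Choice 4:  A = 4/10 of $2.00, 6/10 of $1.60;   B = 4/10 of $3.85, 6/10 of $0.10
Choice 5:  A = 5/10 of $2.00, 5/10 of $1.60;   B = 5/10 of $3.85, 5/10 of $0.10
Choice 6:  A = 6/10 of $2.00, 4/10 of $1.60;   B = 6/10 of $3.85, 4/10 of $0.10
Choice 7:  A = 7/10 of $2.00, 3/10 of $1.60;   B = 7/10 of $3.85, 3/10 of $0.10
Choice 8:  A = 8/10 of $2.00, 2/10 of $1.60;   B = 8/10 of $3.85, 2/10 of $0.10
Choice 9:  A = 9/10 of $2.00, 1/10 of $1.60;   B = 9/10 of $3.85, 1/10 of $0.10
Choice 10: A = $2.00 for sure;                 B = $3.85 for sure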



To properly measure risk aversion we do, though, need to fit choices to a utility function. This is where things get a little tricky. In the simplest case we would be able to express preferences using a constant relative risk aversion (CRRA) utility function

u(x) = x^(1-r) / (1 - r)
where x is money and r is the coefficient of relative risk aversion. I can come back to why this is the simplest case shortly. First, let's have a quick look at how it works. Suppose that someone chooses option A for choice 4. Then we can infer that

0.4·u($2.00) + 0.6·u($1.60) ≥ 0.4·u($3.85) + 0.6·u($0.10).

It is then a case of finding for what values of r this inequality holds. And it does for r greater than or equal to around -0.15. Suppose the person chooses option B for choice 5. Then we know that

0.5·u($3.85) + 0.5·u($0.10) ≥ 0.5·u($2.00) + 0.5·u($1.60).

This time we get r less than or equal to around 0.15. So, a person who switches from A to B between choices 4 and 5 has a coefficient of relative risk aversion between roughly -0.15 and 0.15.
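If you want to check these numbers, a few lines of code will do it. Here is a minimal sketch (my own, not from Holt and Laury) that numerically finds the value of r at which a subject is indifferent in choices 4 and 5, using the payoffs above:

```python
# Minimal sketch: recover the CRRA bounds implied by switching between
# choices 4 and 5, assuming the original Holt and Laury payoffs of
# $2.00/$1.60 (option A) and $3.85/$0.10 (option B).
from scipy.optimize import brentq

def u(x, r):
    return x**(1 - r) / (1 - r)          # CRRA utility (r != 1)

def a_minus_b(r, p):
    # Expected utility of option A minus option B when the high outcome
    # has probability p (p = 0.4 in choice 4, p = 0.5 in choice 5).
    eu_a = p * u(2.00, r) + (1 - p) * u(1.60, r)
    eu_b = p * u(3.85, r) + (1 - p) * u(0.10, r)
    return eu_a - eu_b

print(brentq(a_minus_b, -0.9, 0.9, args=(0.4,)))  # indifference in choice 4: roughly -0.15
print(brentq(a_minus_b, -0.9, 0.9, args=(0.5,)))  # indifference in choice 5: roughly  0.15
```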
Let us return now to the claim that a CRRA function keeps things simple. The dollar amounts in the choices above are small. What happens if we make them bigger? Holt and Laury tried multiplying them by 20, 50 and 90. Note that by the time we get to multiplying by 90 the safe option pays up to $180 and the risky option up to $346.50, which is quite a lot of money for an experiment. If the CRRA utility function accurately describes preferences then people should behave exactly the same no matter how big the stakes: multiplying every payoff by a constant k multiplies every utility by k^(1-r) (since u(kx) = k^(1-r)·u(x)), which leaves the comparison between options A and B unchanged. This would be ideal. Holt and Laury found, however, that people were far more likely to choose option A when the stakes were larger. Which means the CRRA function did not describe subjects' choices well.
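To make the scale-invariance point concrete, here is a small illustrative check (again my own sketch, not from the paper): for a given r, the option preferred in each of the ten choices is the same whether payoffs are at their original level or multiplied by 90.

```python
# Illustrative check of CRRA scale invariance: u(k*x) = k**(1-r) * u(x),
# so scaling all payoffs by the same factor rescales both options equally
# and no choice flips. Payoffs assume Holt and Laury's original table.
def u(x, r):
    return x**(1 - r) / (1 - r)

def prefers_a(r, p, scale=1.0):
    eu_a = p * u(2.00 * scale, r) + (1 - p) * u(1.60 * scale, r)
    eu_b = p * u(3.85 * scale, r) + (1 - p) * u(0.10 * scale, r)
    return eu_a >= eu_b

r = 0.3  # a hypothetical, moderately risk-averse subject
for k in range(1, 11):                 # the ten choices
    p = k / 10                         # probability of the high payoff
    assert prefers_a(r, p) == prefers_a(r, p, scale=90)
print("same choices at 1x and 90x stakes")
```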
           So, what does the rejection of CRRA mean? It tells us that just asking someone to make the 10 choices above is not enough to discern their preferences for risk. We learn what they would do for those magnitudes of money but cannot extrapolate from that to larger amounts. We cannot, for instance, say that someone is risk averse or risk loving because that person might appear risk loving for gambles over small amounts of money and risk averse for larger amounts. To fully estimate risk preferences we need to elicit choices over gambles with varying magnitudes of money.
Despite all this, it is pretty standard to run the Holt and Laury approach at the end of experiments. The basic goal of doing so is to see if behavior in the experiment, say on public goods, correlates with attitudes to risk. Note that the simplicity of the Holt and Laury approach is a real draw here because you don't want to add something overly complicated to the end of an experiment. Care, though, is needed in interpreting results. As we have seen, the Holt and Laury approach is not enough to fully parameterize risk preferences. All we can really infer, therefore, is that one subject is relatively more or less risk averse than another. This, though, is informative as a rough measure of how attitudes to risk influence behavior. Key, therefore, is to focus on relative rather than absolute comparisons.

Tuesday, 21 February 2017

Does a picture make people more cooperative?

In a standard economic experiment the anonymity of subjects is paramount. This is presumably because of a fear that subjects might behave differently if they knew others were 'watching them' in some sense. In the real world, however, our actions obviously can be observed much of the time. So, it would seem important to occasionally step out of the purified environment of the standard lab experiment and see what happens when we throw anonymity in the bin.
A couple of experiments have looked at behavior in public good games without anonymity. Let me start with the 2004 study by Mari Rege and Kjetil Telle entitled 'The impact of social approval and framing on cooperation in public good situations'. As is standard, subjects had to split money between a private account and a group account, where contributing to the group account is good for the group. The novelty is in how this was done.
Each subject was given some money and two envelopes, a 'group envelope' and a 'private envelope', and asked to split the money between them. In a no-approval treatment the envelopes were then put in a box and mixed up before being opened and the contributions read aloud. Note that in this case full anonymity is preserved because the envelopes are mixed up. In an approval treatment, by contrast, subjects were asked to publicly open their envelopes and write their contribution on the blackboard. Here there is zero anonymity because the contribution of each subject is very public.
        Average contributions to the group account were 44.8% (of the total amount) in the no-approval treatment and 72.8% in the approval treatment. So, subjects contributed a lot more when anonymity was removed.
        Similar results were obtained by James Andreoni and Ragan Petrie in a study entitled 'Public goods experiments without confidentiality'. Here, the novelty was to have photos of subjects together with their contributions to the group account, as in the picture below. In this case contributions increased from 26.9% in the absence of photos to 48.1% with photos. Again subjects contributed a lot more when anonymity was removed.

 
So, why does anonymity matter? A study by Anya Samek and Roman Sheremeta, entitled 'Recognizing contributors', sheds some light on this. As well as treatments with no photos and with everyone's photos, they had treatments in which only the lowest or only the highest contributors had their photos displayed, as in the middle picture below.


          Again, photos made a big difference, increasing average contributions from 23.4% to 44.2%. Interestingly, displaying the photos of top contributors made little difference (up to 27.8%) while displaying the photos of the lowest contributors made a big difference (up to 44.9%). This would suggest that contributions increase without anonymity because subjects dislike being the lowest contributors. So, we are talking shame rather than pride.
What do we learn from all this? Obviously we can learn interesting things by dropping anonymity. In particular, we have learnt that contributions to group projects may be higher when individual contributions can be identified. Indeed, in a follow-up paper, entitled 'When identifying contributors is costly', Samek and Sheremeta show that the mere possibility of looking up photos increases contributions. That, though, raises some tough questions. If behavior is radically different without anonymity then is it good enough to keep on churning out results based on lab experiments with complete anonymity? I don't think it is. The three studies mentioned above have shown how anonymity can be dropped without compromising scientific rigor. More of that might be good.

Saturday, 4 February 2017

Kindness or confusion in public good games

The linear public good game is, as I have mentioned before on this blog, the workhorse of experiments on cooperation. In the basic version of the game there is a group of, say, 4 people. Each person is given an endowment of, say, $10 and asked how much they want to contribute to a public good. Any money a person does not contribute is theirs to keep. Any money that is contributed is multiplied by some factor, say 2, and shared equally amongst group members.
         Note that for every $1 a person does not contribute they get a return of $1. But, for every $1 they do contribute they get a return of $0.50 (because the $1 is converted to $2 and then shared equally amongst the 4 group members). It follows that a person maximizes their individual payoff by contributing 0 to the public good. Contributing to the public good does, however, increase total payoffs in the group because each $1 contributed is converted to $2. For example, if everyone keeps their $10 then they each get $10. But, if everyone contributes $10 then they each get $20.
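For anyone who wants to play with the numbers, here is a minimal sketch of the payoff function using the parameters in the example above (4 players, $10 endowment, contributions doubled and shared equally); the function name and layout are my own, not taken from any particular experiment's code.

```python
# Linear public good game: each player keeps what they do not contribute
# and receives an equal share of the multiplied group account.
def payoffs(contributions, endowment=10, multiplier=2):
    n = len(contributions)
    share = multiplier * sum(contributions) / n   # each player's share of the group account
    return [endowment - c + share for c in contributions]

print(payoffs([0, 0, 0, 0]))      # everyone keeps their $10 -> [10, 10, 10, 10]
print(payoffs([10, 10, 10, 10]))  # everyone contributes     -> [20, 20, 20, 20]
print(payoffs([0, 10, 10, 10]))   # a free rider earns $25; the contributors get $15
```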
         The typical outcome in a public good experiment is as follows: Average contributions in the first round of play are around 50% of endowments. Then, with repetition, contributions slowly fall to around 20-30% of endowments. So, what does that tell us?
          The standard interpretation is to appeal to kindness. Lots of people are seemingly willing to contribute to the public good. This is evidence of cooperative behaviour. The fall in contributions can then be explained by reciprocity. Basically, those who contribute get frustrated at those who do not contribute and so lower their own contribution over time. There is no doubt that this story fits observed data. But, there is another, arguably much simpler, explanation.
This alternative interpretation appeals to confusion. Imagine what would happen if people did not understand the experimental instructions. Then we would expect roughly random choice in the first round, which would give average contributions of around 50% of endowments. And then we would expect contributions to fall over time as subjects come to better understand the game. This also fits the pattern of observed behaviour quite well. So, how do we rule out confusion?
One approach is to come up with a game which is similar to a linear public good game in terms of instructions but where there is no role for kindness. James Andreoni, in his study 'Cooperation in public-goods experiments: kindness or confusion?', looked at one possibility. He considered a game where subjects are ranked based on how much they contribute to the public good and then paid based on their rank, with those who contribute least getting the highest payoff. The crucial thing about this game is that it is constant-sum, meaning that group payoffs are the same no matter what, and so there is no possibility to be cooperative. Indeed, there is no incentive to do anything other than contribute 0. Contributions in this Rank game can, therefore, be attributed to confusion rather than kindness, as the stylized sketch below illustrates.
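Here is a stylized sketch of a rank-based, constant-sum payoff rule; the prize amounts and tie-breaking are hypothetical choices of mine, not Andreoni's actual payment scheme, but they capture the key feature that total payoffs are fixed and the lowest contributor gets the largest prize.

```python
# Stylized rank game: prizes are fixed, so group payoffs are constant-sum,
# and prizes are awarded from the lowest to the highest contributor.
def rank_payoffs(contributions, prizes=(12, 10, 8, 6)):
    order = sorted(range(len(contributions)), key=lambda i: contributions[i])
    payoff = [0] * len(contributions)
    for prize, player in zip(prizes, order):
        payoff[player] = prize
    return payoff

print(rank_payoffs([0, 3, 5, 10]))   # [12, 10, 8, 6] -> total is always 36
print(rank_payoffs([10, 3, 5, 0]))   # [6, 10, 8, 12] -> contributing 0 secures the top prize
```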
The graph below summarises the results. In the standard game (Regular) we see that average contributions start at just over 50% and then fall to around 30%. In the Rank game they start at around 30% and fall to around 5%. If we interpret contributions in the Rank game as confusion then around half of first-round contributions are confusion, but confusion soon disappears as a factor.


If the approach taken by Andreoni was beautifully subtle, then that taken by Daniel Houser and Robert Kurzban, in a follow-up study 'Revisiting kindness and confusion in public goods experiments', was remarkably blunt. They considered a game where a person was in a group with three computers. Clearly, if the other group members are computers then there is nothing to be gained by kindness. Again, therefore, any positive contribution can be interpreted as confusion.
The graph below summarizes the results. In this case confusion seems to play a bigger role. Most contributions in the first round seem to be confusion because there is not much difference between playing with humans or with computers. Moreover, contributions take longer to fall in the Computer game than they did in the Rank game.


So what should we take from this? At face value the results of both studies give cause for concern. It seems that a significant part of behaviour is driven by confusion rather than kindness, particularly in the first round. There is, therefore, a danger of over-interpreting results. Things, however, may not be as bad as this would suggest. First, the Rank game is a bit complicated and the Computer game a bit weird, and so there is more scope for confusion here than in the standard public good game. For instance, subjects may not really take in that they are playing with computers.
We can also analyse individual data in more detail to test specific theories of reciprocity. If people respond systematically to the past history of contributions then we can be more confident that confusion is not the main thing at work. Recent studies are more in this direction. This, though, reiterates the need for experiments to work alongside theory. Without a theory it is much harder to distinguish confusion from systematic behaviour, and confusion may be an important driver of behaviour in the lab.