Kindness or confusion in public good games

The linear public good game is, as I have mentioned before on this blog, the workhorse of experiments on cooperation. In the basic version of the game there is a group of, say, 4 people. Each person is given an endowment of, say, $10 and asked how much they want to contribute to a public good. Any money a person does not contribute is theirs to keep. Any money that is contributed is multiplied by some factor, say 2, and shared equally amongst group members.
         Note that for every $1 a person does not contribute they get a return of $1. But, for every $1 they do contribute they get a return of $0.50 (because the $1 is converted to $2 and then shared equally amongst the 4 group members). It follows that a person maximizes their individual payoff by contributing 0 to the public good. Contributing to the public good does, however, increase total payoffs in the group because each $1 contributed is converted to $2. For example, if everyone keeps their $10 then they each get $10. But, if everyone contributes $10 then they each get $20.
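The arithmetic above is easy to check in a few lines of code. This is a minimal sketch, assuming the example parameters from the text (a 4-person group, a $10 endowment and a multiplier of 2); the function name is my own:

```python
def payoffs(contributions, endowment=10, multiplier=2):
    """Each player's payoff in a linear public good game."""
    n = len(contributions)
    pot = multiplier * sum(contributions)  # contributions are scaled up...
    share = pot / n                        # ...and shared equally
    return [endowment - c + share for c in contributions]

# Everyone free-rides: each keeps their $10.
print(payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]

# Everyone contributes fully: each ends up with $20.
print(payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]

# A lone contributor earns $5 while the free-riders earn $15 each.
print(payoffs([10, 0, 0, 0]))     # [5.0, 15.0, 15.0, 15.0]
```

The last example makes the tension vivid: full contribution maximizes the group total, but the lone contributor would have been $5 better off keeping the money.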
         The typical outcome in a public good experiment is as follows: Average contributions in the first round of play are around 50% of endowments. Then, with repetition, contributions slowly fall to around 20-30% of endowments. So, what does that tell us?
          The standard interpretation is to appeal to kindness. Lots of people are seemingly willing to contribute to the public good. This is evidence of cooperative behaviour. The fall in contributions can then be explained by reciprocity. Basically, those who contribute get frustrated at those who do not contribute and so lower their own contribution over time. There is no doubt that this story fits observed data. But, there is another, arguably much simpler, explanation.
          This alternative interpretation appeals to confusion. What would happen if people did not understand the experimental instructions? We would expect roughly random choices in the first round, which would give contributions of around 50% of endowments. And we would expect contributions to fall over time as subjects come to better understand the game. This also fits the pattern of observed behaviour quite well. So, how do we rule out confusion?
          One approach is to come up with a game that is similar to a linear public good game in terms of instructions but where there is no role for kindness. James Andreoni, in his study Cooperation in public-goods experiments: kindness or confusion, looked at one possibility. He considered a game where subjects are ranked by how much they contribute to the public good and then paid based on their rank, with those who contribute least getting the highest payoff. The crucial thing about this game is that it is constant-sum, meaning that total group payoffs are the same no matter what, and so there is no scope for cooperation. Indeed, there is no incentive to do anything other than contribute 0. Contributions in this Rank game can, therefore, be attributed to confusion rather than kindness.
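The constant-sum property can be verified with a toy version of the Rank game. The payoff schedule below is invented purely for illustration (Andreoni's actual parameters differ, and ties are broken here simply by order of appearance):

```python
# Hypothetical payoffs for ranks 1 (lowest contributor) to 4 (highest).
RANK_PAYOFFS = [16, 12, 8, 4]

def rank_payoffs(contributions):
    """Pay each player by rank: the lowest contributor earns the most."""
    order = sorted(range(len(contributions)), key=lambda i: contributions[i])
    result = [0] * len(contributions)
    for rank, i in enumerate(order):
        result[i] = RANK_PAYOFFS[rank]
    return result

print(sum(rank_payoffs([0, 3, 7, 10])))  # 40
print(sum(rank_payoffs([5, 5, 5, 5])))   # 40 -- the group total never changes
```

Because payment depends only on rank, the group total is fixed at 40 whatever anyone contributes; contributing can only move money away from yourself, so any positive contribution cannot be read as cooperation.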
          The graph below summarises the results. In the standard game (Regular) we see that average contributions start at just over 50% and then fall to 30%. In the Rank game they start at 30% and fall to around 5%. If we interpret contributions in the Rank game as confusion, then around half of the contributions in the first round are confusion, but confusion soon disappears as a factor.


             If the approach taken by Andreoni was beautifully subtle, then that taken by Daniel Houser and Robert Kurzban, in a follow-up study Revisiting kindness and confusion in public goods experiments, was remarkably blunt. They considered a game where a person was in a group with three computers. Clearly, if the other group members are computers then there is nothing to be gained by kindness. Again, therefore, any positive contribution can be interpreted as confusion.
           The graph below summarizes the results. In this case confusion seems to play a bigger role. Most first-round contributions appear to be confusion, because there is not much difference between playing with humans and playing with computers. Moreover, contributions take longer to fall in the Computer game than they did in the Rank game.


           So what should we take from this? At face value the results of both studies give cause for concern. It seems that a significant part of behaviour is driven by confusion rather than kindness, particularly in the first round. There is, therefore, a danger of over-interpreting results. Things, however, may not be as bad as this would suggest. First, the Rank game is a bit complicated and the Computer game a bit weird, and so there is more scope for confusion in them than in the standard public good game. For instance, subjects may not fully register that they are playing with computers.
           We can also analyse individual data in more detail to test specific theories of reciprocity. If people behave systematically relative to the past history of contributions, then we can be more confident that confusion is not the main thing at work. More recent studies have moved in this direction. This, though, reiterates the need for experiments to work alongside theory. Without a theory it is much harder to distinguish systematic behaviour from confusion, and confusion may be an important driver of behaviour in the lab.
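To illustrate the kind of individual-level test meant here, one simple check is whether a subject's contribution in round t tracks the other group members' average contribution in round t-1. A slope near 1 suggests conditional cooperation rather than noise. The data below are invented for illustration, and this is only a sketch of the idea, not any particular study's method:

```python
# Others' average contribution in the previous round (rounds 2..6)...
others_avg_prev = [5.0, 4.0, 3.5, 3.0, 2.0]
# ...and the same subject's own contribution in each of those rounds.
own_contrib     = [5.5, 4.5, 3.0, 2.5, 2.0]

# Ordinary least-squares slope of own contribution on others' lagged average.
n = len(own_contrib)
mean_x = sum(others_avg_prev) / n
mean_y = sum(own_contrib) / n
cov = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(others_avg_prev, own_contrib))
var_x = sum((x - mean_x) ** 2 for x in others_avg_prev)
slope = cov / var_x
print(round(slope, 2))  # 1.25
```

A confused subject choosing at random would show no systematic relationship (a slope near 0), whereas this made-up subject closely matches what the others did last round.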

  
