The linear public good game is, as I have mentioned before on this blog, the workhorse of experiments on cooperation. In the basic version of the game there is a group of, say, 4 people. Each person is given an endowment of, say, $10 and asked how much they want to contribute to a public good. Any money a person does not contribute is theirs to keep. Any money that is contributed is multiplied by some factor, say 2, and shared equally amongst group members.
Note that for every $1 a person does not contribute they get a return of $1. But, for every $1 they do contribute they get a return of $0.50 (because the $1 is converted to $2 and then shared equally amongst the 4 group members). It follows that a person maximizes their individual payoff by contributing 0 to the public good. Contributing to the public good does, however, increase total payoffs in the group because each $1 contributed is converted to $2. For example, if everyone keeps their $10 then they each get $10. But, if everyone contributes $10 then they each get $20.
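The arithmetic above is easy to check directly. Here is a minimal sketch of the payoff rule, using the parameters from the post (4 players, $10 endowment, multiplier of 2); the function name is my own for illustration:

```python
# Payoffs in a linear public good game: each player keeps what they do
# not contribute, and total contributions are multiplied and shared
# equally among the group.

def payoffs(contributions, endowment=10, multiplier=2):
    """Return each player's payoff given a list of contributions."""
    n = len(contributions)
    pot = multiplier * sum(contributions)  # contributions are doubled...
    share = pot / n                        # ...and shared equally
    return [endowment - c + share for c in contributions]

# Everyone keeps their $10: each earns $10.
print(payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]
# Everyone contributes $10: each earns $20.
print(payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
# A lone contributor earns $5 while the free-riders earn $15 each,
# which is why contributing 0 maximizes individual payoff.
print(payoffs([10, 0, 0, 0]))     # [5.0, 15.0, 15.0, 15.0]
```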
The typical outcome in a public good experiment is as follows: Average contributions in the first round of play are around 50% of endowments. Then, with repetition, contributions slowly fall to around 20-30% of endowments. So, what does that tell us?
The standard interpretation is to appeal to kindness. Lots of people are seemingly willing to contribute to the public good. This is evidence of cooperative behaviour. The fall in contributions can then be explained by reciprocity. Basically, those who contribute get frustrated at those who do not contribute and so lower their own contribution over time. There is no doubt that this story fits observed data. But, there is another, arguably much simpler, explanation.
This alternative interpretation appeals to confusion. Imagine what would happen if people did not understand the experiment instructions. We would then expect roughly random choices in the first round, which would give average contributions of around 50% of endowments. And we would expect contributions to fall over time as subjects come to better understand the game. This also fits the pattern of observed behaviour quite well. So, how can we rule out confusion?
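The first step of the confusion story is simple to verify with a quick simulation: if subjects pick contributions uniformly at random between $0 and $10, average contributions sit near 50% of the endowment. This is just a sketch of the statistical point, not anything from the studies themselves:

```python
# If contributions are chosen uniformly at random from [0, 10], the
# average contribution is close to $5, i.e. 50% of the endowment.
import random

random.seed(1)  # fixed seed so the sketch is reproducible
draws = [random.uniform(0, 10) for _ in range(100_000)]
avg_share = sum(draws) / len(draws) / 10  # average as a fraction of endowment
print(round(avg_share, 2))
```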
One approach is to come up with a game that is similar to a linear public good game in terms of instructions but where there is no role for kindness. James Andreoni, in his study Cooperation in public-goods experiments: kindness or confusion, looked at one possibility. He considered a game where subjects are ranked based on how much they contribute to the public good and then paid based on their rank, with those who contribute least getting the highest payoff. The crucial thing about this game is that it is constant-sum: group payoffs are the same whatever subjects do, and so there is no possibility to be cooperative. Indeed, there is no incentive to do anything other than contribute 0. Contributions in this Rank game can, therefore, be attributed to confusion rather than kindness.
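The constant-sum property is worth seeing concretely. A sketch is below: players are paid by the rank of their contribution, with the lowest contributor earning the most, so the group total never moves whatever anyone contributes. The payoff-by-rank numbers here are made up for illustration and are not Andreoni's actual parameters:

```python
# In a rank-based game, payoffs depend only on the ordering of
# contributions, so the group total is constant: contributing more can
# only shuffle money between players, never create it.

RANK_PAYOFFS = [16, 12, 8, 4]  # hypothetical: lowest contributor earns most

def rank_payoffs(contributions):
    """Pay each player by the rank of their contribution (ties broken by position)."""
    order = sorted(range(len(contributions)), key=lambda i: contributions[i])
    result = [0] * len(contributions)
    for rank, i in enumerate(order):
        result[i] = RANK_PAYOFFS[rank]
    return result

# The group total is 40 no matter what is contributed:
print(sum(rank_payoffs([0, 0, 0, 0])))   # 40
print(sum(rank_payoffs([10, 5, 2, 0])))  # 40
# The top contributor receives the lowest payoff:
print(rank_payoffs([10, 5, 2, 0]))       # [4, 8, 12, 16]
```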
The graph below summarises the results. In the standard game (Regular) we see that average contributions start at just over 50% and then fall to around 30%. In the Rank game they start at around 30% and fall to around 5%. If we interpret contributions in the Rank game as confusion, then around half of first-round contributions reflect confusion, but confusion soon disappears as a factor.
If the approach taken by Andreoni was beautifully subtle, then that taken by Daniel Houser and Robert Kurzban, in a follow up study Revisiting kindness and confusion in public goods experiments, was remarkably blunt. They considered a game where a person was in a group with three computers. Clearly, if the other group members are computers then there is nothing to be gained by kindness. Again, therefore, any positive contribution can be interpreted as confusion.
The graph below summarises the results. In this case confusion seems to play a bigger role. Most contributions in the first round seem to be confusion because there is not much difference between playing with humans and playing with computers. Moreover, contributions take longer to fall in the Computer game than they did in the Rank game.
So what should we take from this? At face value the results of both studies give cause for concern. It seems that a significant part of behaviour is driven by confusion rather than kindness, particularly in the first round. There is, therefore, a danger of over-interpreting results. Things, however, may not be as bad as this would suggest. First, the Rank game is a bit complicated and the Computer game a bit weird, and so there is more scope for confusion here than in the standard public good game. For instance, subjects may not fully grasp that they are playing with computers.
We can also analyse individual data in more detail to test specific theories of reciprocity. If people adjust their contributions systematically in response to the past history of group contributions, then we can be more confident that confusion is not the main thing at work. More recent studies have moved in this direction. This, though, reiterates the need for experiments to work alongside theory: without a theory it is much harder to distinguish confusion from systematic behaviour, and confusion may be an important driver of behaviour in the lab.