
Conditional cooperation: Kindness or confusion

A recent study by Maxwell Burton-Chellew, Claire El Mouden and Stuart West, published in the Proceedings of the National Academy of Sciences, challenges one of experimental economics' most robust findings. They argue that conditional cooperation reflects subjects' confusion over experimental instructions rather than social preferences. So, what to make of this result?
Let me begin by providing a little background on conditional cooperation. A study by Urs Fischbacher, Simon Gachter and Ernst Fehr, published in Economics Letters in 2001, looked at how people behave in a public good game when they get to see the contributions of others. More specifically:
They considered a setting with 4 people. Each person could contribute up to 20 tokens into a public project. Any tokens not contributed were worth, say, $1 to that person. Any tokens contributed were worth $0.40 to everyone in the group. For example, if a person keeps 5 tokens and total contributions to the group (including her 15) are 35, then she gets $5 + 0.4(35) = $19. This is the standard linear public good setting. The slight twist is to have 3 of the people choose their contributions before the 4th person. The 4th person can then condition her contribution on the observed contributions of the others. The final thing to note is that a strategy method was used, meaning that subjects were asked what they would do under any eventuality; that is, what would they do if the average contribution of the others was 0, or 1, or 2, and so on up to 20.
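To make the payoff arithmetic concrete, here is a minimal sketch in Python. The function name `payoff` and the parameter name `mpcr` (marginal per-capita return) are my own labels for illustration, not terms from the study:

```python
def payoff(kept_tokens, total_contributed, token_value=1.0, mpcr=0.4):
    """Payoff in a linear public good game: the value of tokens kept,
    plus the marginal per-capita return on ALL group contributions."""
    return token_value * kept_tokens + mpcr * total_contributed

# The example from the text: keep 5 of 20 tokens while total group
# contributions (including this person's 15) are 35.
print(payoff(5, 35))  # -> 19.0
```

Note that because the per-token return from the project ($0.40) is less than the value of a kept token ($1), contributing one more token always lowers your own payoff, which is what makes free-riding the self-interested choice.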
Fischbacher, Gachter and Fehr found that around 50% of subjects were conditional co-operators. To a rough approximation, these subjects matched the average contribution of others. So, if the average contribution of others was 15, they contributed 15, and so forth. This finding has been replicated many, many times, including in a study of my own with Denise Lovett, published in Games. Moreover, the idea of conditional cooperation fits well with the more general behaviour we observe in public good games. For instance, it can explain why contributions decline with repeated interaction - if 50% of people are free-riders and 50% are conditional co-operators, the average contribution will naturally fall over time.
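The decline can be illustrated with a toy simulation, under the stylised assumptions (mine, for illustration) that free-riders always contribute 0 and conditional co-operators exactly match the previous round's group average, starting from full contribution:

```python
def simulate_decline(rounds=5, start=20, n_free=2, n_cond=2):
    """Track the group average contribution over repeated rounds.
    Free-riders contribute 0 every round; conditional co-operators
    match the previous round's group average (starting at `start`)."""
    averages = []
    cond_contribution = start
    for _ in range(rounds):
        avg = (n_free * 0 + n_cond * cond_contribution) / (n_free + n_cond)
        averages.append(avg)
        cond_contribution = avg  # matched by conditional co-operators next round
    return averages

print(simulate_decline())  # -> [10.0, 5.0, 2.5, 1.25, 0.625]
```

With a 50/50 mix the average halves each round, so contributions collapse towards zero even though half the group is willing to cooperate conditionally.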
The latest study in PNAS questions all this. Here's what they did: they gave subjects the same instructions as those used in the original study by Fischbacher, Gachter and Fehr, and then added: “Before you begin, you are going to play this game in a special case. In this special case, you will be in a group of just you and the COMPUTER; The computer will pick the decisions of the other 3 players. The computer will pick their decisions randomly and separately (so each computer player will make its own random decision); You are the only real person in the group, and only you will receive any money.”
In this special case there is absolutely no reason to contribute to the public good. If a person does contribute it merely benefits a computer, whatever that means. Yet the study found that behaviour against the computer was almost exactly the same as behaviour with people. This, it is claimed, is evidence that subjects don't understand the instructions, and that conditional cooperation is an artefact of this misunderstanding. Why else would someone cooperate with a computer?
To cut to the chase, I don't buy this argument, for several reasons. First, hundreds, if not thousands, of experimental subjects have behaved as conditional co-operators over the years. Surely at least some of these understood the instructions! After all, the subjects are typically students at top universities and the instructions are not particularly difficult to understand. Even so, we cannot ignore the fact that the subjects in this study did behave oddly when playing against computers. How can we explain that?
          There is a worrying circularity in the reasoning used by Burton-Chellew, El Mouden and West. In particular, they essentially claim that subjects cannot understand instructions about a public good game but can understand the bit of the instructions that tells them they are playing a computer. Well, public good games are ubiquitous in everyday life, while cooperating with a computer is an odd thing indeed. So, it seems to me much more plausible that subjects understood instructions about the public good game but did not react to the bit at the end telling them they were playing a computer.
Support for this latter interpretation is provided by the observation that behaviour against the computer was so similar to behaviour against humans. My study with Denise Lovett, among others, has shown that the behaviour of conditional co-operators systematically changes with incentives. This strongly suggests that some difference should have been observed when subjects played against computers. But no difference was observed. The simplest explanation seems to be that subjects did not grasp what it meant to play against a computer.
            The authors offer a counter-argument to this line of reasoning. They show that conditional cooperation correlates with 'misunderstanding of the game'. Their measure of misunderstanding is, however, open to interpretation; they asked subjects 'In the game, if a player wants to maximize his or her earnings in any one particular round, does the amount they should contribute depend on what other people in their group contribute?'. This question is carefully worded to have a unique answer - no. And I would want someone in a game theory exam to answer no. But this is not a game theory exam! Once we acknowledge that fact, this question looks more like a measure of free-riding than understanding. Indeed, the many conversations I have had with students and experimental subjects, on this kind of issue, would suggest that those answering yes or maybe are likely to understand the game better than those answering no - they just need a bit of training before sitting a game theory exam.
Clearly this latest study challenges the economist's interpretation of conditional cooperation and has to be taken seriously. But I think the evidence is not compelling enough to ditch 15 years of accumulated findings quite yet. No doubt some conditional cooperation is down to confusion. The claim that 100% is due to confusion, however, seems extreme. Hopefully, future work can narrow down the estimate.
