Monday, 22 June 2015

Nash bargaining solution

Following the tragic death of John Nash in May I thought it would be good to explain some of his main contributions to game theory. Where better to start than the Nash bargaining solution? This is surely one of the most beautiful results in game theory and was completely unprecedented. All the more remarkable that Nash came up with the idea at the start of his graduate studies!
         The Nash solution is a 'solution' to a two-person bargaining problem. To illustrate, suppose we have Adam and Beth bargaining over how to split some surplus. If they fail to reach agreement they get payoffs €a and €b respectively. The pair (a, b) is called the disagreement point. If they agree then they can achieve any pair of payoffs within some set F of feasible payoff points. I'll give some examples later. For the problem to be interesting we need there to be some point (A, B) in F such that A > a and B > b. In other words Adam and Beth should be able to gain from agreeing.
           The solution is a pair of payoffs (A*, B*) that Adam and Beth should agree on. Nash gave a list of axioms that, he suggested, any solution should satisfy. Informally, these are: 
 
(1) Pareto efficiency: There must be no feasible point that would make both Adam and Beth better off.
 
(2) Individual rationality: Adam and Beth must do at least as well as the disagreement point.
 
(3) Scale invariance: If we apply a positive affine transformation to the payoffs, e.g. converting Adam's payoff from euros to dollars, then the solution merely needs to be transformed in the same way. In other words, the solution is 'the same' whether we use euros, dollars, roubles or anything else. 
 
(4) Symmetry: If Adam and Beth have symmetrical bargaining positions they should get the same payoff.
 
(5) Independence of irrelevant alternatives: If we eliminate some feasible points that were not a solution then the solution should stay the same.
 
Nash showed that there is a unique solution satisfying all five of these axioms: it is the feasible pair (A*, B*) that maximizes the Nash product (A - a)(B - b). This is a remarkably succinct result. It is also easy to apply. A beautiful theorem! To understand the result better we need to delve into the proof.
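          Before delving in, here is a minimal numerical sketch (in Python; the function and variable names are my own, purely for illustration) of what the theorem asks us to compute: search the feasible, individually rational points for the one that maximizes the Nash product.

    # Minimal sketch: pick the feasible point that maximizes the Nash product
    # (A - a)(B - b). Names are illustrative, not from the post.
    def nash_solution(frontier, disagreement):
        """frontier: list of (A, B) payoff pairs on the Pareto frontier of F.
        disagreement: the pair (a, b) the players get if they fail to agree."""
        a, b = disagreement
        # Individual rationality: only points at least as good as disagreement qualify.
        candidates = [(A, B) for (A, B) in frontier if A >= a and B >= b]
        return max(candidates, key=lambda point: (point[0] - a) * (point[1] - b))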
            Consider, first, a very simple example. Suppose that Adam and Beth are bargaining over how to split €100. If they disagree they each get €0. The feasible set is the blue triangle in the figure below. What is the solution? The Pareto efficiency axiom requires A* + B* = 100. The symmetry axiom requires A* = B*. Only one point satisfies both these axioms and that is A* = B* = €50. To check this agrees with maximizing the Nash product, note that the disagreement payoffs are zero, so we need to maximize AB. Plugging in B = 100 - A we get A(100 - A). Maximizing this with respect to A does indeed give A* = 50 as desired.
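          The same answer drops out of the numerical sketch above (nash_solution is the illustrative function defined there):

    # Split €100 with disagreement point (0, 0); frontier is A + B = 100.
    frontier = [(A, 100 - A) for A in range(0, 101)]
    print(nash_solution(frontier, (0, 0)))   # -> (50, 50)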


          Given the symmetry of the first example there is nothing particularly controversial or exciting about the solution that Adam and Beth should split things equally. So, let us consider a slightly more complex example. Suppose that there are 100 tokens up for grabs. Any token given to Beth is worth €1 and a token given to Adam is worth €2. Also, suppose that if they disagree Beth gets €10 and Adam gets €30. The figure below depicts this scenario.




         Things are now asymmetric. What shall we do? We can apply the scale invariance axiom to convert the bargaining problem into one that is symmetric. It is necessary to solve some simple equations to do so. I'll illustrate with one possibility. Suppose for Beth, given any amount €B, we take off 10 and then multiply by 4/3. For Adam we take off 30 and then multiply by 2/3. Do this and we obtain the bargaining problem depicted below. This problem is symmetric and very similar to the first example. (The only difference is that negative payoffs are now possible, meaning that we also need to apply individual rationality.)
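          As a sanity check on the rescaling, the following sketch (using the token set-up described above, with Adam's payoff listed first) confirms that the transformed frontier is A' + B' = 100 and the transformed disagreement point is (0, 0):

    # Token frontier: if Beth gets t tokens she gets €t and Adam gets €2(100 - t).
    for t in range(0, 101):
        A, B = 2 * (100 - t), t
        A_new = (A - 30) * 2 / 3                  # Adam: take off 30, multiply by 2/3
        B_new = (B - 10) * 4 / 3                  # Beth: take off 10, multiply by 4/3
        assert abs(A_new + B_new - 100) < 1e-9    # transformed frontier: A' + B' = 100
    # The disagreement point (30, 10) maps to (0, 0), so the problem is now symmetric.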


            This new bargaining problem has a unique point satisfying Pareto efficiency and symmetry. Namely, €50 each. This means that to satisfy Pareto efficiency, symmetry and scale invariance the original problem must also have a unique solution. Moreover, we just need to reverse the scaling to find the answer. Doing so we get €105 for Adam and €47.50 for Beth. You can check that this maximizes the Nash product as desired.
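            For instance, feeding the token problem into the earlier numerical sketch (with half-token steps so that €47.50 lies on the grid) reproduces the answer:

    # (Adam, Beth) payoff pairs on the token frontier; disagreement point is (30, 10).
    frontier = [(200 - t, t / 2) for t in range(0, 201)]   # t counts half-tokens for Beth
    print(nash_solution(frontier, (30, 10)))   # -> (105.0, 47.5)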
           Consider a final example depicted below. This is similar to the first example except that Adam cannot get more than €40. No amount of scaling is going to make this example symmetric. So, we need a new trick. 
         Suppose we scale the respective payoffs as follows. For Adam we multiply any amount €A by 15/12 and for Beth we multiply any amount €B by 5/6. Then we obtain the bargaining problem depicted below. It may not be immediately clear that this has helped much. We can, however, apply independence of irrelevant alternatives. In particular, consider our first example of splitting €100 (orange dotted line). The feasible set of the current bargaining problem is a subset of the feasible set of the split €100 problem. Moreover, the €50 each solution to the split €100 problem is in the feasible set of this new problem. 

             So, in order to satisfy Pareto efficiency, symmetry, scale invariance and independence of irrelevant alternatives there must be a unique solution to the original problem. The solution of the split €100 bargaining problem is, of course, €50 each. Since that point remains feasible after the scaling, independence of irrelevant alternatives tells us it is also the solution of the scaled problem. Scaling this back to the original problem we get a solution of €40 for Adam and €60 for Beth. You can again check this is consistent with maximizing the Nash product.
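             Again the numerical sketch agrees:

    # Split €100 but Adam (first coordinate) can get at most €40; disagreement is (0, 0).
    frontier = [(A, 100 - A) for A in range(0, 41)]
    print(nash_solution(frontier, (0, 0)))   # -> (40, 60)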
            This last example shows that the solution of any bargaining problem can, with the use of scale invariance and independence of irrelevant alternatives, be equated with the solution of a symmetric bargaining problem. And a symmetric bargaining problem has an obvious solution. Herein lies the beauty of Nash's bargaining solution.
             The Nash solution has stood the test of time and remains in common use. Many, however, have questioned the independence of irrelevant alternatives axiom. In our third example, for instance, we might ask whether Beth could hope to do better than €60 given her strong bargaining position. The experimental evidence, I would say, is largely supportive of the axiom Nash proposed. That though is a topic for another time. 

Saturday, 13 June 2015

Is it easier to provide a threshold public good if potential contributors are poor?

A threshold (or step-level) public good is a good that would benefit members of a group but can only be provided if there are sufficient contributions to cover the cost of the good. A local community raising funds for a new community centre is one example. Flatmates trying to get together enough money to buy a new TV is another.
         There are some fundamental strategic parameters in any threshold public good game: number of group members (n), the threshold amount of money needed (T), the value of the good if provided (V), and the endowment of money that group members have available to contribute (E). Early experimental studies looked at the role of n, T and V but had little to say about E. This raised the intriguing question of whether E matters. Is it 'easier' to provide a threshold public good if group members are relatively rich or poor?
         To find out we needed to run some experiments. In two recently published papers, with Federica Alberti and Anna Stepanova, we report our findings. In this blog entry I will talk about a study with Federica Alberti published in Finanz Archiv. This study is distinguished by the fact that we looked at games with a refund or money-back guarantee if contributions fall short of the threshold.
         The benchmark game in the experimental literature has n = 5, T = 125, V = 50 and E = 55. This means that if group members split the cost of providing the public good they would each have 55 - 125/5 = 30 left. Call this the endowment remainder. We compared this benchmark to games where, everything else equal, E = 30 or E = 70, giving endowment remainders of 5 and 45. You can think of these as games where group members are poor and rich, respectively. We also looked at two further games where the endowment remainder was 5 or 45 but T and V varied. This allowed us to check that our results are robust.
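          To fix ideas, here is a minimal sketch of the payoffs in the refund version of the game, under the assumption (standard in this literature, but not spelled out above) that a contributor keeps whatever she does not contribute and receives V on top if total contributions reach the threshold, while all contributions are returned in full otherwise:

    # Payoff sketch for the threshold public good game with a money-back guarantee.
    # Parameter names follow the text: E endowment, T threshold, V value of the good.
    def threshold_payoff(own_contribution, total_contributions, E=55, T=125, V=50):
        if total_contributions >= T:
            return E - own_contribution + V   # good provided
        return E                              # contributions refunded in full

    # Benchmark game: five members splitting the cost contribute 25 each,
    # leaving the endowment remainder 55 - 25 = 30 plus the value of the good.
    print(threshold_payoff(25, 5 * 25))   # -> 80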
          The figure below summarizes our overall results. You can see that there is some weak evidence of a lower success rate for the intermediate level of endowment. The differences, though, are hardly large. So, does the endowment make any difference? We claim it does.
 
 
         To see why, first note that successfully providing the public good requires coordination amongst group members. They want to contribute just enough to pay for the good. If group members  are relatively poor then it should be simple to coordinate because the good is only provided if all of them contribute most of their endowment - so there is nothing to disagree about. If group members are relatively rich then it should also be simple to coordinate because they have lots of money to spare - it is not worth disagreeing. The intermediate case may be more tricky.
          If this conjecture is correct we would expect groups to learn to coordinate when the endowment remainder is small or large. Our subjects played the game 25 times and so we can check for this. The next figure summarizes what happened in the last five plays of the game. As we would expect the success rate is now significantly higher with a low or high endowment.
 
 
          Another way to check our conjecture is to look at the variance in contributions around the threshold. The better group members are at coordinating, the lower should be the variance. The next figure summarizes the variance of contributions in the last five plays of the game. Here there is a big difference.  
 
 
         Our results, therefore, suggest that it is 'easier' for groups to provide the public good when group members are relatively poor or rich. The important caveat is that there should be sufficient time to learn how to coordinate. For an intermediate endowment group members seemingly could not coordinate no matter how much time they had. So, the endowment does matter.
           To turn this into practical guidance for fundraisers: it suggests donors need to feel either critical - the poor case - or that giving will not cost them much - the rich case. It would be dangerous to enter the intermediate territory where, say, the donor is led to believe that the good could be provided without them - they are not critical - and that their donation would involve personal sacrifice - they are not rich.


Saturday, 6 June 2015

Leadership in the minimum effort game

The minimum effort (or weakest link) game is fascinating - simple, yet capable of yielding profound insight. The basic idea is that there is a group of people who individually have to decide how much effort to put into a group task. Effort is costly for the individual but beneficial for the group. Crucially, group output is determined by the minimum effort that any one group member puts into the task. A classic example is an airline flight: If any person involved in the flight - pilot, fuel attendant, mechanic, luggage handler etc. - gets delayed, then the flight is delayed, no matter how hard others try.
           In experiments the minimum effort game is usually reduced to the matrix in Figure 1. Here subjects are asked to choose a number between 1 and 7 with the interpretation that a higher number is higher effort. Someone choosing effort 1 is guaranteed a payoff of 70. Someone who chooses 2 gets 80 if everyone else chooses 2 or more, but only gets 60 if someone in the group chooses 1. Someone who chooses 7 can get from 10 to 130 depending on the choice of others.
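          The payoff numbers quoted are all consistent with a simple linear rule: 60 + 20*(group minimum) - 10*(own effort). A minimal sketch, assuming that rule (the exact matrix is in the figure, which is not reproduced here):

    # Payoff rule consistent with the numbers in the text (an assumption about the
    # exact matrix): 60 + 20 * group_minimum - 10 * own_effort, efforts from 1 to 7.
    def min_effort_payoff(own_effort, efforts_of_others):
        group_min = min([own_effort] + list(efforts_of_others))
        return 60 + 20 * group_min - 10 * own_effort

    print(min_effort_payoff(1, [7, 7, 7]))   # -> 70: choosing 1 guarantees 70
    print(min_effort_payoff(7, [7, 7, 7]))   # -> 130: everyone at 7, the best outcome
    print(min_effort_payoff(7, [1, 7, 7]))   # -> 10: high effort punished by one low choice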
 
     
       Suppose the minimum choice of others is 5. What should you choose? You do best to choose 5 and get a payoff of 110. What if the minimum of others is 6? You do best to choose 6 and get payoff 120. Following this logic we can see that 'everyone choose the same number' is a Nash equilibrium. Now here are the key points: Everyone choose 1 is the worst equilibrium while everyone choose 7 is the best equilibrium; but choosing 7 is very risky because others might let you down. To return to the airline example: it is no use the pilot racing around to get the flight ready to go if the fuel attendant is having an after-lunch snooze.
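        The best-reply logic can be checked mechanically with the payoff sketch above: whatever the minimum of the others' efforts, matching it exactly is the unique best reply, which is why every common effort level is an equilibrium.

    # Best-reply check, using min_effort_payoff from the sketch above.
    for others_min in range(1, 8):
        best_reply = max(range(1, 8), key=lambda e: min_effort_payoff(e, [others_min]))
        assert best_reply == others_min   # matching the minimum is always the best reply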
         In experiments we typically observe that effort converges over time to 1 - the worst equilibrium. Most people start by choosing high effort. But it takes only one bad egg to ruin the team and that drags average effort down. This is a bad outcome! The airline is not going to be on time. So how to fix things?
          An obvious answer seemed to be leadership. So, together with Mark van Vugt and Joris Gillet we ran some experiments on leadership. The basic idea was that one person chooses first and then others follow. We reasoned that if the leader chose 7 this would signal to the others in the group to also choose 7. Problem solved!
           Things did not work out quite as well as expected. The figure below summarizes average effort over the 10 rounds subjects played the game. In a 4 player simultaneous choice version (Sim4) effort fell over time. In the leadership treatments, with either an endogenously or exogenously chosen leader (End and Exo), effort was higher but not by much. Moreover, it was no higher than we got if we just took one person out of the group (Sim3). This is not a ringing endorsement of leadership. We found that leadership failed to increase efficiency as much as expected because leaders were not bold enough. If a leader chose high effort, followers responded; but leaders were reluctant to choose high effort.


 
        A recent study by Selhan Garip Sahin, Catherine Eckel and Mana Komai adds a slightly different twist. The figure below shows the average contributions they observed in a 6 player version (where effort could go up to 9). They looked at leadership by example (Exemplar) and leadership by 'communication' (manager). Their overall results are very similar to ours. Again, leadership merely seems to stabilize effort and stop it falling. Their results, though, suggested more blame should be placed on followers. Specifically, leader effort increased over the rounds while follower effort fell.



        There is still a lot we can learn about leadership in the minimum effort game. The failure of leadership to push effort up to the efficient level does though clearly illustrate the difficulty of getting groups to coordinate. And note that this is not because of some social-dilemma-like incentive to free-ride. There is no way to free-ride in this game. The problem is one of group members trusting that others will put in high effort. Trust, it seems, does not come easy.