Wednesday, 8 November 2017

Rank dependent expected utility

Prospect theory is best known for its assumption that gains are treated differently to losses. Another crucial part of the theory, namely that probabilities are weighted, typically attracts much less attention. Recent evidence, however, suggests that probability weighting has a crucial role to play in many applied settings. So, what is probability weighting and why does it matter?

The basic idea of probability weighting is that people act as if they overweight events that happen with small probability and underweight events that happen with medium to large probability. In their famous paper on 'Advances in prospect theory', Amos Tversky and Daniel Kahneman quantified this effect. They fitted experimental data to the equation

π(p) = p^γ / (p^γ + (1 - p)^γ)^(1/γ)

where γ is a parameter to be estimated. In interpretation, p is the actual probability and π(p) the weighted probability. The figure below summarizes the kind of effect you get. Tversky and Kahneman found that a value of γ around 0.61 best matched the data. This means that something which happens with probability 0.1 gets a decision weight of around 0.2 (overweighting of small probabilities) while something that happens with probability 0.5 gets a decision weight of only around 0.4 (underweighting of medium to large probabilities).  
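
For anyone who wants to play with the numbers, here is a minimal Python sketch of the weighting function (the code and the function name are mine, not from the paper):

def tk_weight(p, gamma=0.61):
    # Tversky and Kahneman's probability weighting function
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

print(round(tk_weight(0.1), 2))  # ~0.19: a small probability is over-weighted
print(round(tk_weight(0.5), 2))  # ~0.42: a medium probability is under-weighted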



Why we observe this warping of probabilities is unclear. But the consequences for choice can be important. To see why, consider someone deciding whether to take on a gamble. Their choice is either to accept £10 for certain or to gamble, with a 10% chance of winning £90 and a 90% chance of winning nothing. The expected value of the gamble is 0.1 x 90 = £9. So, it does not look like a good deal. But, if someone converts a 10% probability into a decision weight of 0.2, the weighted value becomes 0.2 x 90 = £18. Suddenly the gamble looks great! This might explain the appeal of lottery tickets.

There is, though, a problem. It is not enough to simply weight all probabilities. This, as I will shortly explain, does not work. So, we need some kind of trick. While prospect theory was around in 1979, it was not until the early 1990s that the trick was found. That trick is rank dependent weighting. The gap of over 10 years in finding a way to deal with probabilities may help explain why probability weighting has had to play second fiddle to loss aversion. Let's, though, focus on the technical details.

Consider the example above. Here there are no obvious problems if we just weight probabilities. The 10% chance of winning is converted into a 0.2 decision weight while the 90% chance of winning nothing is converted into a decision weight of around 0.7. The overall weighted value is then 0.2 x £90 + 0.7 x £0 = £18. Everything looks fine.

So, consider another example. Suppose that the sure £10 is replaced by a gamble with a 10% chance of winning £10.09, a 10% chance of winning £10.08, a 10% chance of winning £10.07, and so on, down to a 10% chance of winning £10.00. If we simply weight each of these 10% probabilities as 0.2 then we get a value of 0.2 x 10.09 + 0.2 x 10.08 + ... + 0.2 x 10.00 = £20.09. This is absurd. A gamble that essentially gives £10 cannot be worth over £20! You might say that the problem here is that we have ended up with a combined weight of 2. If, though, we normalize weights to 1 we will no longer capture the over-weighting of small probabilities. So, normalizing is not, of itself, a solution.

The problem with the preceding approach is that we have weighted everything - good or bad - by the same amount. Rank dependent weighting does away with that. Here we rank outcomes from best to worst. The decision weight we place on an outcome is then the weighted probability of that outcome or something better, minus the weighted probability of something better.

In our original gamble the best outcome is £90 and the worst is £0. The weight we put on £90 is around 0.2 because there is a 10% chance of £90, no chance of anything better, and a 10% probability is given weight around 0.2. The weight we put on £0 is around 0.8 because it is the weighted probability of £0 or better, namely 1, minus the weighted probability of £90, around 0.2. So, not much changes in this example.

In the £10 gamble the best outcome is £10.09, the next best £10.08, and so on. The decision weight we put on £10.09 is around 0.2 because there is a 10% chance of £10.09 and no chance of anything better. Crucially, the weight we put on £10.08 is only around 0.1 because we take the weighted probability of £10.08 or better (a 20% chance, which gets weight around 0.3) minus the weighted probability of £10.09 (around 0.2). You can verify that winning £10.07, £10.06 and so on gets an even lower decision weight. Indeed, decision weights have to add to 1, and so the high weight on £10.09 is compensated by a lower weight on other outcomes. For completeness, the exact weights under the Tversky and Kahneman parameters can be computed as in the sketch below. Given that decision weights add to 1, the value of the gamble is going to be around £10. Common sense restored!
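
Here is that sketch in Python (my own code, assuming the γ = 0.61 estimate):

def tk_weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

# Outcomes ranked best to worst: £10.09 down to £10.00, each with probability 0.1
outcomes = [10.00 + 0.01 * k for k in range(9, -1, -1)]

# Rank dependent weight on the i-th best outcome: weighted probability of that
# outcome or better, minus the weighted probability of something strictly better
weights = [tk_weight(0.1 * (i + 1)) - tk_weight(0.1 * i) for i in range(10)]

print(round(sum(weights), 2))                                  # 1.0
print(round(sum(w * x for w, x in zip(weights, outcomes)), 2)) # ~10.04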




Generally speaking, rank dependent weighting means that we capture, and only capture, over-weighting of the extreme outcomes. So, we capture the fact that a person may be overly optimistic about winning £90 rather than £0, without picking up the perverse prediction that every unlikely event is over-weighted. The discussion so far has focused on gains but we can do the same thing with losses. Here we want to capture, and only capture, over-weighting of the worst outcomes.

So why does all this matter? There is mounting evidence that the weighting of probabilities can explain a lot of behavior, including the equity premium puzzle, the long-shot bias in betting, and the willingness of households to buy insurance at highly unfavorable premiums. For a review of the evidence see the article by Helga Fehr-Duda and Thomas Epper on 'Probability and risk: Foundations and economic implications of probability-dependent risk preferences'. It is easy to see, for instance, why overweighting of small probabilities could have potentially profound implications for someone's view of insurance. A very small probability of loss may be given a much higher decision weight. That makes insurance look like a good deal.

Tuesday, 10 October 2017

Richard Thaler and the Nobel Prize for behavioral economics

Officially, Richard Thaler won the Nobel Prize in Economics because he 'has incorporated psychologically realistic assumptions into analyses of economic decision-making. By exploring the consequences of limited rationality, social preferences, and lack of self-control, he has shown how these human traits systematically affect individual decisions as well as market outcomes'. 

An interesting thing about this quote is that nudge doesn't get a mention; indeed, it only just scrapes into the Academy's official press release. (In the more detailed popular information document it doesn't appear until page 5 of 6.) This is in stark contrast to the popular press: the BBC leads with 'Nudge' economist wins Nobel Prize, the Telegraph leads with 'Nudge' guru wins the Nobel Prize, and so on. To read the papers you would think that Nudge is all there is to it.

There is no doubt that Nudge has been a huge success and made Thaler famous (at least by economist standards). In terms of the Nobel prize, however, it is important to recognize that Nudge is just one of the many, many contributions Thaler has made to economics, and behavioral economics. Let me pick up three of those contributions here.

1. Thaler showed how dumb people can be when making economic decisions. The likes of Herbert Simon, Amos Tversky and Daniel Kahneman paved the way by showing that people can make decisions that are inconsistent with the standard way economists think about things. They, though, typically considered settings that are pretty complex, such as search, choice with risk, or how to interpret information. Thaler took this one stage further and showed that even for the most basic of economic decisions the standard economic model can go astray.

Consider, by way of illustration, the following example from the classic paper on 'Mental accounting and consumer choice':

Mr. S admires a $125 cashmere sweater at the department store. He declines to buy it, feeling that it is too extravagant. Later that month he receives the same sweater from his wife for a birthday present. He is very happy. Mr. and Mrs. S have only joint bank accounts.  

Standard economic theory says that the sweater is either worth $125 or not. But, there seems nothing extraordinary about Mr. S's behavior. To provide a framework within which to make sense of this, and much else, Thaler introduced the notion of mental accounting where we code gains and losses, evaluate purchases and observe budgetary rules. Mr. S would be breaking self-imposed rules to spend $125 from his 'everyday account' but an occasional gift funded from the 'gift account' is to be enjoyed. 

Once we see how easily the framing of a choice can influence behavior it is a relatively short step to Nudge and the idea that framing can be used to positively change behavior. (Crucial in this is also the recognition that people can have self-control problems.)  

2. As well as dumb, people can also be nice, and not so nice. In many ways economists have clung to the notion of selfishness for much longer than that of rationality. Work by Thaler helped turn the tide. Two papers with Daniel Kahneman and Jack Knetsch, 'Fairness as a constraint on profit seeking' and 'Fairness and the assumptions of economics', are particularly noteworthy. In the first paper we get a series of questions like the following:

A hardware store has been selling snow shovels for $15. The morning after a large snowstorm, the store raises the price to $20. Please rate this action as: Completely fair, acceptable, unfair, very unfair.

82% of subjects considered it unfair. Presumably that means they may decide not to buy the snow shovel; fairness matters. In the second paper we get some big advances in the study of the ultimatum game (the first use of the strategy method to look at willingness to reject, and the first look at the willingness of a third party to punish) and we see the dictator game for the first time. This may sound a bit technical but it was part of opening up the whole debate on how fairness works and how it can be modeled by economists.

3. Popularization is not the kind of thing that wins Nobel prizes, but it can be important in driving things forward. In a series of articles published in the Journal of Economic Perspectives (and subsequently turned into the book The Winner's Curse) Thaler and co-authors set out some of the key insights of behavioral economics. I will quote in full the introduction to one of the articles:

Economics can be distinguished from other social sciences by the belief that most (all?) behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets that (eventually) clear. An empirical result qualifies as an anomaly if it is difficult to "rationalize," or if implausible assumptions are necessary to explain it within the paradigm. This column presents a series of such anomalies. Readers are invited to suggest topics for future columns by sending a note with some reference to (or better yet copies of) the relevant research. Comments on anomalies printed here are also welcome. After this issue, the "Anomalies" column will no longer appear in every issue and instead will appear occasionally, when a pressing anomaly crosses Dick Thaler's desk. However, suggestions for new columns and comments on old ones are still welcome. Thaler would like to quash one rumor before it gets started, namely that he is cutting back because he has run out of anomalies. Au contraire, it is the dilemma of choosing which juicy anomaly to discuss that takes so much time.

The interesting thing about this is the target audience. This is about trying to convince economists that behavioral economics matters and should be taken seriously. That is a very hard sell indeed! But ultimately it seems to have worked.

With any Nobel prize there are going to be critics. And I can already hear some grumbles. But that seems to come more from ignorance than judgement. If we take Nudge out of the equation, the contributions of Thaler are clear enough. With Nudge there is no denying a lot of hyperbole from some policy makers and consultants. The truth, however, is that it has made a positive difference to policy making. That is worth celebrating.

Thursday, 7 September 2017

Honesty around the world

In my last post I looked at dishonesty in the banking industry. Sticking with a similar theme, this time I will look at dishonesty across different countries.
       Let us start with a study by David Pascual-Ezama and a long list of co-authors on 'Context dependent cheating: Experimental evidence from 16 countries'. They asked 90 students in 16 different countries to perform a very simple task: toss a black and white coin and record the outcome. If the coin came up white the student obtained a red Lindt Lindor Truffle. If it came up black they got nothing. Crucially, the coin toss took place in private and so the student could report whatever outcome they wanted. If they wanted a chocolate then they simply had to report white. (The study contrasted three different methods of reporting - form put in a box, form given to the experimenter or verbally telling the experimenter - but I will skip those details here.)
The chart below summarizes the country-wide outcomes by focusing on the proportion of the 90 students in each country who 'won' the chocolate. The blue bars give the distribution we would predict if the students reported honestly. As you would expect, the distribution is centered on a 50-50 success rate. Compared to this benchmark students were remarkably lucky. In all countries more than 50% of students won the chocolate and in some, such as Spain, the success rate was much higher than seems plausible. So, some students were dishonest (and hungry). Note, however, that the success rates are nowhere near the 100% we would expect if all students lied. So, many students were honest (or not so hungry). Indeed, we could conclude that most students were honest. There is also no compelling evidence of differences across countries. Spaniards won more than Danes but then someone has to come top and someone bottom. The differences we see here are not particularly large.
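
A quick back-of-envelope check in Python (my own, assuming honest reports are fair coin tosses) shows why the pattern is telling:

from math import comb

# chance that more than half of 90 honest coin tosses come up 'white'
p_above_half = sum(comb(90, k) for k in range(46, 91)) / 2**90
print(round(p_above_half, 2))  # ~0.46 for any one country
print(p_above_half**16)        # ~4e-06 that all 16 countries beat 50% by luck

Any one country coming in above 50% is unremarkable; all 16 doing so by luck alone is not.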


 
Consider next a study by David Hugh-Jones on 'Honesty, beliefs about honesty, and economic growth in 15 countries'. In this case the subject pool in each country was a sample of the general population selected by a survey company, and the prize was either $3 or $5 rather than a chocolate. (The study also involved other measures of dishonesty and beliefs about dishonesty but I'll skip those here.) The findings are summarized in the next figure. The main thing to note is that we get a big swing to the right in those who 'won'. In other words, there was a lot more dishonesty in this study. Moreover, the amount of dishonesty varied significantly across countries. Just how much we can read into this variation is not clear. For instance, the US and Canada come out as relatively dishonest, but that may reflect a willingness to 'game' the experiment rather than a predisposition to dishonesty in general life. Even so, Hugh-Jones shows that honesty correlates with GDP per capita and the proportion of the population that is Protestant. This hints at cultural roots of honesty.



Which brings us to the final study I will mention, by Simon Gächter and Jonathan Schulz on 'Intrinsic honesty and the prevalence of rule violations across societies'. In this study students from 23 countries were asked to roll a six-sided die and report the outcome. Reporting a 1 earned 1 unit of payment (e.g. £0.50 in the UK), a 2 earned 2 units, and so on up to 5 which earned 5 units, but reporting a 6 earned 0. Note that in this experiment a subject can lie 'a little' by, say, reporting 4 instead of 2, or lie 'a lot' by reporting 5 instead of 6. If subjects were honest the expected payment would be 2.5 units. If everyone lied a lot the payment would be 5. As the figure below shows, average payments were well above 2.5 and so there is evidence of dishonesty. Note, however, that payments were well below 5 and so there is, again, lots of honesty as well.




Cross-country differences are not particularly stark in the figure above. But another thing to consider is the proportion of subjects who reported a 6. Recall that this meant a payoff of 0, and so there was a strong incentive to lie 'a little' and get some payoff. (Indeed, not reporting a 6 would seem analogous to misreporting the toss of a coin.) If subjects were honest, around 16% should get 0. As the figure below shows, in some countries, like Germany, subjects were very honest, but in others, like Tanzania, they were not. And the evidence for differences across countries is pretty strong. Overall, Gächter and Schulz show that cross-country differences correlate strongly with an index of the prevalence of rule violations, capturing things like corruption, tax evasion and fraudulent politics. This again points the finger at culture, but also brings in the related issue of institutions. Countries with weak institutions see more dishonesty.


So, what to take from all this? One message I would take is that people are, on average, very honest. In all three studies more subjects behaved honestly than dishonestly. And let's recall that it was pretty easy for a subject to lie in these studies, both in a practical sense - they just needed to misreport - and in a moral sense - this was not robbing money from an old lady. It seems, therefore, that people the world over are pretty honest. But that does not mean dishonesty is not a problem. In my last post we saw that culture in the banking industry might encourage dishonesty. Here we see that culture in society might lead to greater dishonesty. A little bit of dishonesty can have large negative economic consequences.

Sunday, 27 August 2017

Culture and dishonesty in banking

The film 'A Good Year' starts with a ruthless financial trader called Max, played by Russell Crowe, manipulating bond markets in order to out-maneuver his competitors and make a quick, big profit. But, by the end of the film Max has decided to pack it all in and live out a more fulfilling life in rural France. Could that happen? Can someone really transition from a ruthless, selfish trader to a compassionate, loving family man in the space of a few days?
A study by Alain Cohn, Ernst Fehr and Michel Marechal, published in 2014 in Nature, suggests it might be possible. They used a standard coin-tossing task to measure the dishonesty of 128 employees from a large, international bank. The task works as follows: a subject is asked to toss a coin 10 times and record whether each outcome was heads or tails. Depending on the outcome the subject can win $20 per toss. The crucial thing to know is that the subject records whether or not they won for each toss, and there is no way for the experimenter to verify whether the outcome is recorded correctly. So, the subject fills in the record sheet privately. This means a subject could 'easily' lie and walk away with $200.


        The crucial twist in the experiment was to vary the priming subjects faced before performing the coin-tossing task. Roughly half of the subjects were asked questions related to their work in the bank - Why did you decide to become a bank employee? What are the three major advantages of your occupation as a bank employee? Which three characteristics of your personality do you think are typical for a bank employee? etc. The other half of the subjects were asked questions not related to their work - What is your favorite leisure activity? Where did you spend your last vacation? Which three things did you like most about your last vacation? etc. 
So, to the results. The figure below shows what happens for subjects not primed to think about work in the bank. The blue bars show the observed distribution of earnings and the green bars show the distribution of earnings expected by pure chance. We can see some hints of dishonesty - there are fewer subjects than we would expect earning $40 or less and more earning $200. But these are small things. The overall picture is that the bankers were honest.


Things change when subjects were primed to think about work in the bank. The distributions are shown below. Here we see a sizable increase in the amount of money being claimed. Needless to say, this is highly unlikely to be due to chance. It can be estimated that around 26% of subjects were dishonest. Let us keep in perspective that this means 74% were honest. Even so, the headline result is that the bankers only exhibited dishonesty when primed to think about banking.


        This finding feeds into a general debate about whether dishonesty is a personal trait or a product of culture. The results we have looked at here suggest that dishonesty has a large cultural component. That would make it more likely a banker can be ruthless in his job and then help old ladies across the road in his spare time. It is hard to imagine, however, that culture is the only factor at work here because we do know that there are reliable personal differences in dishonesty and willingness to cooperate. It is surely not by chance that some become investment bankers and others pediatricians? An interesting and closely related debate is whether studying economics makes people more selfish (culture at play) or whether more selfish people choose to study economics (personal traits at play). An article by Adam Grant provides a nice overview of the issues.  

Friday, 28 July 2017

Risk aversion or loss aversion

Suppose you offer someone called Albert a gamble - if the toss of a coin comes up heads then you pay him £100 and if it comes up tails he pays you £100. The evidence suggests that most people will not take on that gamble. If Albert also turns down the gamble, what does that tell you about Albert's preferences?
One thing we can conclude is that Albert is risk averse. In particular, the gamble was fair because Albert's expected payoff was zero and, by definition, someone who turns down a fair gamble is exhibiting risk aversion. It is hard to argue with a definition, and so we can conclude that Albert is risk averse. The more interesting question is why he displays risk aversion.
A microeconomic textbook would tell us that it is because of diminishing marginal utility of money. A diagram helps explain the logic. Suppose that Albert has the utility function for money depicted below. In this specific case I have set the utility of £m equal to the square root of m. Notice that the utility function is concave in the sense that it gets flatter for larger amounts of money. This is diminishing marginal utility of money - the more money Albert has, the less he values an extra pound.
Suppose that Albert has £500. If he does not take the gamble then his utility is 22.36. If he takes the gamble then he can end up with either £400 or £600. The former gives him utility 20 and the latter 24.49. The expected utility is midway between these two, i.e. 22.25. Crucially, the expected utility of the gamble, 22.25, is less than the utility of not gambling, 22.36, and so Albert does not gamble. As the bottom figure shows, we get this result because the utility function is concave. That means the utility of not gambling - on the blue line - lies above the expected utility of gambling - on the red line.
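
The calculation is easy to reproduce. A minimal sketch in Python, mirroring the numbers above:

from math import sqrt

def utility(m):
    return sqrt(m)  # concave, so marginal utility of money is diminishing

wealth = 500
u_certain = utility(wealth)                                            # 22.36
eu_gamble = 0.5 * utility(wealth - 100) + 0.5 * utility(wealth + 100)  # 22.25

print(u_certain > eu_gamble)  # True: Albert turns down the fair gamble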


  

There is, though, a problem with this standard story, formally demonstrated by Matthew Rabin in a 2000 Econometrica paper, 'Risk aversion and expected-utility theory: A calibration theorem'. If Albert is risk averse over a relatively small sum of money like £100 at an initial wealth of £500 then he would have to be unbelievably risk averse over large gambles. Basically, he would never leave his front door. If diminishing marginal utility of money is not the explanation for Albert's risk aversion then what is?
The most likely culprit is loss aversion. Now we evaluate outcomes relative to a reference point rather than through a utility function over wealth. It seems natural to think that Albert's reference point is £500. That would mean winning the gamble is a gain of £100 and losing the gamble is a loss of £100. Crucially, the evidence suggests that people typically feel a loss more keenly than an equivalent gain. This is shown in the next figure.
In this case everything is judged relative to the status quo of £500. Having more than £500 is a gain and less than £500 is a loss. The steeper value function below £500 captures loss aversion. The crucial thing to observe is that loss aversion effortlessly gives concavity of the value function around the status quo. So, Albert would prefer not to gamble and have value 0 than to gamble and have either +5 or -10, an expected value of -2.5. Loss aversion has no problem explaining risk aversion over small gambles.
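
To make this concrete, here is one functional form that reproduces the numbers in the figure (the scaling, and the loss aversion coefficient of 2, are my choices purely for illustration):

from math import sqrt

def value(x, lam=2.0):
    # gains valued with a concave function; losses weighted lam times as hard
    return 0.5 * sqrt(x) if x >= 0 else -lam * 0.5 * sqrt(-x)

gain, loss = value(100), value(-100)  # +5.0 and -10.0 relative to the £500 status quo
print(0.5 * gain + 0.5 * loss)        # -2.5 < 0, so Albert declines the gamble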




So, what can we conclude from all this? The first thing to tie down is the definition of risk aversion. Standard textbooks will tell you that risk aversion is turning down a fair gamble. That to me seems like a fine definition. So far, so good. Confusion (including in academic circles) can then come from interpreting what that tells us. Generations of economists have been educated to think that risk aversion means diminishing marginal utility of money. It need not. We have seen that loss aversion can also cause risk aversion. And there are other things, like the weighting of probabilities, that can also cause risk aversion. It is important, therefore, to consider different possible causes of risk aversion.
And it is also important to recognize that there is unlikely to be some unified explanation for all risk aversion. We know that people have diminishing marginal utility of money over big sums of money. We know people are loss averse over surprisingly small amounts of money. We also know that people are bad at dealing with probabilities. All of these factors should be put in the mix. So, Albert might buy house insurance because of diminishing marginal utility of money, not try the new cafe for lunch because he is loss averse, and buy a lottery ticket because he overweights small probabilities.

Tuesday, 9 May 2017

Will a vote for Theresa May strengthen her bargaining hand?

As the run-up to the UK's snap general election continues, the Conservative party appear content to talk about one thing and one thing only - strong and stable leadership for Brexit negotiations. Throughout the campaign Theresa May has been particularly keen to claim that 'every vote for me strengthens my hand in the Brexit negotiations'. This claim seems to be going down well with voters. But does it make any sense?
          In bargaining theory the disagreement point is of critical importance. In the Brexit negotiations we can think of the disagreement point as the outcome if no deal is done between the UK and the EU and so the UK simply leaves the EU in March 2019 and starts from scratch. Most experts seem to agree that no deal would be bad - very bad for the UK and bad for the EU. That means that a deal is essential. It also means that the UK starts from a bad negotiating position. 
        To put some analysis to this consider the figure below. This plots the payoff of the EU and payoff of the UK depending on what deal is done. The blue line captures all the possible outcomes from a deal - some deals better for the UK and some for the EU. The bottom red dot captures the outcome if no deal is done. Note that if no deal is done then payoffs are well below the blue line - an agreement is good. Also, if no deal is done then the UK loses more than the EU - this puts the UK in a bad negotiating position. 

  
In a world of calm deliberation the EU and UK could easily come to an agreement that is better than no deal. But, unfortunately, some Brexiters seem to have got over-excited by the referendum win and started to believe their own rhetoric. In particular, they are claiming that no deal is not that bad. This is encapsulated by Theresa May's claim that 'no deal is better than a bad deal'. This statement is either a tautology or a claim that no deal may be relatively good. Brexiters are also fond of claiming that no deal would be worse for the EU than the UK. So, returning to the figure, let the red eurosceptic dot represent the 'optimistic' stance of Brexiters.
Before she called an election, Theresa May had a small majority in parliament. That meant it was going to be difficult to get anything through parliament that the eurosceptics did not like. And there is not much room for maneuver if you want to do a deal better than the eurosceptics' initial belief. Note, however, that this strengthened Theresa May's hand quite a lot. In particular, European politicians seemed keenly aware that it was going to be difficult to get any deal through the UK parliament. This meant that they might have reluctantly had to make some concessions.
What if Mrs May has a huge majority in parliament? Well, then anything will get through parliament and so we revert back to the actual disagreement point. The bigger the majority, therefore, the weaker the UK's position. Ultimately, things are not all bad, because a Conservative majority makes it more likely that some deal can be done. Indeed, Theresa May presumably called an election because it was becoming increasingly clear that Conservative backbenchers were going to make life very tough. That made no deal more likely.
The trouble is, the rhetoric of the Brexiters seems to know no bound. This rhetoric is not convincing anyone in Europe but is being lapped up by much of the British press and public. If we go into these negotiations with a public who think the initial position is the top eurosceptic red dot then it may be difficult for any prime minister, no matter how big the majority, to sell a deal. In other words, Britain seems to be walking into a cul-de-sac of disaster. The only crumb of comfort is that the UK economy is starting to show the strains of Brexit. That may concentrate minds.

Tuesday, 28 March 2017

Brexit and the Condorcet Paradox

Tomorrow the government will trigger Article 50 and start the formal process of getting the UK out of the EU. So, how did we get in this mess in the first place? I think the Condorcet Paradox provides an interesting angle on the problem. In particular, I want to look at preferences for Remain versus Soft Brexit, i.e. leave the EU but still remain in the single market or other collaborations centered on the EU, and Hard Brexit, i.e. walk completely away from the EU. 
The one thing we know for sure is that in the referendum last June around 52% of people voted Leave and 48% voted Remain. What does that tell us? In my recollection, the referendum campaign primarily focused on the question of Soft Brexit versus Remain. No doubt some would disagree with that. But things like the customs union only started being talked about after the vote. Instead, we heard a lot during the campaign about the Norway or Swiss models of Soft Brexit. True, the Leave camp made promises like 'take back control of our borders' that inevitably mean Hard Brexit. But the Leave camp was far less pro-active in actually joining the dots and saying what Hard Brexit would mean. The referendum vote tells us, therefore, that the British people prefer Soft Brexit to Remain.
Once Theresa May took power the discussion very quickly turned to focus on Soft Brexit versus Hard Brexit. Now the Brexiters were keen to join the dots and argue that 'taking back control' inevitably meant Hard Brexit. Soft Brexit, they argued, is essentially Remain in different clothes - if we are going to leave then it has to be Hard Brexit. We have no idea how the country would vote on this issue but I think there is a fair chance the country would prefer Hard Brexit to Soft Brexit.
        If the country prefers Hard Brexit to Soft Brexit and prefers Soft Brexit to Remain then you might expect they would prefer Hard Brexit to Remain. But, I would be surprised if that was the case. If the original referendum campaign had been a tussle between Hard Brexit and Remain then Remain may well have won. The vote was close enough as it was and opinion polls have consistently shown that people want to remain part of the single market. Overall, therefore, we end up with a Condorcet Paradox:

Hard Brexit beats Soft Brexit
Soft Brexit beats Remain
Remain beats Hard Brexit.
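
To see how such a cycle can arise, consider a purely hypothetical electorate (the group shares below are mine, chosen only so that the pairwise votes echo the 52-48 referendum margin):

from itertools import combinations

# each group ranks the options from most to least preferred
groups = [
    (32, ['Hard', 'Soft', 'Remain']),
    (20, ['Remain', 'Hard', 'Soft']),
    (20, ['Soft', 'Remain', 'Hard']),
    (28, ['Remain', 'Soft', 'Hard']),
]

for a, b in combinations(['Hard', 'Soft', 'Remain'], 2):
    share = sum(s for s, rank in groups if rank.index(a) < rank.index(b))
    print(f"{a} vs {b}: {share}-{100 - share}")

# Hard vs Soft: 52-48, Hard vs Remain: 32-68, Soft vs Remain: 52-48

Every group has perfectly coherent preferences, yet the majority preference cycles.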

If there is a Condorcet Paradox then it is impossible to say which option is the most preferred. Note, however, that we are set to end up with an outcome, namely Hard Brexit, that would lose a head-to-head vote against the option we started with, namely Remain. That does not look like a good deal! Plaudits should, however, go to the Brexiteers for their strategic opportunism. In particular, we know that whenever there is a Condorcet Paradox the outcome depends on the voting procedure. The only way those favoring Hard Brexit were going to get what they wanted was to play off Soft Brexit versus Remain and then Hard Brexit versus Soft Brexit. By design or good fortune, that is exactly what has happened.
          And are we going to get a vote on the final deal, pitting Remain versus Hard Brexit? Of course not. And how long before the UK votes to rejoin the EU because Join is better than Hard Brexit? Probably, a long, long while. 

Monday, 20 March 2017

How to get rid of an incompetent manager?

In a paper recently published in the International Journal of Game Theory, my wife and I analyze a game called a forced contribution threshold public good game. A nice way to illustrate the game is to look at the difficulties of getting rid of an incompetent manager.
So, consider a department with n workers who all want to get rid of the manager. If they don't get rid of him then their payoff will be L. If they do get rid of him then their payoff will be H > L. But how to get rid of him? He will only be removed if at least t of the workers complain to senior management. For instance, if a majority of staff need to complain then t = n/2.
If t or more complain then the manager is removed and everyone is happy. The crucial thing, though, is what happens if fewer than t complain. In this case the manager will remain and any workers who did complain will face recrimination. To be specific, suppose that the cost of recrimination is C. Then the potential payoffs to a worker called Jack are as follows:

If t or more complain then Jack gets payoff H.
If Jack complains but not enough others do then he gets payoff L - C.
If Jack does not complain and others don't either then he gets payoff L.

Note that this game is called a 'forced' contribution game because, if the manager is removed, Jack's payoff does not depend on whether or not he complained. This contrasts with a standard threshold public good game in which those who do not contribute (i.e. complain) always have a relative advantage. Hence, there is a sense in which every worker is 'forced to contribute' if the manager is removed.
The fear of recrimination is key to the game and is the potential source of inefficiency. In particular, if Jack fears that others will not complain then it is not in his interest to complain either. Hence we can obtain an inefficient equilibrium in which nobody complains and the manager carries on as before. This is not good for the workers and presumably not good for the firm either. So, how can this outcome be avoided?
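
To see the role of beliefs, here is a small Python sketch (my own, with purely illustrative numbers) of Jack's expected gain from complaining when he believes each colleague complains with probability q:

from math import comb

def p_at_least(n, q, k):
    # P(X >= k) when X ~ Binomial(n, q)
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k, n + 1))

def gain_from_complaining(n, t, H, L, C, q):
    # Jack's expected payoff from complaining minus staying quiet, when each
    # of the other n - 1 workers complains independently with probability q
    p_with = p_at_least(n - 1, q, t - 1)  # threshold met if Jack joins in
    p_without = p_at_least(n - 1, q, t)   # threshold met even without Jack
    ep_complain = p_with * H + (1 - p_with) * (L - C)
    ep_quiet = p_without * H + (1 - p_without) * L
    return ep_complain - ep_quiet

print(gain_from_complaining(n=10, t=5, H=12, L=10, C=2, q=0.0))  # -2.0: complaining only brings recrimination
print(gain_from_complaining(n=10, t=5, H=12, L=10, C=2, q=0.8))  # > 0: complaining pays

If Jack expects nobody else to complain then staying quiet is his best response; if he expects most colleagues to complain then complaining pays. Both the inefficient and the efficient outcome are self-reinforcing.
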
In our paper we compare the predictions of three theoretical models and then report an experiment designed to test the respective predictions. Our results suggest that the workers will struggle to get rid of the manager if the gain from removing him is small relative to the threshold for complaints. This means that the threshold t should not be set too high. For instance, if a simple majority is needed to get rid of the manager, and so t = n/2, then we need H to be about 25% higher than L. If less than a majority is enough then H does not need to be as high. This result suggests that it is relatively simple to design a corporate policy that would incentivise people like Jack to complain about their manager.
           Of course, in practice there are almost certainly going to be some who will defend the manager and so things become more complex. Moreover, there are likely to be significant inertia effects. In particular, the 'better the devil you know' attitude may lead workers to underestimate the difference between H and L. Also senior managers may set t relatively high because of a desire to back managers. These are all things that will make it less likely Jack complains and more likely the incompetent manager continues. Firms, therefore, need to strike the right balance to weed out inefficiency.          

Saturday, 25 February 2017

Measuring risk aversion the Holt and Laury way

Attitudes to risk are a key ingredient in most economic decision making. It is vital, therefore, that we have some understanding of the distribution of risk preferences in the population. And ideally we need a simple way of eliciting risk preferences that can be used in the lab or field. Charles Holt and Susan Laury set out one way of doing this in their 2002 paper 'Risk aversion and incentive effects'. While plenty of other ways of measuring risk aversion have been devised over the years, I think it is safe to say that the Holt and Laury approach is the most commonly used (as the nearly 4,000 citations to their paper testify).
The basic approach taken by Holt and Laury is to offer an individual 10 choices like those in the table below. For each of the 10 choices the individual has to go for option A or option B. Most people go for option A in choice 1. And everyone should go for option B in choice 10. At some point, therefore, we expect the individual to switch from choosing option A to option B. The point at which they switch can be used as a measure of risk aversion. Someone who switches early is relatively risk loving while someone who switches late is relatively risk averse.

Choice   Option A                        Option B
1        1/10 of $2.00, 9/10 of $1.60    1/10 of $3.85, 9/10 of $0.10
2        2/10 of $2.00, 8/10 of $1.60    2/10 of $3.85, 8/10 of $0.10
3        3/10 of $2.00, 7/10 of $1.60    3/10 of $3.85, 7/10 of $0.10
4        4/10 of $2.00, 6/10 of $1.60    4/10 of $3.85, 6/10 of $0.10
5        5/10 of $2.00, 5/10 of $1.60    5/10 of $3.85, 5/10 of $0.10
6        6/10 of $2.00, 4/10 of $1.60    6/10 of $3.85, 4/10 of $0.10
7        7/10 of $2.00, 3/10 of $1.60    7/10 of $3.85, 3/10 of $0.10
8        8/10 of $2.00, 2/10 of $1.60    8/10 of $3.85, 2/10 of $0.10
9        9/10 of $2.00, 1/10 of $1.60    9/10 of $3.85, 1/10 of $0.10
10       10/10 of $2.00                  10/10 of $3.85

To properly measure risk aversion we do, though, need to fit choices to a utility function. This is where things get a little tricky. In the simplest case we would be able to express preferences using a constant relative risk aversion (CRRA) utility function

U(x) = x^(1-r) / (1-r)

where x is money and r is the coefficient of relative risk aversion. I will come back to why this is the simplest case shortly. First, let's have a quick look at how it works. Suppose that someone chooses option A for choice 4. Then we can infer that

0.4 x U($2.00) + 0.6 x U($1.60) > 0.4 x U($3.85) + 0.6 x U($0.10).

It is then a case of finding for what values of r this inequality holds. And it does for r greater than or equal to around -0.15. Suppose the person chooses option B for choice 5. Then we know that

0.5 x U($2.00) + 0.5 x U($1.60) < 0.5 x U($3.85) + 0.5 x U($0.10).

This time we get r less than or equal to around 0.15. So, a person who switches between choices 4 and 5 has a coefficient of relative risk aversion between -0.15 and 0.15.
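
These cut-offs are easy to verify numerically. Here is a sketch in Python (my own code, using the payoffs from the table above):

def crra(x, r):
    # constant relative risk aversion utility (fine here since r < 1)
    return x**(1 - r) / (1 - r)

def prefers_A(choice, r):
    p = choice / 10  # probability of the high payoff in that row
    eu_A = p * crra(2.00, r) + (1 - p) * crra(1.60, r)
    eu_B = p * crra(3.85, r) + (1 - p) * crra(0.10, r)
    return eu_A > eu_B

rs = [i / 1000 for i in range(-400, 401)]
print(min(r for r in rs if prefers_A(4, r)))      # ~ -0.14: lowest r consistent with A at choice 4
print(max(r for r in rs if not prefers_A(5, r)))  # ~ 0.15: highest r consistent with B at choice 5
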
Let us return now to the claim that a CRRA function keeps things simple. The dollar amounts in the choices above are small. What happens if we make them bigger? Holt and Laury tried multiplying them by 20, 50 and 90. Note that by the time we get to multiplying by 90 the safe option alone pays up to $180, which is quite a lot of money for an experiment. If the CRRA utility function accurately describes preferences then people should behave exactly the same no matter how big the stakes. This would be ideal. Holt and Laury found, however, that people were far more likely to choose option A when the stakes were larger. Which means the CRRA function did not describe subjects' choices well.
           So, what does the rejection of CRRA mean? It tells us that just asking someone to make the 10 choices above is not enough to discern their preferences for risk. We learn what they would do for those magnitudes of money but cannot extrapolate from that to larger amounts. We cannot, for instance, say that someone is risk averse or risk loving because that person might appear risk loving for gambles over small amounts of money and risk averse for larger amounts. To fully estimate risk preferences we need to elicit choices over gambles with varying magnitudes of money.
       Despite all this, it is pretty standard to run the Holt and Laury approach at the end of experiments. The basic goal of doing so is to see if behavior in the experiment, say on public goods, correlates with attitudes to risk. Note that the simplicity of the Holt and Laury approach is a real draw here because you don't want to add something overly complicated to the end of an experiment. Care, though, is needed in interpreting results. As we have seen the Holt and Laury approach is not enough to parameterize preferences. All we can basically infer, therefore, is that one subject is relatively more or less risk averse or loving than another. This, though, is informative as a rough measure of how attitudes to risk influence behavior. Key, therefore, is to focus on relative rather than absolute comparisons.

Tuesday, 21 February 2017

Does a picture make people more cooperative?

In a standard economic experiment the anonymity of subjects is paramount. This is presumably because of a fear that subjects might behave differently if they knew others were 'watching them' in some sense. In the real world, however, our actions obviously can be observed much of the time. So, it would seem important to occasionally step out of the purified environment of the standard lab experiment and see what happens when we throw anonymity in the bin.
        A couple of experiments have looked at behavior in public good games without anonymity. Let me start with the 2004 study of Mari Rege and Kjetil Telle entitled 'The impact of social approval and framing on cooperation in public good games'. As is standard, subjects had to split money between a private account and group account, where contributing to the group account is good for the group. The novelty is in how this was done.
      Each subject was given some money and two envelopes, a 'group envelope' and 'private envelope', and asked to split the money between the envelopes. In a no-approval treatment the envelopes were then put in a box and mixed up before they were opened up and the contributions read out aloud. Note that in this case full anonymity is preserved because the envelopes are mixed up. In an approval treatment, by contrast, subjects were asked to publicly open their envelopes and write the contribution on the blackboard. Here there is zero anonymity because the contribution of each subject is very public.
        Average contributions to the group account were 44.8% (of the total amount) in the no-approval treatment and 72.8% in the approval treatment. So, subjects contributed a lot more when anonymity was removed.
        Similar results were obtained by James Andreoni and Ragan Petrie in a study entitled 'Public goods experiments without confidentiality'. Here, the novelty was to have photos of subjects together with their contributions to the group account, as in the picture below. In this case contributions increased from 26.9% in the absence of photos to 48.1% with photos. Again subjects contributed a lot more when anonymity was removed.

 
So, why does anonymity matter? A study by Anya Samek and Roman Sheremeta, entitled 'Recognizing contributors', sheds some light on this. As well as treatments with no photos and everyone's photos, they had treatments in which only the lowest and only the highest contributors had their photos displayed, as in the middle picture below.


          Again, photos made a big difference, increasing average contributions from 23.4% to 44.2%. Interestingly, displaying the photos of top contributors made little difference (up to 27.8%) while displaying the photos of the lowest contributors made a big difference (up to 44.9%). This would suggest that contributions increase without anonymity because subjects dislike being the lowest contributors. So, we are talking shame rather than pride.
What do we learn from all this? Obviously we can learn interesting things by dropping anonymity. In particular, we have learnt that contributions to group projects may be higher when individual contributions can be identified. Indeed, in a follow-up paper, entitled 'When identifying contributors is costly', Samek and Sheremeta show that the mere possibility of looking up photos increases contributions. That, though, raises some tough questions. If behavior is radically different without anonymity then is it good enough to keep on churning out results based on lab experiments with complete anonymity? I don't think it is. The three studies mentioned above have shown how anonymity can be dropped without compromising scientific rigor. More of that might be good.

Saturday, 4 February 2017

Kindness or confusion in public good games

The linear public good game is, as I have mentioned before on this blog, the workhorse of experiments on cooperation. In the basic version of the game there is a group of, say, 4 people. Each person is given an endowment of, say, $10 and asked how much they want to contribute to a public good. Any money a person does not contribute is theirs to keep. Any money that is contributed is multiplied by some factor, say 2, and shared equally amongst group members.
         Note that for every $1 a person does not contribute they get a return of $1. But, for every $1 they do contribute they get a return of $0.50 (because the $1 is converted to $2 and then shared equally amongst the 4 group members). It follows that a person maximizes their individual payoff by contributing 0 to the public good. Contributing to the public good does, however, increase total payoffs in the group because each $1 contributed is converted to $2. For example, if everyone keeps their $10 then they each get $10. But, if everyone contributes $10 then they each get $20.
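
A few lines of Python make the payoff structure explicit (a sketch using the parameters above; the function is mine):

def payoff(own, total, endowment=10, multiplier=2, group_size=4):
    # keep what you don't contribute, plus an equal share of the multiplied
    # group contributions (your own contribution included in 'total')
    return (endowment - own) + multiplier * total / group_size

print(payoff(0, 0))    # 10.0: nobody contributes
print(payoff(10, 40))  # 20.0: everyone contributes fully
print(payoff(0, 30))   # 25.0: free ride while the other three contribute $10 each
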
         The typical outcome in a public good experiment is as follows: Average contributions in the first round of play are around 50% of endowments. Then, with repetition, contributions slowly fall to around 20-30% of endowments. So, what does that tell us?
          The standard interpretation is to appeal to kindness. Lots of people are seemingly willing to contribute to the public good. This is evidence of cooperative behaviour. The fall in contributions can then be explained by reciprocity. Basically, those who contribute get frustrated at those who do not contribute and so lower their own contribution over time. There is no doubt that this story fits observed data. But, there is another, arguably much simpler, explanation.
This alternative interpretation appeals to confusion. Imagine that people did not understand the experiment instructions. Then we could expect random choice in the first round, which would give contributions around 50%. And then we would expect contributions to fall over time as subjects better understand the game. This also fits the pattern of observed behaviour quite well. So, how do we rule out confusion?
One approach is to come up with a game which is similar to a linear public good game in terms of instructions but where there is no role for kindness. James Andreoni, in his study 'Cooperation in public-goods experiments: kindness or confusion?', looked at one possibility. He considered a game where subjects are ranked based on how much they contribute to the public good and then paid based on their rank, with those who contribute least getting the highest payoff. The crucial thing about this game is that it is constant-sum, meaning that group payoffs are the same no matter what and so there is no possibility to be cooperative. Indeed, there is no incentive to do anything other than contribute 0. Contributions in this Rank game can, therefore, be attributed to confusion rather than kindness.
The graph below summarises the results. In the standard game (Regular) we see that average contributions start just over 50% and then fall to 30%. In the Rank game they start at 30% and fall to around 5%. If we interpret contributions in the Rank game as confusion then around half of first-round contributions are confusion, but confusion soon disappears as a factor.


             If the approach taken by Andreoni was beautifully subtle, then that taken by Daniel Houser and Robert Kurzban, in a follow up study Revisiting kindness and confusion in public goods experiments, was remarkably blunt. They considered a game where a person was in a group with three computers. Clearly, if the other group members are computers then there is nothing to be gained by kindness. Again, therefore, any positive contribution can be interpreted as confusion.
The graph below summarizes the results. In this case confusion seems to play a bigger role. Most contributions in the first round seem to be confusion because there is not much difference between playing with humans or computers. Moreover, contributions take longer to fall in the Computer game than they did in the Rank game.


So what should we take from this? At face value the results of both studies give cause for concern. It seems that a significant part of behaviour is driven by confusion rather than kindness, particularly in the first round. There is, therefore, a danger of over-interpreting results. Things, however, may not be as bad as this would suggest. First, the Rank game is a bit complicated and the Computer game a bit weird, and so there is more scope for confusion here than in the standard public good game. For instance, subjects may not really take in that they are playing with computers.
We can also analyse individual data in more detail to test specific theories of reciprocity. If people behave systematically relative to the past history of contributions then we can be more confident that confusion is not the main thing at work. Recent studies are more in this direction. This, though, reiterates the need for experiments to work alongside theory. Without a theory it is much harder to distinguish confusion from systematic behaviour, and confusion may be an important driver of behaviour in the lab.

  

Saturday, 7 January 2017

Schelling, Brexit and Trump: Conflict is rarely a zero-sum game

Few, if any, have contributed as much to game theory as Thomas Schelling. Or, to perhaps be more accurate, surely nobody has more powerfully shown the value of applying game theory to understand the world around us. As we reflect on Schelling's contribution to knowledge, following his death in December, I think it is particularly useful to look back on one of his less touted but fundamental observations - conflict is rarely a zero-sum game.
To put Schelling's insight in perspective it is important to recognise that the early development of game theory was hugely influenced by zero-sum games. These are games in which total payoffs always sum to zero, meaning that one player's gain must be another player's loss. Sporting and parlour games, like chess and bridge, are naturally modelled as zero-sum because they are about winning and losing. Zero-sum games also have some nice theoretical properties which make them particularly amenable to analysis. For this latter reason, more than any other, by the 1950s game theory was increasingly becoming the study of zero-sum games.
Against this background, Schelling published in 1958 an article in the Journal of Conflict Resolution entitled 'The strategy of conflict: prospectus for a reorientation of game theory'. His ground-breaking book, The Strategy of Conflict, followed in 1960. The opening paragraph of his original paper sets the scene:

On the strategy of pure conflict - the zero sum games - game theory has yielded important insight and advice. But on the strategy of action where conflict is mixed with mutual dependence - the non-zero-sum games involved in wars and threats of war, strikes, negotiations, criminal deterrence, class war, race war, price war, and blackmail; manoeuvring in a bureaucracy or in a social hierarchy or in a traffic jam; and the coercion of one's own children - traditional game-theory has not yielded comparable insight or advice.

Game theory, therefore, needed to reorient itself away from zero-sum games. A fundamental part of the argument, clear in the range of examples Schelling gave in the quote above, is that most conflict is not zero-sum.
          One of the main examples Schelling used was the cold war. The U.S. and Soviet Union were clearly in extreme conflict. But that does not mean there was not scope for mutual gain. Comparing two possible scenarios easily illustrates the point. Scenario 1: The U.S. and Soviets throw nuclear weapons at each other causing mass devastation and millions of civilian casualties. Scenario 2: The U.S. and Soviets don't fire any nuclear weapons and there are no civilian casualties. Clearly, scenario 2 is considerably better for both the U.S. and Soviets than scenario 1.
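
A stylized payoff matrix makes the point; the numbers below are mine and purely illustrative:

# each side chooses Fire or Hold; entries are (U.S. payoff, Soviet payoff)
payoffs = {
    ('Hold', 'Hold'): (0, 0),
    ('Hold', 'Fire'): (-10, -6),
    ('Fire', 'Hold'): (-6, -10),
    ('Fire', 'Fire'): (-10, -10),
}

# in a zero-sum game these totals would be the same constant in every cell
print({outcome: sum(p) for outcome, p in payoffs.items()})

Because the totals differ across outcomes, both players share an interest in steering towards (Hold, Hold): conflict and mutual dependence at the same time.
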
         As this example illustrates, conflict does not preclude potential gains from 'coordinating' or 'cooperating' on a mutually beneficial outcome - in this case avoiding nuclear war. Or, to quote Schelling:

These are games in which, though the element of conflict provides the dramatic interest, mutual dependence is part of the logical structure and demands some kind of collaboration or mutual accommodation - tacit, if not explicit - even if only in the avoidance of mutual disaster.

I particularly like the allusion to 'conflict provides the dramatic interest'. With that in mind let us look at Brexit and Trump.
         Brexit is a conflict between the U.K. and E.U. and is clearly not zero-sum. If trade negotiations go badly then both will suffer. If they go well then disaster can be averted. The popular press, and Brexiters, seem, however, to prefer to portray the conflict as zero-sum. Particularly telling are the arguments over whether the U.K. government should give details of its key negotiating demands. The government argues that showing its hand would weaken its bargaining position. Nonsense. This is a logic based on a zero-sum conflict like bridge, not a negotiation where mutual accommodation is essential. To quote Schelling again:

These are also games in which, though secrecy may play a strategic role, there is some essential need for the signalling of intentions and the meeting of minds.

In truth, I think the government plays along with the zero-sum narrative because it provides a convenient shield to hide the fact they don't have a plan. It is, though, interesting to see how easily the public buys the narrative.
          And so to Trump, where just about every conflict is portrayed as zero-sum. Trade with China is a zero-sum game, as is immigration from Mexico, climate change, and so on. Clearly, none of these issues are remotely zero-sum. Again, however, a narrative of pure conflict seems to go down well with a large proportion of the public. Possibly because it is easier to get excited about someone who will 'defend your interests against the enemy' rather than 'negotiate a mutually beneficial compromise'.  
Looking back, there is no doubt that Schelling's call for a reorientation of game theory had an effect. Today, zero-sum games are considered by game theorists to be the theoretical extreme case that Schelling argued they were. It is much more likely that the prisoner's dilemma or ultimatum game gets the attention. Outside of academic circles, however, it would seem that Schelling's critical insight remains poorly understood. Many seemingly prefer to view conflict as zero-sum. Hopefully, we can still avert disaster.