Friday, 28 July 2017

Risk aversion or loss aversion

Suppose you offer someone called Albert a gamble - if the toss of a coin comes up heads then you pay him £100 and if it comes up tails he pays you £100. The evidence suggests that most people will not take on that gamble. If Albert also turns down the gamble, what does that tell you about Albert's preferences?
         One thing we can conclude is that Albert is risk averse. In particular, the gamble was fair because Albert's expected payoff was 0 and, by definition, someone who turns down a fair gamble is exhibiting risk aversion. It is hard to argue with a definition and so we can conclude that Albert is risk averse. The more interesting question is why he displays risk aversion.
           The micro-economic textbook would tell us that it is because of diminishing marginal utility from money. A diagram helps explain the logic. Suppose that Albert has the utility function for money depicted below. In this specific case I have set the utility of £m as the square root of m. Notice that the utility function is concave in the sense that it gets flatter for larger amounts of money. This is diminishing marginal utility of money - the more money Albert has the less he values an extra pound.
          Suppose that Albert has £500. If he does not take the gamble then his utility is 22.36. If he takes the gamble then he can end up with either £400 or £600. The former gives him utility 20 and the latter 24.49. The expected utility is midway between these, i.e. 22.25. Crucially, the expected utility of the gamble, 22.25, is less than the utility of not taking on the gamble, 22.36, and so Albert does not gamble. As the bottom figure shows, we get this result because the utility function is concave. That means the utility of not gambling - on the blue line - lies above the expected utility of gambling - on the red line.
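
A few lines of code reproduce the arithmetic. This is a minimal sketch assuming, as in the figure, square-root utility and an initial wealth of £500.

```python
import math

def utility(m):
    """Square-root utility of money: u(m) = sqrt(m)."""
    return math.sqrt(m)

wealth = 500   # Albert's initial wealth in pounds
stake = 100    # amount won or lost on the coin toss

no_gamble = utility(wealth)                                              # sqrt(500) = 22.36
gamble = 0.5 * utility(wealth - stake) + 0.5 * utility(wealth + stake)   # 0.5*20 + 0.5*24.49 = 22.25

print(f"Utility of not gambling:      {no_gamble:.2f}")
print(f"Expected utility of gambling: {gamble:.2f}")
print("Albert takes the gamble" if gamble > no_gamble else "Albert turns the gamble down")
```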


  

There is though a problem with this standard story, formally demonstrated by Matthew Rabin in a 2000 Econometrica paper, 'Risk aversion and expected utility theory: A calibration theorem'. If Albert is risk averse over a relatively small sum of money like £100 with an initial wealth of £500 then he would have to be unbelievably risk averse over large gambles. Basically, he would never leave his front door. If diminishing marginal utility of money is not the explanation for Albert's risk aversion then what is?
           The most likely culprit is loss aversion. Now we evaluate outcomes relative to a reference point rather than with a utility function over wealth. It seems natural to think that Albert's reference point is £500. That would mean winning the gamble is a gain of £100 and losing the gamble is a loss of £100. Crucially, the evidence suggests that people typically feel a loss more keenly than an equivalent gain. This is shown in the next figure.
               In this case everything is judged relative to the status quo of £500. Having more than £500 is a gain and less than £500 is a loss. The steeper value function below £500 captures loss aversion. The crucial thing to observe is that loss aversion effortlessly gives concavity of the value function around the status quo. So, Albert would prefer not to gamble and have value 0 than to gamble and get either +5 or -10, an expected value of -2.5. Loss aversion has no problem explaining risk aversion over small gambles.
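
To make the numbers concrete, here is a minimal sketch of a loss-averse value function. The particular parameterisation - gains valued at half the square root, losses at the full square root - is just one hypothetical choice that reproduces the +5, -10 and -2.5 figures above; it is not necessarily the function behind the figure.

```python
import math

def value(change):
    """Value of a change in wealth relative to the £500 reference point.
    Losses loom larger than gains - this is loss aversion."""
    if change >= 0:
        return 0.5 * math.sqrt(change)   # a £100 gain is worth +5
    return -math.sqrt(-change)           # a £100 loss is worth -10

status_quo = value(0)                            # 0
gamble = 0.5 * value(100) + 0.5 * value(-100)    # 0.5*5 + 0.5*(-10) = -2.5

print(f"Value of not gambling:        {status_quo:.1f}")
print(f"Expected value of the gamble: {gamble:.1f}")   # negative, so Albert declines
```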




So, what can we conclude from all this? The first thing to tie down is the definition of risk aversion. Standard textbooks will tell you that risk aversion is turning down a fair gamble. That to me seems like a fine definition. So far, so good. Confusion (including in academic circles) can then come from interpreting what that tells us. Generations of economists have been educated to think that risk aversion means diminishing marginal utility of money. It need not. We have seen that loss aversion can also cause risk aversion. And there are other things, like the weighting of probabilities, that can also cause risk aversion. It is important, therefore, to consider different possible causes of risk aversion.
           And it is also important to recognize that there is unlikely to be some unified explanation for all risk aversion. We know that people do have diminishing marginal utility of money over big sums of money. We know people are loss averse over surprisingly small amounts of money. We also know that people are bad at dealing with probabilities. All of these factors should be put in the mix. So, Albert might buy house insurance because of diminishing marginal utility of money, not try the new cafe for lunch because he is loss averse, and buy a lottery ticket because he overweights small probabilities.

Tuesday, 9 May 2017

Will a vote for Theresa May strengthen her bargaining hand?

As the run-up to the UK's snap general election continues, the Conservative party appear content to talk about one thing and one thing only - strong and stable leadership for Brexit negotiations. Throughout the campaign Theresa May has been particularly keen to claim that 'every vote for me strengthens my hand in the Brexit negotiations'. This claim seems to be going down well with voters. But does it make any sense?
          In bargaining theory the disagreement point is of critical importance. In the Brexit negotiations we can think of the disagreement point as the outcome if no deal is done between the UK and the EU and so the UK simply leaves the EU in March 2019 and starts from scratch. Most experts seem to agree that no deal would be bad - very bad for the UK and bad for the EU. That means that a deal is essential. It also means that the UK starts from a bad negotiating position. 
        To put some analysis to this consider the figure below. This plots the payoff of the EU and payoff of the UK depending on what deal is done. The blue line captures all the possible outcomes from a deal - some deals better for the UK and some for the EU. The bottom red dot captures the outcome if no deal is done. Note that if no deal is done then payoffs are well below the blue line - an agreement is good. Also, if no deal is done then the UK loses more than the EU - this puts the UK in a bad negotiating position. 

  
           In a world of calm deliberation the EU and UK could easily come to an agreement that is better than no deal. But, unfortunately, some Brexiters seem to have got over-excited by the referendum win and started to believe their own rhetoric. In particular, they are claiming that no deal is not that bad. This is encapsulated by Theresa May's claim that 'no deal is better than a bad deal'. This statement is either a tautology or a claim that no deal may be relatively good. Brexiters are also fond of claiming that no deal would be worse for the EU than the UK. So, returning to the figure, let the red Eurosceptic dot represent the 'optimistic' stance of Brexiters.
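
One way to make the role of the disagreement point concrete is the Nash bargaining solution, which picks the deal that maximises the product of each side's gain over what it would get with no deal. The payoff numbers below are purely hypothetical, chosen only to illustrate how a worse no-deal outcome for the UK, or a more 'optimistic' belief about it, shifts the agreed deal.

```python
def nash_deal(d_uk, d_eu, surplus=10.0):
    """Nash bargaining over a linear frontier where UK and EU payoffs sum to `surplus`.
    Returns the deal maximising (u_uk - d_uk) * (u_eu - d_eu)."""
    u_uk = (surplus + d_uk - d_eu) / 2
    return u_uk, surplus - u_uk

# No deal hurts the UK more than the EU (the lower red dot): the UK gets the worse share.
print(nash_deal(d_uk=-4.0, d_eu=-2.0))   # (4.0, 6.0)

# The eurosceptic belief that no deal is not so bad for the UK: the split flips.
print(nash_deal(d_uk=-1.0, d_eu=-2.0))   # (5.5, 4.5)
```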
            Before she called an election, Theresa May had a small majority in parliament. That means it was going to be difficult to get anything through parliament that the eurosceptics did not like. And there is not much room for manoeuvre if you want to do a deal better than the eurosceptic initial belief. Note, however, that this strengthened Theresa May's hand quite a lot. In particular, European politicians seemed keenly aware that it was going to be difficult to get any deal through the UK parliament. This means that they might have reluctantly had to make some concessions.
          What if Mrs May has a huge majority in parliament? Well, then anything will get through parliament and so we revert to the actual disagreement point. The bigger the majority, therefore, the weaker is the UK's position. Ultimately, though, things are not so bad, because a large Conservative majority makes it more likely that some deal can be done. Indeed, Theresa May presumably called an election because it was becoming increasingly clear that Conservative backbenchers were going to make life very tough. This made no deal more likely.
         The trouble is, the rhetoric of the Brexiters seems to know no bounds. This rhetoric is not convincing anyone in Europe but is being lapped up by much of the British press and public. If we go into these negotiations with a public who think the initial position is the top eurosceptic red dot then it may be difficult for any prime minister, no matter how big the majority, to sell a deal. In other words, Britain seems to be walking into a cul-de-sac of disaster. The only crumb of comfort is that the UK economy seems to be showing the strains of Brexit. That may concentrate minds.

Tuesday, 28 March 2017

Brexit and the Condorcet Paradox

Tomorrow the government will trigger Article 50 and start the formal process of getting the UK out of the EU. So, how did we get in this mess in the first place? I think the Condorcet Paradox provides an interesting angle on the problem. In particular, I want to look at preferences for Remain versus Soft Brexit, i.e. leave the EU but still remain in the single market or other collaborations centered on the EU, and Hard Brexit, i.e. walk completely away from the EU. 
          The one thing we know for sure is that in the referendum last June around 52% of people voted Leave and 48% voted Remain. What does that tell us? In my recollection the referendum campaign focused primarily on the question of Soft Brexit versus Remain. No doubt some would disagree with that. But things like the customs union only started being talked about after the vote. Instead we heard a lot during the campaign about the Norway or Swiss model of Soft Brexit. True, the Leave camp made promises like 'take back control of our borders' that inevitably mean Hard Brexit. But the Leave camp was far less pro-active in actually joining the dots and saying what Hard Brexit would mean. The referendum vote tells us, therefore, that the British people prefer Soft Brexit to Remain.
         Once Theresa May took power the discussion very quickly turned to focus on Soft Brexit versus Hard Brexit. Now, the Brexiters were keen to join the dots and argue that 'taking back control' inevitably meant Hard Brexit. Soft Brexit, they argue, is essentially Remain in different clothes - if we are going to leave then it has to be Hard Brexit. We have no idea how the country would vote on this issue but I think there is a fair chance the country would prefer Hard Brexit to Soft Brexit.
        If the country prefers Hard Brexit to Soft Brexit and prefers Soft Brexit to Remain then you might expect they would prefer Hard Brexit to Remain. But, I would be surprised if that was the case. If the original referendum campaign had been a tussle between Hard Brexit and Remain then Remain may well have won. The vote was close enough as it was and opinion polls have consistently shown that people want to remain part of the single market. Overall, therefore, we end up with a Condorcet Paradox:

Hard Brexit beats Soft Brexit
Soft Brexit beats Remain
Remain beats Hard Brexit.
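
To see how pairwise majority voting can generate exactly this cycle, consider the following sketch. The three preference orderings and their vote shares are entirely hypothetical, chosen only to produce the cycle; they are not estimates of actual opinion.

```python
from itertools import combinations

# Hypothetical shares of voters holding each preference ordering (best to worst).
groups = {
    ("Hard", "Soft", "Remain"): 0.35,
    ("Soft", "Remain", "Hard"): 0.25,
    ("Remain", "Hard", "Soft"): 0.40,
}

def head_to_head(a, b):
    """Return the winner of a pairwise majority vote between options a and b."""
    a_share = sum(share for ranking, share in groups.items()
                  if ranking.index(a) < ranking.index(b))
    return a if a_share > 0.5 else b

for a, b in combinations(["Hard", "Soft", "Remain"], 2):
    print(f"{a} v {b}: {head_to_head(a, b)} wins")
# Output: Hard beats Soft, Remain beats Hard, Soft beats Remain - a Condorcet cycle.
```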

          If there is a Condorcet Paradox then it is impossible to say which option is the most preferred. Note, however, that we are set to end up with an outcome, namely Hard Brexit, that is worse than the one we started with, namely Remain. That does not look like a good deal! Plaudits should, however, go to the Brexiters for their strategic opportunism. In particular, we know that whenever there is a Condorcet Paradox the outcome depends on the voting procedure. The only way those favoring Hard Brexit were going to get what they wanted was to first play off Soft Brexit against Remain and then Hard Brexit against Soft Brexit. By design or good fortune that is exactly what has happened.
          And are we going to get a vote on the final deal, pitting Remain versus Hard Brexit? Of course not. And how long before the UK votes to rejoin the EU because Join is better than Hard Brexit? Probably, a long, long while. 

Monday, 20 March 2017

How to get rid of an incompetent manager?

In a paper, recently published in the International Journal of Game Theory, my wife and I analyze a game called a forced contribution threshold public good game. A nice way to illustrate the game is to look at the difficulties of getting rid of an incompetent manager.
         So, consider a department with n workers who all want to get rid of the manager. If they don't get rid of him then their payoff will be L. If they do get rid of him then their payoff will be H > L. But, how to get rid of him? He will only be removed if at least t of the workers complain to senior management. For instance, if a majority of staff need to complain then t = n/2.
        If t or more complain then the manager is removed and everyone is happy. The crucial thing, though, is what happens if fewer than t complain. In this case the manager will remain and any workers who did complain will face recrimination. To be specific, suppose that the cost of recrimination is C. Then the potential payoffs to a worker called Jack are as follows:

If t or more complain then Jack gets payoff H.
If Jack complains but not enough others do then he gets payoff L - C.
If Jack does not complain and others don't either then he gets payoff L.

Note that this game is called a 'forced' contribution game because, if the manager is removed, Jack's payoff does not depend on whether or not he complained. This contrasts with a standard threshold public good game in which those who do not contribute (i.e. complain) always have a relative advantage. Hence, there is a sense in which every worker is 'forced to contribute' if the manager is removed.
         The fear of recrimination is key to the game and is the potential source of inefficiency. In particular, if Jack fears that others will not complain then it is not in his interest to complain either. Hence we can obtain an inefficient equilibrium in which nobody complains and the manager carries on as before. This is not good for the workers and presumably not good for the firm either. So, how can this outcome be avoided?
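
A minimal sketch of Jack's payoffs makes the two equilibria easy to see. The numbers for H, L, the cost of recrimination C and the threshold t are hypothetical; the game structure follows the payoffs listed above.

```python
def jack_payoff(jack_complains, others_complaining, H=150, L=100, C=30, t=4):
    """Jack's payoff in the forced contribution threshold game (illustrative numbers)."""
    complaints = others_complaining + (1 if jack_complains else 0)
    if complaints >= t:
        return H                              # manager removed: everyone gets H
    return L - C if jack_complains else L     # manager stays: complainers bear cost C

# If t - 1 others complain, Jack's complaint tips the balance, so complaining pays:
print(jack_payoff(True, 3), jack_payoff(False, 3))   # 150 vs 100
# If nobody else complains, Jack is better off staying quiet - the inefficient equilibrium:
print(jack_payoff(True, 0), jack_payoff(False, 0))   # 70 vs 100
```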
      In our paper we compare the predictions of three theoretical models and then report an experiment designed to test the respective predictions. Our results suggest that the workers will struggle to get rid of the manager unless the gain from removing him, H compared to L, is large enough relative to the threshold t. This means that the threshold t should not be set too high. For instance, if a simple majority is needed to get rid of the manager, and so t = n/2, then we need H to be about 25% higher than L. If less than a majority is enough then H does not need to be as high. This result would suggest that it is relatively simple to have a corporate policy that would incentivise people like Jack to complain about an incompetent manager.
           Of course, in practice there are almost certainly going to be some who will defend the manager and so things become more complex. Moreover, there are likely to be significant inertia effects. In particular, the 'better the devil you know' attitude may lead workers to underestimate the difference between H and L. Also senior managers may set t relatively high because of a desire to back managers. These are all things that will make it less likely Jack complains and more likely the incompetent manager continues. Firms, therefore, need to strike the right balance to weed out inefficiency.          

Saturday, 25 February 2017

Measuring risk aversion the Holt and Laury way

Attitudes to risk are a key ingredient in most economic decision making. It is vital, therefore, that we have some understanding of the distribution of risk preferences in the population. And ideally we need a simple way of eliciting risk preferences that can be used in the lab or field. Charles Holt and Susan Laury set out one way of doing this in their 2002 paper 'Risk aversion and incentive effects'. While plenty of other ways of measuring risk aversion have been devised over the years I think it is safe to say that the Holt and Laury approach is the most commonly used (as the nearly 4,000 citations to their paper testify).
         The basic approach taken by Holt and Laury is to offer an individual 10 choices like those in the table below. For each of the 10 choices the individual has to go for option A or option B. Most people go for option A in choice 1. And everyone should go for option B in choice 10. At some point, therefore, we expect the individual to switch from choosing option A to option B. The point at which they switch can be used as a measure of risk aversion. Someone who switches early is risk loving while someone who switches later is risk averse.



To properly measure risk aversion we do, though, need to fit choices to a utility function. This is where things get a little tricky. In the simplest case we would be able to express preferences using a constant relative risk aversion (CRRA) utility function

u(x) = x^(1-r) / (1 - r)

where x is money and r is the coefficient of relative risk aversion. I can come back to why this is the simplest case shortly. First, let's have a quick look at how it works. Suppose that someone chooses option A for choice 4. Then we can infer that

0.4 × u($2.00) + 0.6 × u($1.60) ≥ 0.4 × u($3.85) + 0.6 × u($0.10).

It is then a case of finding for what values of r this inequality holds. And it does for r greater than or equal to around -0.15. Suppose the person chooses option B for choice 5. Then we know that

0.5 × u($2.00) + 0.5 × u($1.60) ≤ 0.5 × u($3.85) + 0.5 × u($0.10).

This time we get r less than or equal to around 0.15. So, a person who switches between choices 4 and 5 has a relative risk aversion of between -0.15 and 0.15.
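
These cut-offs are easy to reproduce. The sketch below assumes the payoffs from Holt and Laury's original table - option A pays $2.00 or $1.60, option B pays $3.85 or $0.10, and the probability of the higher prize is k/10 in choice k - and searches for the value of r at which each choice tips from A to B.

```python
import math

def crra(x, r):
    """CRRA utility u(x) = x**(1 - r) / (1 - r), with log utility at r = 1."""
    return math.log(x) if abs(1 - r) < 1e-9 else x ** (1 - r) / (1 - r)

def prefers_A(choice, r):
    """True if option A has the higher expected utility in the given choice (1 to 10)."""
    p = choice / 10   # probability of the higher prize in each option
    eu_A = p * crra(2.00, r) + (1 - p) * crra(1.60, r)
    eu_B = p * crra(3.85, r) + (1 - p) * crra(0.10, r)
    return eu_A >= eu_B

def cutoff(choice, lo=-2.0, hi=2.0, tol=1e-4):
    """Bisect for the r at which options A and B are equally attractive."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_A(choice, mid):
            hi = mid     # A already preferred: the cut-off lies at a lower r
        else:
            lo = mid
    return (lo + hi) / 2

print(round(cutoff(4), 2))   # about -0.15: choosing A at choice 4 needs r above this
print(round(cutoff(5), 2))   # about  0.15: choosing B at choice 5 needs r below this
```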
            Let us return now to the claim that a CRRA function keeps things simple. The dollar amounts in the choices above are small. What happens if we make them bigger? Holt and Laury tried multiplying them by 20, 50 and 90. Note that by the time we get to multiplying by 90 the amounts at stake run into hundreds of dollars, which is quite a lot of money for an experiment. If the CRRA utility function accurately describes preferences then people should behave exactly the same no matter how big the stakes. This would be ideal. Holt and Laury found, however, that people were far more likely to choose option A when the stakes were larger, which means the CRRA function did not describe subjects' choices well.
           So, what does the rejection of CRRA mean? It tells us that just asking someone to make the 10 choices above is not enough to discern their preferences for risk. We learn what they would do for those magnitudes of money but cannot extrapolate from that to larger amounts. We cannot, for instance, say that someone is risk averse or risk loving because that person might appear risk loving for gambles over small amounts of money and risk averse for larger amounts. To fully estimate risk preferences we need to elicit choices over gambles with varying magnitudes of money.
       Despite all this, it is pretty standard to run the Holt and Laury approach at the end of experiments. The basic goal of doing so is to see if behavior in the experiment, say on public goods, correlates with attitudes to risk. Note that the simplicity of the Holt and Laury approach is a real draw here because you don't want to add something overly complicated to the end of an experiment. Care, though, is needed in interpreting results. As we have seen the Holt and Laury approach is not enough to parameterize preferences. All we can basically infer, therefore, is that one subject is relatively more or less risk averse or loving than another. This, though, is informative as a rough measure of how attitudes to risk influence behavior. Key, therefore, is to focus on relative rather than absolute comparisons.

Tuesday, 21 February 2017

Does a picture make people more cooperative?

In a standard economic experiment the anonymity of subjects is paramount. This is presumably because of a fear that subjects might behave differently if they knew others were 'watching them' in some sense. In the real world, however, our actions obviously can be observed much of the time. So, it would seem important to occasionally step out of the purified environment of the standard lab experiment and see what happens when we throw anonymity in the bin.
        A couple of experiments have looked at behavior in public good games without anonymity. Let me start with the 2004 study of Mari Rege and Kjetil Telle entitled 'The impact of social approval and framing on cooperation in public good games'. As is standard, subjects had to split money between a private account and group account, where contributing to the group account is good for the group. The novelty is in how this was done.
      Each subject was given some money and two envelopes, a 'group envelope' and 'private envelope', and asked to split the money between the envelopes. In a no-approval treatment the envelopes were then put in a box and mixed up before they were opened up and the contributions read out aloud. Note that in this case full anonymity is preserved because the envelopes are mixed up. In an approval treatment, by contrast, subjects were asked to publicly open their envelopes and write the contribution on the blackboard. Here there is zero anonymity because the contribution of each subject is very public.
        Average contributions to the group account were 44.8% (of the total amount) in the no-approval treatment and 72.8% in the approval treatment. So, subjects contributed a lot more when anonymity was removed.
        Similar results were obtained by James Andreoni and Ragan Petrie in a study entitled 'Public goods experiments without confidentiality'. Here, the novelty was to have photos of subjects together with their contributions to the group account, as in the picture below. In this case contributions increased from 26.9% in the absence of photos to 48.1% with photos. Again subjects contributed a lot more when anonymity was removed.

 
         So, why does anonymity matter? A study by Anya Samek and Roman Sheremeta, entitled 'Recognizing contributors', sheds some light on this. As well as treatments with no photos and everyone's photos they had treatments in which only the lowest and only the highest contributors had their photos displayed, as in the middle picture below.


          Again, photos made a big difference, increasing average contributions from 23.4% to 44.2%. Interestingly, displaying the photos of top contributors made little difference (up to 27.8%) while displaying the photos of the lowest contributors made a big difference (up to 44.9%). This would suggest that contributions increase without anonymity because subjects dislike being the lowest contributors. So, we are talking shame rather than pride.
       What do we learn from all this? Obviously we can learn interesting things by dropping anonymity. In particular, we have learnt that contributions to group projects may be higher when individual contributions can be identified. Indeed, in a follow-up paper, entitled 'When identifying contributors is costly', Samek and Sheremeta show that the mere possibility of looking up photos increases contributions. That, though, raises some tough questions. If behavior is radically different without anonymity then is it good enough to keep on churning out results based on lab experiments with complete anonymity? I don't think it is. The three studies mentioned above have shown how anonymity can be dropped without compromising scientific rigor. More of that might be good.

Saturday, 4 February 2017

Kindness or confusion in public good games

The linear public good game is, as I have mentioned before on this blog, the workhorse of experiments on cooperation. In the basic version of the game there is a group of, say, 4 people. Each person is given an endowment of, say, $10 and asked how much they want to contribute to a public good. Any money a person does not contribute is theirs to keep. Any money that is contributed is multiplied by some factor, say 2, and shared equally amongst group members.
         Note that for every $1 a person does not contribute they get a return of $1. But, for every $1 they do contribute they get a return of $0.50 (because the $1 is converted to $2 and then shared equally amongst the 4 group members). It follows that a person maximizes their individual payoff by contributing 0 to the public good. Contributing to the public good does, however, increase total payoffs in the group because each $1 contributed is converted to $2. For example, if everyone keeps their $10 then they each get $10. But, if everyone contributes $10 then they each get $20.
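
A short sketch of the payoff arithmetic, using the numbers above (four players, a $10 endowment and a multiplier of 2):

```python
def payoffs(contributions, endowment=10, multiplier=2):
    """Linear public good game: keep what you don't contribute, plus an equal
    share of the multiplied total contributions."""
    share = multiplier * sum(contributions) / len(contributions)
    return [endowment - c + share for c in contributions]

print(payoffs([0, 0, 0, 0]))      # nobody contributes:   [10, 10, 10, 10]
print(payoffs([10, 10, 10, 10]))  # everyone contributes: [20, 20, 20, 20]
print(payoffs([0, 10, 10, 10]))   # free-riding pays: the non-contributor gets 25
```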
         The typical outcome in a public good experiment is as follows: Average contributions in the first round of play are around 50% of endowments. Then, with repetition, contributions slowly fall to around 20-30% of endowments. So, what does that tell us?
          The standard interpretation is to appeal to kindness. Lots of people are seemingly willing to contribute to the public good. This is evidence of cooperative behaviour. The fall in contributions can then be explained by reciprocity. Basically, those who contribute get frustrated at those who do not contribute and so lower their own contribution over time. There is no doubt that this story fits observed data. But, there is another, arguably much simpler, explanation.
           This alternative interpretation appeals to confusion. Imagine what would happen if people did not understand the experiment instructions. Then we could expect random choice in the first round, which would give contributions around 50%. And then we would expect contributions to fall over time as subjects better understand the game. This also fits the pattern of observed behaviour quite well. So, how to rule out confusion?
           One approach is to come up with a game which is similar to a linear public good game in terms of instructions but where there is no role for kindness. James Andreoni, in his study 'Cooperation in public-goods experiments: kindness or confusion', looked at one possibility. He considered a game where subjects are ranked based on how much they contribute to the public good and then paid based on their rank, with those who contribute least getting the highest payoff. The crucial thing about this game is that it is constant-sum, meaning that group payoffs are the same no matter what and so there is no possibility to be cooperative. Indeed, there is no incentive to do anything other than contribute 0. Contributions in this Rank game can, therefore, be attributed to confusion rather than kindness.
          The graph below summarises the results. In the standard game (Regular) we see that average contributions start just over 50% and then fall to 30%. In the Rank game they start at 30% and fall to around 5%. If we interpret contributions in the Rank game as confusion then we can see that around half of contributions in the first round are confusion but confusion soon disappears as a factor.


              If the approach taken by Andreoni was beautifully subtle, then that taken by Daniel Houser and Robert Kurzban, in a follow-up study 'Revisiting kindness and confusion in public goods experiments', was remarkably blunt. They considered a game where a person was in a group with three computers. Clearly, if the other group members are computers then there is nothing to be gained by kindness. Again, therefore, any positive contribution can be interpreted as confusion.
            The graph below summarizes the results. In this case confusion seems to play a bigger role. Most contributions in the first round seem to be confusion because there is not much difference between playing with humans or computers. Moreover, contributions take longer to fall in the Computer game than they did in the Rank game.


            So what should we take from this? At face value the results of both studies give cause for concern. It seems that a significant part of behaviour is driven by confusion rather than kindness, particularly in the first round. There is, therefore, a danger of over-interpreting results. Things, however, may not be as bad as this would suggest. First, the Rank game is a bit complicated and the Computer game a bit weird and so there is more scope for confusion here than in the standard public good game. For instance, subjects may not fully register that they are playing with computers.
            We can also analyse individual data in more detail to test specific theories of reciprocity. If people behave systematically relative to the past history of contributions then we can be more confident that confusion is not the main thing at work. Recent studies are more in this direction. This, though, reiterates the need for experiments to work alongside theory. Without a theory it is much harder to distinguish confusion from systematic behaviour, and confusion may be an important driver of behaviour in the lab.

  

Saturday, 7 January 2017

Schelling, Brexit and Trump: Conflict is rarely a zero-sum game

Few, if any, have contributed as much to game theory as Thomas Schelling. Or, to perhaps be more accurate, surely nobody has more powerfully shown the value of applying game theory to understand the world around us. As we reflect on Schelling's contribution to knowledge, following his death in December, I think it is particularly useful to look back on one of his less touted but fundamental observations - conflict is rarely a zero-sum game.
           To put Schelling's insight in perspective it is important to recognise that the early development of game theory was hugely influenced by zero-sum games. These are games in which total payoffs always sum to zero, meaning that one player's gain must be another player's loss. Sporting and parlour games, like chess and bridge, are naturally modelled as zero-sum because they are about winning and losing. Zero-sum games also have some nice theoretical properties which mean they are particularly amenable to analysis. For this latter reason, more than any other, by the 1950s game theory was increasingly becoming the study of zero-sum games.
            Against this background, Schelling published in 1958 an article in the Journal of Conflict Resolution entitled 'The strategy of conflict: prospectus for a reorientation of game theory'. His ground-breaking book, The Strategy of Conflict, followed in 1960. The opening paragraph of his original paper sets the scene:

On the strategy of pure conflict - the zero sum games - game theory has yielded important insight and advice. But on the strategy of action where conflict is mixed with mutual dependence - the non-zero-sum games involved in wars and threats of war, strikes, negotiations, criminal deterrence, class war, race war, price war, and blackmail; manoeuvring in a bureaucracy or in a social hierarchy or in a traffic jam; and the coercion of one's own children - traditional game-theory has not yielded comparable insight or advice.

Game theory, therefore, needed to reorient itself away from zero-sum games. A fundamental part of the argument, clear in the range of examples Schelling gave in the quote above, is that most conflict is not zero-sum.
          One of the main examples Schelling used was the cold war. The U.S. and Soviet Union were clearly in extreme conflict. But that does not mean there was not scope for mutual gain. Comparing two possible scenarios easily illustrates the point. Scenario 1: The U.S. and Soviets throw nuclear weapons at each other causing mass devastation and millions of civilian casualties. Scenario 2: The U.S. and Soviets don't fire any nuclear weapons and there are no civilian casualties. Clearly, scenario 2 is considerably better for both the U.S. and Soviets than scenario 1.
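
A tiny payoff matrix captures the point. The numbers are purely illustrative: each side might see some gain in striking while the other holds back, but mutual war is catastrophic for both, so the game is far from zero-sum.

```python
# Illustrative deterrence game: entries are (US payoff, Soviet payoff). Made-up numbers.
payoffs = {
    ("hold fire", "hold fire"): (0, 0),        # scenario 2: no war
    ("fire",      "hold fire"): (1, -8),
    ("hold fire", "fire"):      (-8, 1),
    ("fire",      "fire"):      (-10, -10),    # scenario 1: mutual devastation
}

zero_sum = all(us + soviet == 0 for us, soviet in payoffs.values())
print("Zero-sum?", zero_sum)   # False: both sides gain by jointly avoiding disaster
```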
         As this example illustrates, conflict does not preclude potential gains from 'coordinating' or 'cooperating' on a mutually beneficial outcome - in this case avoiding nuclear war. Or, to quote Schelling:

These are games in which, though the element of conflict provides the dramatic interest, mutual dependence is part of the logical structure and demands some kind of collaboration or mutual accommodation - tacit, if not explicit - even if only in the avoidance of mutual disaster.

I particularly like the allusion to 'conflict provides the dramatic interest'. With that in mind let us look at Brexit and Trump.
         Brexit is a conflict between the U.K. and E.U. and is clearly not zero-sum. If trade negotiations go badly then both will suffer. If they go well then disaster can be averted. The popular press, and Brexiters, seem, however, to prefer to portray the conflict as zero-sum. Particularly telling are the arguments over whether the U.K. government should give details of its key negotiating demands. The government argues that showing its hand would weaken its bargaining position. Nonsense. This is a logic based on a zero-sum conflict like bridge, not a negotiation where mutual accommodation is essential. To quote Schelling again:

These are also games in which, though secrecy may play a strategic role, there is some essential need for the signalling of intentions and the meeting of minds.

In truth, I think the government plays along with the zero-sum narrative because it provides a convenient shield to hide the fact they don't have a plan. It is, though, interesting to see how easily the public buys the narrative.
          And so to Trump, where just about every conflict is portrayed as zero-sum. Trade with China is a zero-sum game, as is immigration from Mexico, climate change, and so on. Clearly, none of these issues are remotely zero-sum. Again, however, a narrative of pure conflict seems to go down well with a large proportion of the public. Possibly because it is easier to get excited about someone who will 'defend your interests against the enemy' rather than 'negotiate a mutually beneficial compromise'.  
           Looking back, there is no doubt that Schelling's call for a reorientation of game theory had an effect. Today, zero-sum games are considered by game theorists to be the theoretical extreme case that Schelling argued they were. It is much more likely that the prisoner's dilemma or the ultimatum game gets the attention. Outside of academic circles, however, it would seem that Schelling's critical insight remains poorly understood. Many seemingly prefer to view conflict as zero-sum. Hopefully, we can still avert disaster.