Friday, 8 June 2018

Contestable markets: Can you have monopoly and perfect competition at the same time?

Last Sunday the sun was out and the children's playground was full of kids and their families. As usual the ice cream van was nearby with a steady stream of willing customers. Then something unexpected happened - another ice cream van turned into the car park. What would happen? Well, the driver saw he was not alone, turned around and left. So, we missed out on any particular excitement. Even so, this brief encounter is a nice illustration of the concept of contestable markets.
    The standard textbook typically associates the extent of competition with the number of firms in the market. A monopoly has one firm and perfect competition has a large number of firms. Simple enough. But, also misleading, bordering on plain wrong. It is more accurate to measure competition, not by the number of firms, but by the restrictions on entry to the market and the standardization of goods in the market.
    To illustrate the issues consider our ice cream van. Suppose the local council has a system for allocating a permit to operate near the playground. And they issue only one permit. Then the firm that gets the permit has monopoly power. That power comes from the fact that only they are allowed to operate - there are barriers to entry. In this scenario the ice cream van would be able to charge monopoly prices. For instance, suppose the marginal cost of selling an ice cream is £1.50. There is nothing to stop the monopolist charging, say, £2.00. Fewer people will buy at £2.00 than at £1.50 but the net effect on profits may well be positive.
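To see how that arithmetic can work out, here is a minimal sketch in Python with a purely hypothetical demand schedule (the quantities are invented for illustration, not taken from anywhere):

# Hypothetical numbers: marginal cost of £1.50 per ice cream, and demand
# that falls from 100 cones at £1.50 to 80 cones at £2.00.
marginal_cost = 1.50

def profit(price, quantity):
    return (price - marginal_cost) * quantity

print(profit(1.50, 100))  # pricing at marginal cost earns nothing: 0.0
print(profit(2.00, 80))   # the higher price sells fewer cones but earns 40.0

The point is only that losing some customers can be more than offset by the higher margin on those who remain - which is exactly the power the permit confers.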
    Now consider the scenario where anyone can come and set up an ice cream van near the playground. Moreover, suppose that there are no costs at all to doing this. Does the single ice cream seller still have monopoly power? No, because if it charges more than £1.50 another ice cream van will soon come along and undercut. And, given the product is standardized, nobody is going to buy an ice cream at, say, £2.00 if they can buy it next door for £1.75. The threat of competition, therefore, keeps prices at marginal cost. This is the basic notion of contestable markets.
     Contestable markets mean that the number of sellers can be misleading. In particular, you could have only one seller but still have perfect competition. The one seller clearly satisfies the legal definition of monopoly because she has 100% market share. But the threat of entry means that she does not have market power to influence price. Hence, she does not meet the economic criterion for monopoly.
     Clearly the idea of free entry to a market is unrealistic. For instance, it costs money and time to drive an ice cream van to the playground to check what is going on. That is a barrier to entry. Generally speaking, however, the lower the cost of entry the less power firms have to influence price. This is what really drives competition in the long run.
     So, why do the textbooks focus on the number of firms? The number of firms may be a proxy for the ease of entry into the market. The correlation is, however, far from perfect. One can think of many contexts where a firm has a large market share but, if it were to push prices up too high, someone else would come in and undercut it. Moreover, a large number of firms in a market may be a sign of capacity constraints rather than lack of market power. Consider, for example, hotels and restaurants on a busy holiday weekend. There are lots of them but they can still hike up prices.
      There is, though, particularly in the short run, a big difference between one firm in the market and two or more. That second firm makes a difference because it provides direct rather than threatened competition. Once you have two firms then a third has much less of an impact. Which is why it was somewhat disappointing the second ice cream van didn't set up shop and give some entertainment. 




Saturday, 28 April 2018

Social value orientation in experimental economics, part I

The basic idea behind social value orientation (SVO) is to gain a snapshot of someone's social preferences. Are they selfish and simply do the best for themselves without caring about the payoff of others? Are they competitive and want to earn more than others (even if that means sacrificing own payoff)? Are they inequality averse and want to earn the same as others? Or are they pro-social and want to maximize the payoff of others? SVO is a tool most closely associated with social psychology, but there is no doubt that it has a useful role to play in economics.

A contribution that should be particularly interesting to economists is a recent meta-analysis published in the European Journal of Personality by Jan Luca Pletzer and co-authors. The analysis provides evidence on the connection between SVO, beliefs and behavior, which could feed into debates around reciprocity and psychological game theory. But I'm not going to talk about that study yet. Instead, I will do a couple of posts in which I explain different ways to measure SVO. Then I can get to the heart of why SVO can be useful for economists.    

The first economics study I know of that used SVO was published in 1996 by Theo Offerman, Joep Sonnemans and Arthur Schram in the Economic Journal. There are many ways to elicit SVO. Here I will look in some detail at the approach they used, which is called the decomposed game technique or ring technique. To get us started consider the 24 different allocations in the table below. For instance, allocation a means $0 for yourself and $15 for some other person that you are matched with. Allocation b means $3.9 for yourself and $14.5 for the other person, and so on. These 24 allocations neatly fit around a circle varying from a lot for both of you (allocation d) to not much for either of you (allocation p).



To elicit SVO subjects are given 24 decision tasks in which they need to choose between pairs of allocations from the circle. Specifically, they are asked if they would prefer allocation a or b, then if they would prefer b or c, then c or d, and so on, all around the circle. The slightly tricky thing is then converting those 24 choices into a measure of SVO. Here different studies take different approaches. The approach Offerman and co-authors take is to use the observed motivational vector. This works by adding up the total amount given to self and total amount given to other (over the 24 choices). From that we get a vector in the circle. The direction of that vector is used to measure value orientation.
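As a rough sketch of how this works in practice, the Python snippet below builds a ring of 24 allocations and adds up the choices of a purely selfish decision maker into a motivational vector. The radius of 15 and the 15-degree spacing are my own assumptions, chosen because they reproduce the allocations quoted above (for example $3.9 for self and $14.5 for other):

import math

# Assumed ring: 24 allocations of (self, other), radius 15, spaced 15 degrees
# apart, starting at 90 degrees which is ($0 for self, $15 for other).
allocations = [(15 * math.cos(math.radians(90 - 15 * k)),
                15 * math.sin(math.radians(90 - 15 * k))) for k in range(24)]

# Example decision rule: in each of the 24 paired choices pick the allocation
# with the higher own payoff (an individualistic subject).
total_self = total_other = 0.0
for a, b in zip(allocations, allocations[1:] + allocations[:1]):
    chosen = a if a[0] >= b[0] else b
    total_self += chosen[0]
    total_other += chosen[1]

# The observed motivational vector and its angle to the horizontal.
angle = math.degrees(math.atan2(total_other, total_self))
print(round(total_self), round(total_other))  # 30 to self, 0 to other
print(round(angle, 1))                        # an angle of (essentially) 0 degrees

Run as it is, this gives a total of 30 for self, 0 for the other person and an angle of 0 degrees, which matches the individualistic example worked through below.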

The table below works through two examples. First we have an individualistic person who simply chooses whichever option maximizes his own payoff. If you add up all his payoffs he overall gives 30 to himself and 0 to the other person. The angle this makes relative to the horizontal is 0 degrees. Next we have a cooperative person who makes 6 different choices, highlighted in yellow. In these 6 choices the individual sacrifices a little of his own payoff to the benefit of the other person. Ultimately both he and the other person end up with a total payoff of 21.2. This more pro-social behavior means the motivational vector is 45 degrees to the horizontal.




Having worked out the angle of the observed motivational vector we can then classify SVO. (To work out the angle we need a bit of high school trigonometry, using arctan(other/self).) The figure below summarizes the classification. Anyone with a vector between -22.5 and 22.5 degrees is classified as individualistic - they care mainly about self. Anyone between 22.5 and 67.5 is classified as cooperative - these are somewhat pro-social towards others. While there are five categories in all it is individualistic and cooperative that matter most. For example, in the study by Offerman and co-authors, 65% of subjects were individualistic and 27% were cooperative. This split is fairly typical.
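Here is a minimal sketch of the classification step. The individualistic and cooperative bands are as described above; the remaining labels (altruistic, competitive, aggressive) are the ones commonly used with the ring technique and are my assumption rather than something spelled out in this post:

import math

def classify_svo(total_self, total_other):
    # Angle of the observed motivational vector relative to the horizontal.
    angle = math.degrees(math.atan2(total_other, total_self))
    if -22.5 <= angle < 22.5:
        return "individualistic"
    if 22.5 <= angle < 67.5:
        return "cooperative"
    if angle >= 67.5:
        return "altruistic"
    if -67.5 <= angle < -22.5:
        return "competitive"
    return "aggressive"

print(classify_svo(30.0, 0.0))    # the individualistic example: angle 0
print(classify_svo(21.2, 21.2))   # the cooperative example: angle 45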



As I have already said, the method described above is only one of many ways to elicit SVO. But it is a relatively easy method for economists to use. And it actually gives you two measures: the angle of the observed motivational vector allows you to classify SVO while the length of the vector gives you a measure of consistency of choice. The longer the vector the more consistent the person is with their classified SVO. Indeed, you sometimes find subjects who overall give 0 to themselves and 0 to the other person, which suggests their choices are simply random.

So what to do with the SVO once you have it? I will come back to the issue looked at by Offerman and co-authors in a later post. Here I will look at a slightly simpler issue considered by Eun-Soo Park and published in the Journal of Economic Behavior and Organization. Park looks at framing effects in public good games and the tendency for contributions to be higher in a positive frame - contribute and it benefits the group - than a negative frame - keep for yourself and it harms the group. By measuring SVO using the decomposed game technique Park finds that the framing effect is driven by individualistic types. The figure below illustrates how stark the effect was. With cooperative types there is no sign of any framing effect, but for individualistic types the effect is large. That finding can potentially give us important clues as to why we observe a framing effect. In particular, it suggests that selfish people can be induced to cooperate given the right frame.





Friday, 23 March 2018

Cooperation in the infinitely (or indefinitely) repeated prisoners dilemma

One of the more famous and intriguing results of game theory is that cooperation can be sustained in a repeated prisoners dilemma as long as nobody knows when the last game will be played. To set out the basic issue consider the following game between Bob and Francesca.


If they both cooperate they get a nice payoff of 10 each. If they both defect they get 0 each. Clearly mutual cooperation is better than mutual defection. But, look at individual incentives. If Francesca cooperates then Bob does best to defect and get 15 rather than 10. If Francesca defects then Bob does best to defect and get 0 rather than -5. Bob has a dominant strategy to choose defect. So does Francesca. We are likely to end up with mutual defection.

But what if Bob and Francesca are going to play the game repeatedly with each other? Intuitively there is now an incentive to cooperate in one play of the game in order to encourage cooperation in subsequent plays of the game. To formalize that logic suppose that whenever Bob and Francesca interact there is a probability p that they will interact again tomorrow. Also suppose that both Bob and Francesca employ a grim trigger strategy - I will cooperate unless you defect and if you defect I will defect forever after. Can this sustain cooperation?

If Bob and Francesca cooperate in every play of the game the expected payoff of Bob is 10 for as long as they keep on playing, which gives 10 + 10p + 10p^2 + ... = 10/(1 - p).
If Bob defects now his expected payoff is 15 because he gets a one time benefit and then has to settle for mutual defection from then on. It follows that cooperating makes sense if 10/(1 - p) > 15 or p > 0.33. It should be emphasized that this story relies on Bob and Francesca both using a grim trigger strategy and both expecting the other to use it. Even so, cooperation can, in principle, be sustained if p is high enough. By contrast, if people know when the end is likely to come (p is small) then there is no hope of sustaining cooperation.
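A quick numerical check of that threshold, using the payoffs above (a one-off 15 from defecting followed by 0 forever, against 10 per period for as long as play continues):

def cooperate_value(p, stage_payoff=10):
    # Expected payoff under mutual cooperation: 10 + 10p + 10p^2 + ... = 10/(1 - p)
    return stage_payoff / (1 - p)

def defect_value(temptation=15):
    # Defect now for 15, then mutual defection (0 per period) forever after
    return temptation

# Cooperation (under grim trigger) is worthwhile whenever 10/(1 - p) > 15,
# i.e. whenever p > 1/3.
for p in (0.2, 1/3, 0.5, 0.9):
    print(p, cooperate_value(p) > defect_value())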

What of the evidence? A paper recently published by Pedro Dal Bo and Guillaume Frechette in the Journal of Economic Literature surveys the evidence. They fit a model to a meta-data set of over 150,000 choices from relevant studies. The Figure below summarizes some of the findings that come out of that model. In interpreting this figure we need to understand that in most experiments subjects play the repeated game several times against different opponents. So, Bob plays with Francesca for, say, 10 rounds (determined randomly according to p). This is supergame 1. He then plays with Claire for, say, 5 rounds (again randomly determined according to p). This is supergame 2. And so on.

The figure below shows the fitted probability of a subject cooperating in round 1 of supergame 1 and of round 1 of supergame 15. Look first at supergame 1. Here we can see that around 50-60% of subjects cooperate - which is quite high - and the probability of cooperating does not depend much on p. This is inconsistent with the theory because we would not expect such high levels of cooperation when p is low. What about in supergame 15? Here we see a much higher dependence on p. This is starting to look more consistent with the theory because we see low levels of cooperation when p is low. 



So, can cooperation be sustained in a repeated prisoners dilemma? The relatively high levels of cooperation seen in the above figure may give some optimism. But it is important to appreciate that cooperation is only going to be sustained if both people cooperate. If there is a 50% chance a random individual will cooperate then there is only a 25% chance they will start with mutual cooperation. This does not look so good. And it turns out that 'always defect' consistently shows up as the most popular strategy employed when playing the prisoners dilemma. The chances of sustained cooperation among two strangers seem, therefore, somewhat remote.

All hope, though, is not lost because life is not only about interaction between strangers. Once we add in reputation, choosing who your friends are, and so on, there are various mechanisms that may be able to sustain cooperation. And, even putting that aside, there are still strong arguments to try and cooperate with strangers. As David Kreps, Paul Milgrom, John Roberts and Robert Wilson pointed out in a well-cited paper it is not always in your interest to defect in the first round of a prisoners dilemma. Basically, if the other person wants to be cooperative then by defecting you miss out long term. Better to cooperate and give the other person a chance.

Wednesday, 28 February 2018

How many subjects in an economic experiment?

How many subjects should there be in an economic experiment? One answer to that question would be to draw on power rules for statistical significance. In short, you need enough subjects to have a reasonable chance of rejecting the null hypothesis you are testing when it is in fact false. This approach, though, has never really been standard in experimental economics. There are two basic reasons for this - practical and theoretical.

From a practical point of view the power rules may end up suggesting you need a lot of subjects. Suppose, for instance, you want to test cooperation within groups of 5 people. Then the unit of observation is the group. So, you need 5 subjects for 1 data point. Let's suppose that you determine you need 30 observations for sufficient power (which is a relatively low estimate). That is 30 x 5 = 150 subjects per treatment. If you want to compare 4 treatments that means 600 subjects. This is a lot of money (at least $10,000) and also a lot of subjects to recruit to a lab. In simple terms, it is not going to happen.  
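To give a feel for where a number like 30 observations per treatment comes from, here is a rough power calculation using statsmodels. The effect size, significance level and power are illustrative assumptions on my part, not numbers from the post:

from statsmodels.stats.power import TTestIndPower

# Observations needed per treatment for a two-sample t-test, assuming
# (illustratively) a standardized effect size of 0.75, 5% significance
# and 80% power.
n_per_treatment = TTestIndPower().solve_power(effect_size=0.75, alpha=0.05, power=0.8)
print(n_per_treatment)  # roughly 29 independent observations per treatment

# With groups of 5 as the unit of observation and 4 treatments to compare,
# the subject count multiplies up quickly.
print(round(n_per_treatment) * 5 * 4)  # around 580 subjects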

That may appear to be sloppy science but there is a valid get-out clause. Most of experimental economics is about testing a theoretical model. This allows for a Bayesian mindset in which you have a prior belief about the validity of the theory and the experimental data allows you to update that belief. The more subjects and observations you have the more opportunity to update your beliefs. But even a small number of subjects is useful in updating your beliefs. Indeed, some of the classic papers in experimental and behavioral economics have remarkably few subjects. For instance, the famous Tversky and Kahneman (1992) paper on prospect theory had only 25 subjects. That did not stop the paper becoming a classic.

Personally I am a fan of the Bayesian mindset. This mindset doesn't, though, fit comfortably with how economic research is typically judged. What we should be doing is focusing on a body of work in which we have an accumulation of evidence, over time, for or against a particular theory. In practice research is all too often judged at the level of a single paper. That incentivizes the push towards low p values and an over-claiming of the significance of a specific experiment. 

Which brings us on to the replication crisis in economics and other disciplines. A knee-jerk reaction to the crisis is to say we need ever-bigger sample sizes. But, that kind of misses the point. A particular experiment is only one data point because it is run with a specific subject pool using a specific protocol. Adding more subjects does not solve that. Instead we need replication with different subject pools under different protocols - the slow accumulation of knowledge. And we need to carefully document research protocols.

My anecdotal impression is that journal editors and referees are upping the ante on how many subjects it takes to get an experiment published (without moving things forward much in terms of documenting protocols). To put that theory to a not-at-all scientific test I have compared the papers that appeared in the journal Experimental Economics in its first year (1998) and most recent edition (March 2018). Let me emphasize that the numbers here are rough-and-ready and may well have several inaccuracies. If anyone wants to do a more scientific comparison I would be very keen to see it. 

Anyway, what do we find? In 1998 the average number of subjects was 187, which includes the study of Cubitt, Starmer and Sugden where half the population of Norwich seemingly took part. In 2018 the average is 383. So, we see an increase. Indeed, among the 1998 studies only those of Cubitt et al. and Isaac and Walker are above the 2018 minimum. The number of observations per treatment is also notably higher in 2018, at 46 compared to 16 in 1998. Again, those numbers are almost certainly wrong (for instance the number of independent observations in Kirchler and Palan is open to interpretation). The direction of travel, though, seems clear enough. (It is also noticeable that around half of the papers in 1998 were survey papers or papers reinterpreting old data sets. Not in 2018.)


At face value we should surely welcome an increase in the number of observations? Yes, but only if it does not come at the expense of other things. First we need to still encourage replication and the accumulation of knowledge. Experiments with a small number of subjects can still be useful. And, we also do not want to create barriers to entry. At the top labs running an experiment is relatively simple - the money, subject pool, programmers, lab assistants, expertise etc. are there and waiting. For others it is not so simple. The more constraints we impose for an experiment to count as 'well-run' the more experimental economics may potentially become 'controlled' by the big labs. If nothing else, that poses a potential problem in terms of variation in subject pool. Big is, therefore, not necessarily better. 

Sunday, 28 January 2018

Would you want to be an expected utility maximizer?

I have finally got around to reading Richard Thaler's fantastically wonderful book Misbehaving. One thing that surprised me in the early chapters is how Thaler backs expected utility theory as the right way to think. Deviations from expected utility are then interpreted as humans not behaving 'as they should'. While I am familiar with this basic argument it still came as a surprise to me how firmly Thaler backed expected utility theory. And I'm not sure I buy this argument.

To appreciate the issue consider some thought experiments. Thaler gives the following example:

Stanley mows his lawn every weekend and it gives him terrible hay fever. I ask Stan why he doesn't hire a kid to mow his lawn. Stan says he doesn't want to pay the $10. I ask Stan whether he would mow his neighbor's lawn for $20 and Stan says no, of course not.

From the point of view of expected utility theory Stan's behavior makes no sense. What we should do is calculate the recompense Stan needs to mow a lawn. That he will not pay the kid $10 shows that the cost to him of mowing a lawn is less than $10. That he will not mow his neighbor's lawn for $20 shows that the cost to him of mowing a lawn is more than $20. But how can mowing a lawn cost less than $10 and more than $20? It cannot! His choice makes no sense.

Here is another example (which I have adapted a bit):

Morgan got free tickets to a high profile NBA basketball game. Tickets are selling on the legal second hand market for $300. Morgan says that he is definitely going to the game. Asked if he would pay $200 for a ticket he says 'of course not'.

Again, from the point of view of expected utility theory Morgan's choices are nonsense. What we should do here is ask how much he values going to the game. He explicitly says that he would not be willing to pay $200. Yet by going to the game he misses out on selling his tickets for $300. So, it looks as though he values the game more than $300. But how can going to the game be worth less than $200 and more than $300? It cannot! His choices also make no sense.

What to do with these two examples? The key question is this: Do you think Stanley and Morgan would change their choices if the 'irrationality' of their choices were explained to them? Personally, I think not. To them their behavior probably seems perfectly sensible, and who are we to argue against that? One response to this would be to 'add on' factors that influence preferences such as 'I prefer mowing my own lawn' or 'I don't like giving away tickets'. This can rescue expected utility theory but is precisely the kind of ad-hocness that behavioral economics is trying to get away from. So, I think it is just better to accept that expected utility is not necessarily the right way to make decisions.

Does this matter? Mainstream economics is built upon the premise that expected utility theory is a good representation of how people make decisions. The work of Thaler and others has blown that idea out of the water. So, whether or not expected utility is the right way to do things is rather a moot point. There is still merit in learning how a person should behave if their preferences satisfy certain basic axioms.

Things are more complicated when we look at the heuristics and biases approach led by Daniel Kahneman and Amos Tversky. Here the very term 'bias' suggests that there is a right way to do things! It is worth clarifying, however, that much of this work relates to probabilistic reasoning where there is a clearly defined right way of doing things. I suggest that we just have to be a bit careful extending the biases terminology to choice settings where there may not be a right way of doing things. For instance, is loss aversion a bias? Put another way, is it irrational to behave in one way when things are framed as losses and another way when the same choice is framed in terms of gains? It is certainly different to how economists have traditionally modeled things. But it still seems perfectly sensible to me (and may make evolutionary sense) that someone could be influenced by the frame. Maybe, therefore, we need to call it the status-quo effect rather than the status-quo bias. This, though, is surely more a matter of semantics than anything particularly substantive.

Things are also a bit complicated when we come to policy interventions. For instance, the nudge idea kind of suggests there is a right way of doing things that we can nudge people towards. But then there are so many situations where people make unambiguously bad choices (like failing to save for retirement) that we can still be excited about nudge without getting too caught up on whether expected utility theory is always the best way to make decisions. And remember the basic idea behind nudge is that we should never restrict choice anyway.

So, whether or not it is rational to maximize expected utility is probably not a big issue. It just means, to use Thaler's well known terminology, that Humans may not be quite as dumb as Thaler claims, but they are undeniably very different to the Econs you find in a standard textbook.

Saturday, 30 December 2017

Behavioral economics or experimental economics

My holiday reading started with the book Behavioral Economics: A History by Floris Heukelom. The book provides an interesting take on how behavioral economics has grown from humble beginnings to the huge phenomenon that it now is. A nice review of the book has been written by Andreas Ortmann and so I will not delve too deeply into general comment here, other than to say I enjoyed reading the book.

But in terms of more specific comment, one theme running throughout the book is the distinction between behavioral economics and experimental economics. Heukelom makes clear that he thinks there is a very sharp distinction between these two fields. Personally I have always thought of them both as part of one big entangled blob. There are people who clearly prefer to label themselves a behavioral economist or an experimental economist but this seemed to me more a matter of personal preference than any grand design. So, what is the difference between behavioral and experimental economics?

Heukelom's viewpoint is based on a very narrow definition of experimental economics and behavioral economics. Specifically, he associates experimental economics with the work of Vernon Smith on experimental asset markets and he associates behavioral economics with the work of Kahneman, Tversky and Thaler, particularly with regard to prospect theory. The gap between research on market efficiency in the lab and that on prospect theory is indeed large. For instance, the former is more focused on market institutions and ecological rationality (i.e. how do markets work) while the latter is focused on individual decision making and individual rationality (i.e. how do people behave). So, here a neat dividing line does potentially exist.

The problem with this view is that experimental asset markets are, and long have been, only one small part of work that must surely fall under the umbrella of experimental economics. (See, for instance, the short summary on the early years of experimental economics by Alvin Roth.) Similarly, prospect theory is only one small part of work that must fall under the umbrella of behavioral economics. For example, one elephant in the room here is game theory. From its very beginnings game theory has had an experimental side which has grown alongside work on markets. For instance, experiments with the prisoners dilemma and social dilemmas more generally began in the 1950s, if not before, and are generally seen as a big part of experimental economics. Similarly, a big part of behavioral economics has been to understand social preferences and move away from the standard economic assumption of selfishness. Indeed, the dictator game, which is now a mainstay of experimental economics, was first used by Kahneman, Knetsch and Thaler in a paper published in 1986.

In short, everything is mixed up. Other ways of trying to find a neat dividing line between behavioral and experimental economics would also seem doomed to end up with a mess. For instance, at the end of the book Heukelom associates modern behavioral economics with the use of mathematical methods. But that would seemingly exclude a host of behavioral economists, Dan Ariely to name just one, whose work is not particularly characterized by the use of mathematics. Similarly, experimental economists, like Robert Sugden and Chris Starmer, have been prominent in recent developments in prospect theory.

This is not to say that experimental and behavioral economics are the same. Experimental economics is characterized by a method of doing things - namely experiments - while behavioral economics (although much harder to tie down) is more characterized by an objective to understand how people reason in economic contexts. The trouble is it is hard to see how the one can be done without the other. Pushed to the limits it may be possible to study experimental markets without being bothered with individual behavior. Or to work on individual behavior without recourse to lab or field experiments. The truth, though, is surely that the two go very much hand in hand and, given that we are talking about the history of behavioral economics, always have done.   

An interesting question is how things will develop in the future. Both the terms experimental and behavioral economics are essentially referring to methods. In the infancy of something like experimental economics it is natural that someone doing experiments would use a label like experimental economics to distinguish what they are doing. But the more routine it becomes for the 'average' economist to use experiments or draw on behavioral theory the less relevant the labels would seem to be. Instead we could gravitate towards a focus on applications with more use of the labels like public, labor and development economics. Behavioral economics is, though, presumably too much of a buzz phrase for that to happen any time soon. 

Wednesday, 8 November 2017

Rank dependent expected utility

Prospect theory is most well known for its assumption that gains are treated differently to losses. Another crucial part of the theory, namely that probabilities are weighted, typically attracts much less attention. Recent evidence, however, is suggesting that probability weighting has a crucial role to play in many applied settings. So, what is probability weighting and why does it matter?

The basic idea of probability weighting is that people tend to overestimate the likelihood of events that happen with small probability and underestimate the likelihood of events that happen with medium to large probability. In their famous paper on 'Advances in prospect theory', Amos Tversky and Daniel Kahneman quantified this effect. They fitted experimental data to the weighting function

π(p) = p^γ / (p^γ + (1 - p)^γ)^(1/γ)

where γ is a parameter to be estimated. In interpretation, p is the actual probability and π(p) the weighted probability. The figure below summarizes the kind of effect you get. Tversky and Kahneman found that a value of γ around 0.61 best matched the data. This means that something which happens with probability 0.1 gets a decision weight of around 0.2 (overweighting of small probabilities) while something that happens with probability 0.5 gets a decision weight of only around 0.4 (underweighting of medium to large probabilities).
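For concreteness, here is the weighting function coded up with the Tversky and Kahneman estimate of γ = 0.61 (a small sketch; the exact decision weights obviously depend on the parameter value):

def weight(p, gamma=0.61):
    # Tversky and Kahneman (1992) probability weighting function
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

print(round(weight(0.1), 2))  # about 0.19: small probabilities are over-weighted
print(round(weight(0.5), 2))  # about 0.42: medium probabilities are under-weighted
print(round(weight(0.9), 2))  # about 0.71: large probabilities are under-weighted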



Why we observe this warping of probabilities is unclear. But the consequences for choice can be important. To see why consider someone deciding whether to take on a gamble. Their choice is either to accept £10 for certain or gamble and have a 10% chance of winning £90 and a 90% chance of winning nothing. The expected value of this gamble is 0.1 x 90 = £9. So, it does not look like a good deal. But, if someone converts a 10% probability into a decision weight of 0.2 we get value 0.2 x 90 = £18. Suddenly the gamble looks great! Which might explain the appeal of lottery tickets.

There is, though, a problem. It is not enough to simply weight all probabilities. This, as I will shortly explain, doesn't work. So, we need some kind of trick. While prospect theory was around in 1979 it was not until the early 1990s that the trick was found. That trick is rank dependent weighting. The gap of over 10 years in finding a way to deal with probabilities may help explain why probability weighting has had to play second fiddle to loss aversion. Let's, though, focus on the technical details.

Consider the example above. Here there are no obvious problems if we just weight probabilities. The 10% chance of winning is converted into a 0.2 decision weight while the 90% chance of losing is converted into a 0.7 decision weight. The overall expected value is then 0.2 x £90 + 0.7 x £0 = £18. Everything looks fine.

Now consider another example. Suppose that the sure £10 is now a gamble with a 10% chance of winning £10.09, a 10% chance of winning £10.08, a 10% chance of winning £10.07, and so on, down to a 10% chance of winning £10. If we simply weight all these 10% probabilities as 0.2 then we get an expected value of 0.2 x 10.09 + 0.2 x 10.08 + ... + 0.2 x 10 = £20.09. This is absurd. A gamble that essentially gives £10 cannot be worth over £20! You might say that the problem here is we have ended up with a combined weight of 2. If, though, we normalize weights to 1 we will not have captured the over-weighting of small probabilities. So, normalizing is not, of itself, a solution.

The problem with the preceding approach is that we have weighted everything - good or bad - by the same amount. Rank dependent weighting does away with that. Here we rank outcomes from best to worst. The decision weight we place on an outcome is then the weighted probability of that outcome or something better minus the weighted probability of something strictly better.

In our original gamble the best outcome is £90 and the worst is £0. The weight we put on £90 is around 0.2 because there is a 10% chance of £90, no chance of anything better, and a 10% probability is given weight 0.2. The weight we put on £0 is 0.8 because it is the weighted probability of £0 or better, namely 1, minus the weighted probability of £90, namely 0.2. So, not much changes in this example.

In the £10 gamble the best outcome is £10.09, the next best £10.08, and so on. The decision weight we put on £10.09 is around 0.2 because there is a 10% chance of £10.09 and no chance of anything better. Crucially, the weight we put on £10.08 is only around 0.1 because we have the weighted probability of £10.08 or better, a 20% chance that gives weight around 0.3, minus the weighted probability of £10.09, around 0.2. You can verify that the chance of winning £10.07, £10.06 and so on has an even lower decision weight. Indeed, decision weights have to add to 1 and so the high weight on £10.09 is compensated by a lower weight on other outcomes. For completeness the table below gives the exact weights you would get with the Tversky and Kahneman parameters. Given that decision weights have to add to 1 the expected value is going to be around £10. Common sense restored!
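As a sketch of the rank dependent calculation, the snippet below computes the exact decision weights for both gambles using the weighting function above with γ = 0.61 (the numbers differ slightly from the rounded ones in the text):

def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def decision_weights(outcomes):
    # outcomes: list of (payoff, probability) pairs, gains only.
    # Rank from best to worst, then give each outcome the weighted probability
    # of 'that outcome or better' minus the weighted probability of 'better'.
    outcomes = sorted(outcomes, reverse=True)
    weights, cumulative = [], 0.0
    for payoff, prob in outcomes:
        weights.append((payoff, weight(cumulative + prob) - weight(cumulative)))
        cumulative += prob
    return weights

# The £90 gamble: roughly 0.19 on £90 and 0.81 on £0.
print([(x, round(w, 2)) for x, w in decision_weights([(90, 0.1), (0, 0.9)])])

# The ten-outcome gamble paying £10.00 to £10.09: the weights sum to 1
# and the rank dependent value comes out close to £10, not £20.
ten_pound = [(10 + 0.01 * k, 0.1) for k in range(10)]
dw = decision_weights(ten_pound)
print(round(sum(w for _, w in dw), 2))
print(round(sum(x * w for x, w in dw), 2))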




Generally speaking, rank dependent weighting means that we capture, and only capture, over-weighting of the extreme outcomes. So, we capture the fact a person may be overly optimistic about winning £90 rather than £0 without picking up the perverse prediction that every unlikely event is over-weighted. The discussion so far has focused on gains but we can do the same thing with losses. Here we want to capture, and only capture, over-weighting of the worst outcomes. 

So why does all this matter? There is mounting evidence that weighting of probabilities can explain a lot of behavior, including the equity premium puzzle, long shot bias in betting and willingness of households to buy insurance at highly unfavorable premiums. For a review of the evidence see the article by Helga Fehr-Duda and Thomas Epper on 'Probability and risk: Foundations and economic implications of probability-dependent risk preferences'. It is easy to see, for instance, why overweighting of small probabilities could have potentially profound implications for someone's view of insurance. A very small probability of loss may be given a much higher decision weight. That makes insurance look like a good deal.