Wednesday, 28 February 2018

How many subjects in an economic experiment?

How many subjects should there be in an economic experiment? One answer to that question would be to draw on power rules for statistical significance. In short, you need enough subjects to be able to reasonably reject the null hypothesis you are testing. This approach, though, has never really been standard in experimental economics. There are two basic reasons for this - practical and theoretical. 

From a practical point of view the power rules may end up suggesting you need a lot of subjects. Suppose, for instance, you want to test cooperation within groups of 5 people. Then the unit of observation is the group. So, you need 5 subjects for 1 data point. Let's suppose that you determine you need 30 observations for sufficient power (which is a relatively low estimate). That is 30 x 5 = 150 subjects per treatment. If you want to compare 4 treatments that means 600 subjects. This is a lot of money (at least $10,000) and also a lot of subjects to recruit to a lab. In simple terms, it is not going to happen.  
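As a rough illustration of the kind of power rule involved, here is a minimal Python sketch; the effect size, significance level and power target are illustrative assumptions on my part, not figures from any particular study:

```python
# A rough two-sample power calculation, assuming a medium effect size
# (Cohen's d = 0.5), 5% significance and 80% power - all illustrative
# assumptions rather than settings from the post.
from statsmodels.stats.power import TTestIndPower

n_per_treatment = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_treatment))  # ~64 independent observations per treatment

# With 5-person groups as the unit of observation, even the post's lower
# figure of 30 observations per treatment adds up fast:
print(30 * 5)      # 150 subjects per treatment
print(30 * 5 * 4)  # 600 subjects for 4 treatments
```

With these (illustrative) assumptions a formal power calculation actually asks for over twice the 30 observations used above, which only strengthens the practical point.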

That may appear to be sloppy science but there is a valid get-out clause. Most of experimental economics is about testing a theoretical model. This allows for a Bayesian mindset in which you have a prior belief about the validity of the theory and the experimental data allows you to update that belief. The more subjects and observations you have the more opportunity to update your beliefs. But even a small number of subjects is useful in updating your beliefs. Indeed, some of the classic papers in experimental and behavioral economics have remarkably few subjects. For instance, the famous Tversky and Kahneman (1992) paper on prospect theory had only 25 subjects. That did not stop the paper becoming a classic.

Personally I am a fan of the Bayesian mindset. This mindset doesn't, though, fit comfortably with how economic research is typically judged. What we should be doing is focusing on a body of work in which we have an accumulation of evidence, over time, for or against a particular theory. In practice research is all too often judged at the level of a single paper. That incentivizes the push towards low p values and an over-claiming of the significance of a specific experiment. 

Which brings us on to the replication crisis in economics and other disciplines. A knee-jerk reaction to the crisis is to say we need ever-bigger sample sizes. But, that kind of misses the point. A particular experiment is only one data point because it is run with a specific subject pool using a specific protocol. Adding more subjects does not solve that. Instead we need replication with different subject pools under different protocols - the slow accumulation of knowledge. And we need to carefully document research protocols.

My anecdotal impression is that journal editors and referees are upping the ante on how many subjects it takes to get an experiment published (without moving things forward much in terms of documenting protocols). To put that theory to a not-at-all scientific test I have compared the papers that appeared in the journal Experimental Economics in its first year (1998) with those in the most recent issue (March 2018). Let me emphasize that the numbers here are rough-and-ready and may well have several inaccuracies. If anyone wants to do a more scientific comparison I would be very keen to see it. 

Anyway, what do we find? In 1998 the average number of subjects was 187, which includes the study of Cubitt, Starmer and Sugden where half the population of Norwich seemingly took part. In 2018 the average is 383. So, we see an increase. Indeed, only two of the 1998 studies, those of Cubitt et al. and Isaac and Walker, are above the 2018 minimum. The number of observations per treatment is also notably higher in 2018, at 46 compared to 16 in 1998. Again, those numbers are almost certainly wrong (for instance the number of independent observations in Kirchler and Palan is open to interpretation). The direction of travel, though, seems clear enough. (It is also noticeable that around half of the papers in 1998 were survey papers or papers reinterpreting old data sets. Not in 2018.)  

At face value we should surely welcome an increase in the number of observations? Yes, but only if it does not come at the expense of other things. First, we still need to encourage replication and the accumulation of knowledge. Experiments with a small number of subjects can still be useful. And we also do not want to create barriers to entry. At the top labs running an experiment is relatively simple - the money, subject pool, programmers, lab assistants, expertise etc. are there and waiting. For others it is not so simple. The more constraints we impose for an experiment to count as 'well-run' the more experimental economics may become 'controlled' by the big labs. If nothing else, that poses a potential problem in terms of variation in subject pool. Big is, therefore, not necessarily better. 

Sunday, 28 January 2018

Would you want to be an expected utility maximizer?

I have finally got around to reading Richard Thaler's fantastically wonderful book Misbehaving. One thing that surprised me in the early chapters is how Thaler backs expected utility theory as the right way to think. Deviations from expected utility are then interpreted as humans not behaving 'as they should'. While I am familiar with this basic argument it still came as a surprise to me how firmly Thaler backed expected utility theory. And I'm not sure I buy this argument. 

To appreciate the issue consider some thought experiments. Thaler gives the following example:

Stanley mows his lawn every weekend and it gives him terrible hay fever. I ask Stan why he doesn't hire a kid to mow his lawn. Stan says he doesn't want to pay the $10. I ask Stan whether he would mow his neighbor's lawn for $20 and Stan says no, of course not.

From the point of view of expected utility theory Stan's behavior makes no sense. What we should do is calculate the recompense Stan needs to mow a lawn. That he will not pay the kid $10 shows that the cost to him of mowing a lawn is less than $10. That he will not mow his neighbor's lawn shows that the cost of mowing is more than $20. But how can mowing a lawn cost less than $10 and more than $20? It cannot! His choice makes no sense.
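Spelled out as inequalities (the symbol $c$ is my shorthand, not Thaler's, for the money-equivalent cost to Stan of mowing one lawn), his two answers imply

$$c < 10 \;\; \text{(he will not pay \$10 to avoid mowing)}, \qquad c > 20 \;\; \text{(he will not mow for \$20)},$$

and no single value of $c$ can satisfy both.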

Here is another example (which I have adapted a bit):

Morgan got free tickets to a high profile NBA basketball game. Tickets are selling on the legal second hand market for $300. Morgan says that he is definitely going to the game. Asked if he would pay $200 for a ticket he says 'of course not'.

Again, from the point of view of expected utility theory Morgan's choices are nonsense. What we should do here is ask how much he values going to the game. He explicitly says that he would not be willing to pay $200. Yet by going to the game he misses out on selling his tickets for $300. So, it looks as though he values the game more than $300. But how can going to the game be worth less than $200 and more than $300? It cannot! His choices also make no sense. 

What to do with these two examples? The key question is this: Do you think Stanley and Morgan would change their choices if the 'irrationality' of their choices were explained to them? Personally, I think not. To them their behavior probably seems perfectly sensible, and who are we to argue against that? One response to this would be to 'add on' factors that influence preferences, such as 'I prefer mowing my own lawn' or 'I don't like giving away tickets'. This can rescue expected utility theory but is precisely the kind of ad-hocness that behavioral economics is trying to get away from. So, I think it is better just to accept that expected utility is not necessarily the right way to make decisions.

Does this matter? Mainstream economics is built upon the premise that expected utility theory is a good representation of how people make decisions. The work of Thaler and others has blown that idea out of the water. So, whether or not expected utility is the right way to do things is rather a moot point. There is still merit in learning how a person should behave if their preferences satisfy certain basic axioms.  

Things are more complicated when we look at the heuristics and biases approach led by Daniel Kahneman and Amos Tversky. Here the term 'bias' suggests that there is a right way to do things! It is worth clarifying, however, that much of this work relates to probabilistic reasoning where there is a clearly defined right way of doing things. I suggest that we just have to be a bit careful extending the biases terminology to choice settings where there may not be a right way of doing things. For instance, is loss aversion a bias? Put another way, is it irrational to behave in one way when things are framed as losses and another way when the same choice is framed in terms of gains? It is certainly different to how economists have traditionally modeled things. But it still seems perfectly sensible to me (and may make evolutionary sense) that someone could be influenced by the frame. Maybe, therefore, we need to talk of a status-quo effect rather than a status-quo bias. This, though, is surely more a matter of semantics than anything particularly substantive.

Things are also a bit complicated when we come to policy interventions. For instance, the nudge idea kind of suggests there is a right way of doing things that we can nudge people towards. But then there are so many situations where people make unambiguously bad choices (like failing to save for retirement) that we can still be excited about nudge without getting too caught up on whether expected utility theory is always the best way to make decisions. And remember, the basic idea behind nudge is that we should never restrict choice anyway.

So, whether or not it is rational to maximize expected utility is probably not a big issue. To use Thaler's well-known terminology: Humans may not be quite as dumb as Thaler claims, but they are undeniably very different to the Econs you find in a standard textbook.

Saturday, 30 December 2017

Behavioral economics or experimental economics?

My holiday reading started with the book Behavioral Economics: A History by Floris Heukelom. The book provides an interesting take on how behavioral economics has grown from humble beginnings to the huge phenomenon that it now is. A nice review of the book has been written by Andreas Ortmann and so I will not delve too deeply into general comment here, other than to say I enjoyed reading the book. 

But in terms of more specific comment, one theme running throughout the book is the distinction between behavioral economics and experimental economics. Heukelom makes clear that he thinks there is a very sharp distinction between these two fields. Personally I have always thought of them both as part of one big entangled blob. There are people who clearly prefer to label themselves a behavioral economist or an experimental economist but this seemed to me more a matter of personal preference than any grand design. So, what is the difference between behavioral and experimental economics?

Heukelom's viewpoint is based on a very narrow definition of experimental economics and behavioral economics. Specifically, he associates experimental economics with the work of Vernon Smith on experimental asset markets and he associates behavioral economics with the work of Kahneman, Tversky and Thaler, particularly with regard to prospect theory. The gap between research on market efficiency in the lab and that on prospect theory is indeed large. For instance, the former is more focused on market institutions and ecological rationality (i.e. how do markets work) while the latter is focused on individual decision making and individual rationality (i.e. how do people behave). So, here a neat dividing line does potentially exist.

The problem with this view is that experimental asset markets are, and long have been, only one small part of work that must surely fall under the umbrella of experimental economics. (See, for instance, the short summary on the early years of experimental economics by Alvin Roth.) Similarly, prospect theory is only one small part of work that must fall under the umbrella of behavioral economics. For example, one elephant in the room here is game theory. From its very beginnings game theory has had an experimental side which has grown alongside work on markets. For instance, experiments with the prisoner's dilemma and social dilemmas more generally began in the 1950s, if not before, and are generally seen as a big part of experimental economics. Similarly, a big part of behavioral economics has been to understand social preferences and move away from the standard economic assumption of selfishness. Indeed, the dictator game, which is now a mainstay of experimental economics, was first used by Kahneman, Knetsch and Thaler in a paper published in 1986.

In short, everything is mixed up. Other ways of trying to find a neat dividing line between behavioral and experimental economics would also seem doomed to end up with a mess. For instance, at the end of the book Heukelom associates modern behavioral economics with the use of mathematical methods. But that would seemingly exclude a host of behavioral economists, Dan Ariely to name just one, whose work is not particularly characterized by the use of mathematics. Similarly, experimental economists, like Robert Sugden and Chris Starmer, have been prominent in recent developments in prospect theory.   

This is not to say that experimental and behavioral economics are the same. Experimental economics is characterized by a method of doing things - namely experiments - while behavioral economics (although much harder to tie down) is more characterized by an objective to understand how people reason in economic contexts. The trouble is it is hard to see how the one can be done without the other. Pushed to the limits it may be possible to study experimental markets without being bothered with individual behavior. Or to work on individual behavior without recourse to lab or field experiments. The truth, though, is surely that the two go very much hand in hand and, given that we are talking about the history of behavioral economics, always have done.   

An interesting question is how things will develop in the future. Both the terms experimental and behavioral economics essentially refer to methods. In the infancy of something like experimental economics it is natural that someone doing experiments would use a label like experimental economics to distinguish what they are doing. But the more routine it becomes for the 'average' economist to use experiments or draw on behavioral theory the less relevant the labels would seem to be. Instead we could gravitate towards a focus on applications with more use of labels like public, labor and development economics. Behavioral economics is, though, presumably too much of a buzz phrase for that to happen any time soon. 

Wednesday, 8 November 2017

Rank dependent expected utility

Prospect theory is most well known for its assumption that gains are treated differently to losses. Another crucial part of the theory, namely that probabilities are weighted, typically attracts much less attention. Recent evidence, however, is suggesting that probability weighting has a crucial role to play in many applied settings. So, what is probability weighting and why does it matter?

The basic idea of probability weighting is that people tend to overestimate the likelihood of events that happen with small probability and underestimate the likelihood of events that happen with medium to large probability. In their famous paper on 'Advances in prospect theory', Amos Tversky and Daniel Kahneman quantified this effect. They fitted experimental data to the equation

$$\pi(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}$$
where γ is a parameter to be estimated. In interpretation, p is the actual probability and π(p) the weighted probability. The figure below summarizes the kind of effect you get. Tversky and Kahneman found that a value of γ around 0.61 best matched the data. This means that something which happens with probability 0.1 gets a decision weight of around 0.2 (overweighting of small probabilities) while something that happens with probability 0.5 gets a decision weight of only around 0.4 (underweighting of medium to large probabilities).  

Why we observe this warping of probabilities is unclear. But the consequences for choice can be important. To see why consider someone deciding whether to take on a gamble. Their choice is either to accept £10 for certain or gamble and have a 10% chance of winning £90 and a 90% chance of winning nothing. The expected value of this gamble is 0.1 x 90 = £9. So, it does not look like a good deal. But, if someone converts a 10% probability into a decision weight of 0.2 we get value 0.2 x 90 = £18. Suddenly the gamble looks great! Which might explain the appeal of lottery tickets.
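For concreteness, here is a minimal Python sketch of the weighting function above with the Tversky and Kahneman estimate γ = 0.61 (the decision weight of 0.2 quoted in the text is a rounding; the formula itself gives roughly 0.19):

```python
# Tversky-Kahneman (1992) probability weighting with gamma = 0.61.
def pi(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

print(round(pi(0.1), 2))  # 0.19 - small probabilities are overweighted
print(round(pi(0.5), 2))  # 0.42 - medium probabilities are underweighted

# The gamble: a 10% chance of winning £90, nothing otherwise.
print(0.1 * 90)                # expected value: £9.0
print(round(pi(0.1) * 90, 2))  # weighted value: ~£16.77 (£18 with the
                               # rounded decision weight of 0.2)
```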

There is, though, a problem. It is not enough to simply weight all probabilities. This, as I will shortly explain, doesn't work. So, we need some kind of trick. While prospect theory was around in 1979 it was not until the early 1990s that the trick was found. That trick is rank dependent weighting. The gap of over 10 years in finding a way to deal with probabilities may help explain why probability weighting has had to play second fiddle to loss aversion. Let's, though, focus on the technical details.

Consider again the original example. Here there are no obvious problems if we just weight probabilities. The 10% chance of winning is converted into a 0.2 decision weight while the 90% chance of losing is converted into a 0.7 decision weight. The overall expected value is then 0.2 x £90 = £18. Everything looks fine.

So, consider another example. Suppose that the sure £10 is now a gamble with a 10% chance of winning £10.09, a 10% chance of winning £10.08, a 10% chance of winning £10.07, and so on, down to a 10% chance of winning £10. If we simply weight all these 10% probabilities as 0.2 then we get an expected value of 0.2 x 10.09 + 0.2 x 10.08 + ... + 0.2 x 10 = £20.09. This is absurd. A gamble that essentially gives £10 cannot be worth over £20! You might say that the problem here is we have ended up with a combined weight of 2. If, though, we normalize weights to 1 we will not have captured the over-weighting of small probabilities. So, normalizing is not, of itself, a solution. 

The problem with the preceding approach is that we have weighted everything - good or bad - by the same amount. Rank dependent weighting does away with that. Here we rank outcomes from best to worst. The decision weight we place on an outcome is then the weighted probability of the outcome or something better minus the weighted probability of something better.
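In symbols (my notation, not the post's): rank the outcomes $x_1 \ge x_2 \ge \dots \ge x_n$ with probabilities $p_1, \dots, p_n$; the decision weight on outcome $x_i$ is then

$$w_i = \pi\left(p_1 + \dots + p_i\right) - \pi\left(p_1 + \dots + p_{i-1}\right),$$

where $\pi$ is the weighting function from earlier.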

In our original gamble the best outcome is £90 and the worst is £0. The weight we put on £90 is around 0.2 because there is a 10% chance of £90, no chance of anything better, and a 10% probability is given weight 0.2. The weight we put on £0 is 0.8 because it is the weighted probability of £0 or better, namely 1, minus the weighted probability of £90, namely 0.2. So, not much changes in this example.

In the £10 gamble the best outcome is £10.09, the next best £10.08, and so on. The decision weight we put on £10.09 is around 0.2 because there is a 10% chance of £10.09 and no chance of anything better. Crucially, the weight we put on £10.08 is only around 0.1 because we have the weighted probability of £10.08 or better, a 20% chance that gives weight around 0.3, minus the weighted probability of £10.09, around 0.2. You can verify that the chance of winning £10.07, £10.06 and so on has an even lower decision weight. Indeed, decision weights have to add to 1 and so the high weight on £10.09 is compensated by a lower weight on other outcomes. For completeness, the sketch below computes the exact weights you would get with the Tversky and Kahneman parameters. Given that decision weights have to add to 1 the expected value is going to be around £10. Common sense restored!
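This is a minimal sketch of my own, reusing the pi function defined in the earlier snippet:

```python
def pi(p, gamma=0.61):  # as in the earlier sketch
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

outcomes = [round(10.09 - 0.01 * i, 2) for i in range(10)]  # best to worst

# Naive weighting values the gamble at ~£18.7 (£20.09 with the rounded
# weight of 0.2) - absurd for a gamble that essentially pays £10.
print(round(sum(pi(0.1) * x for x in outcomes), 2))

# Rank dependent weighting: weighted probability of the outcome or
# better, minus the weighted probability of something strictly better.
weights, cum = [], 0.0
for _ in outcomes:
    weights.append(pi(cum + 0.1) - pi(cum))
    cum += 0.1

for x, w in zip(outcomes, weights):
    print(f"£{x:.2f}: {w:.3f}")  # ~0.19 on £10.09, ~0.29 on £10.00,
                                 # only ~0.05-0.07 in the middle
print(round(sum(weights), 2))    # 1.0 - the weights sum to one
print(round(sum(w * x for w, x in zip(weights, outcomes)), 2))  # ~£10.04
```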

Generally speaking, rank dependent weighting means that we capture, and only capture, over-weighting of the extreme outcomes. So, we capture the fact a person may be overly optimistic about winning £90 rather than £0 without picking up the perverse prediction that every unlikely event is over-weighted. The discussion so far has focused on gains but we can do the same thing with losses. Here we want to capture, and only capture, over-weighting of the worst outcomes. 

So why does all this matter? There is mounting evidence that weighting of probabilities can explain a lot of behavior, including the equity premium puzzle, long shot bias in betting and willingness of households to buy insurance at highly unfavorable premiums. For a review of the evidence see the article by Helga Fehr-Duda and Thomas Epper on 'Probability and risk: Foundations and economic implications of probability-dependent risk preferences'. It is easy to see, for instance, why overweighting of small probabilities could have potentially profound implications for someone's view of insurance. A very small probability of loss may be given a much higher decision weight. That makes insurance look like a good deal.  

Tuesday, 10 October 2017

Richard Thaler and the Nobel Prize for behavioral economics

Officially, Richard Thaler won the Nobel Prize in Economics because he 'has incorporated psychologically realistic assumptions into analyses of economic decision-making. By exploring the consequences of limited rationality, social preferences, and lack of self-control, he has shown how these human traits systematically affect individual decisions as well as market outcomes'. 

An interesting thing about this quote is that nudge doesn't get a mention; indeed, it only just scrapes into the Academy's official press release. (In the more detailed popular information document it doesn't appear until page 5 of 6.) This is in stark contrast to the popular press: the BBC leads with 'Nudge' economist wins Nobel Prize, the Telegraph leads with 'Nudge' guru wins the Nobel Prize, and so on. To read the papers you would think that Nudge is all there is to it.

There is no doubt that Nudge has been a huge success and made Thaler famous (at least by economist standards). In terms of the Nobel prize, however, it is important to recognize that Nudge is just one of the many, many contributions Thaler has made to economics, and behavioral economics. Let me pick up three of those contributions here.

1. Thaler showed how dumb people can be when making economic decisions. The likes of Herbert Simon, Amos Tversky and Daniel Kahneman paved the way by showing that people can make decisions that are inconsistent with the standard way economists think about things. They, though, typically considered settings that are pretty complex, such as search, choice under risk or how to interpret information. Thaler took this one stage further and showed that even for the most basic of economic decisions the standard economic model can go astray. 

Consider, by way of illustration, the following example from the classic paper on 'mental accounting and consumer choice':

Mr. S admires a $125 cashmere sweater at the department store. He declines to buy it, feeling that it is too extravagant. Later that month he receives the same sweater from his wife for a birthday present. He is very happy. Mr. and Mrs. S have only joint bank accounts.  

Standard economic theory says that the sweater is either worth $125 or not. But, there seems nothing extraordinary about Mr. S's behavior. To provide a framework within which to make sense of this, and much else, Thaler introduced the notion of mental accounting where we code gains and losses, evaluate purchases and observe budgetary rules. Mr. S would be breaking self-imposed rules to spend $125 from his 'everyday account' but an occasional gift funded from the 'gift account' is to be enjoyed. 

Once we see how easily the framing of a choice can influence behavior it is a relatively short step to Nudge and the idea that framing can be used to positively change behavior. (Crucial in this is also the recognition that people can have self-control problems.)  

2. As well as dumb, people can also be nice, and not so nice. In many ways economists have clung to the notion of selfishness for much longer than that of rationality. Work by Thaler helped turn the tide. Two papers with Daniel Kahneman and Jack Knetsch on 'Fairness as a constraint on profit seeking' and 'Fairness and the assumptions of economics' are particularly noteworthy. In the first paper we get a series of questions like the following:

A hardware store has been selling snow shovels for $15. The morning after a large snowstorm, the store raises the price to $20. Please rate this action as: Completely fair, acceptable, unfair, very unfair.

82% of subjects considered it unfair. Presumably that means they may decide not to buy the snow shovel; fairness matters. In the second paper we get some big advances in the study of the ultimatum game (first use of the strategy method to look at willingness to reject and first look at willingness of a third party to punish) and we see the dictator game for the first time. This may sound a bit technical but it was part of opening up the whole debate on how fairness works and can be modeled by economists.

3. Popularization is not the kind of thing that wins Nobel prizes, but it can be important in driving things forward. In a series of articles published in the Journal of Economic Perspectives (and subsequently turned into the book The Winner's Curse) Thaler and co-authors set out some of the key insights of behavioral economics. I will quote in full the introduction to one of the articles:

Economics can be distinguished from other social sciences by the belief that most (all?) behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets that (eventually) clear. An empirical result qualifies as an anomaly if it is difficult to "rationalize," or if implausible assumptions are necessary to explain it within the paradigm. This column presents a series of such anomalies. Readers are invited to suggest topics for future columns by sending a note with some reference to (or better yet copies of) the relevant research. Comments on anomalies printed here are also welcome. After this issue, the "Anomalies" column will no longer appear in every issue and instead will appear occasionally, when a pressing anomaly crosses Dick Thaler's desk. However, suggestions for new columns and comments on old ones are still welcome. Thaler would like to quash one rumor before it gets started, namely that he is cutting back because he has run out of anomalies. Au contraire, it is the dilemma of choosing which juicy anomaly to discuss that takes so much time.

The interesting thing about this is the target audience. This is about trying to convince economists that behavioral economics matters and should be taken seriously. That is a very hard sell indeed! But ultimately it seems to have worked.

With any Nobel prize there are going to be critics. And I can already hear some grumbles. But, that seems to come more from ignorance than judgement. If we take Nudge out of the equation the contributions of Thaler are clear enough. With Nudge there is undeniably a lot of hyperbole from some policy makers and consultants. The simple truth, however, is that it has made a positive difference to policy making. That is worth celebrating. 

Thursday, 7 September 2017

Honesty around the world

In my last post I looked at dishonesty in the banking industry. Sticking with a similar theme, this time I will look at dishonesty across different countries.

Let us start with a study by David Pascual-Ezama and a long list of co-authors on 'Context dependent cheating: Experimental evidence from 16 countries'. They asked 90 students in 16 different countries to perform a very simple task: toss a coin with a black side and a white side and record the outcome. If the coin came up white the student obtained a red Lindt Lindor Truffle. If it came up black they got nothing. Crucially, the coin toss took place in private and so the student could report whatever outcome they wanted. If they wanted a chocolate then they simply had to report white. (The study contrasted three different methods of reporting - form put in a box, form given to the experimenter or verbally telling the experimenter - but I will skip those details here.)

The chart below summarizes the country-wide outcomes by focusing on the proportion of the 90 students in each country that 'won' the chocolate. The blue bars give the distribution we would predict if the students reported honestly. As you would expect the distribution is centered on a 50-50 success rate. Compared to this benchmark students were remarkably lucky. In all countries more than 50% of students won the chocolate and in some, such as Spain, the success rate was much higher than seems plausible. So, some students were dishonest (and hungry). Note, however, that the success rates are nowhere near the 100% we would expect if all students lied. So, many students were honest (or not so hungry). Indeed, we could conclude that most students were honest. There is also no compelling evidence of differences across countries. Spaniards won more than Danes but then someone has to come top and someone bottom. The differences we see here are not particularly large.  
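For intuition on that honest benchmark: the number of lucky winners out of 90 honest coin tosses is just a binomial random variable. A quick sketch (scipy is my choice of tool here, not the paper's):

```python
# Under honest reporting the number of 'winners' out of 90 students
# is Binomial(n=90, p=0.5).
from scipy.stats import binom

n, p = 90, 0.5
print(binom.mean(n, p))        # 45.0 winners on average
print(binom.ppf(0.025, n, p),  # ~36 and ~54: the central 95% range,
      binom.ppf(0.975, n, p))  # i.e. success rates of roughly 40-60%
```

Success rates far above 60% are therefore hard to square with honest reporting, which is why a country like Spain stands out.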

Consider next a study by David Hugh-Jones on 'Honesty, beliefs about honesty, and economic growth in 15 countries'. In this case the subject pool in each country was a sample of the general population selected by a survey company and the prize was either $3 or $5 rather than a chocolate. (The study also involved other measures of dishonesty and beliefs about dishonesty but I'll skip those here.) The findings are summarized in the next figure. The main thing to note is that we get a big swing to the right in those who 'won'. In other words there was a lot more dishonesty in this study. Moreover, the amount of dishonesty significantly varied across countries. Just how much we can read into this variation is not clear. For instance, the US and Canada come out as relatively dishonest but that may reflect a willingness to 'game' the experiment rather than a predisposition to dishonesty in general life. Even so, it is shown that honesty correlates with GDP per capita and the proportion of the population that is Protestant. This hints at cultural roots of honesty.

Which brings us to the final study I will mention, by Simon Gachter and Jonathan Schulz on 'Intrinsic honesty and the prevalence of rule violations across societies'. In this study students from 23 countries were asked to roll a six-sided die and report the outcome. Reporting a 1 earned 1 unit of payment (e.g. £0.50 in the UK), a 2 earned 2 units and so on up to 5 which earned 5 units, but reporting a 6 earned 0. Note that in this experiment a subject can lie 'a little' by, say, reporting 4 instead of 2 or lie 'a lot' by reporting 5 instead of 6. If subjects were honest the expected payment would be 2.5 units. If they all lied a lot the payment would be 5. As the figure below shows average payments were well above 2.5 and so there is evidence of dishonesty. Note, however, that payments were well below 5 and so there is, again, lots of honesty as well.

Cross-country differences are not particularly stark in the figure above. But another thing to consider is the proportion of subjects who reported a 6. Recall that this meant a payoff of 0 and so there was a strong incentive to lie 'a little' and get some payoff. (Indeed, to not report a 6 would seem analogous to misreporting the toss of a coin.) If subjects were honest around 16% should get 0. As the figure below shows in some countries, like Germany, subjects were very honest but in others, like Tanzania, they were not. And evidence for differences across countries is pretty strong. Overall, it is shown that cross-country differences correlate strongly with an index of the prevalence of rule violations which captures things like corruption, tax evasion and fraudulent politics. This again points the finger at culture but also brings in the related issue of institutions. Countries with weak institutions see more dishonesty.
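The honest benchmarks in the last two paragraphs are simple arithmetic, but for completeness here is the calculation (payoff units as described above):

```python
payoffs = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 0}  # payoff by reported roll

print(sum(payoffs.values()) / 6)  # 2.5 - expected payment if all honest
print(max(payoffs.values()))      # 5 - payment if everyone lies 'a lot'
print(round(100 / 6))             # ~17% should report a 6 if honest
                                  # (the 'around 16%' quoted above)
```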

So, what to take from all this? One message I would take is that people are, on average, very honest. In all three studies I have discussed more subjects behaved honestly than dishonestly. And let's recall that it was pretty easy for a subject to lie in these studies, both in a practical sense - they just needed to misreport - and in a moral sense - this was not robbing money from an old lady. It seems, therefore, that people the world over are pretty honest. But, that does not mean that dishonesty is not a problem. In my last post we saw that culture in the banking industry might encourage dishonesty. Here we see that culture in society might lead to greater dishonesty. A little bit of dishonesty can have large negative economic consequences.  

Sunday, 27 August 2017

Culture and dishonesty in banking

The film 'A Good Year' starts with a ruthless financial trader called Max, played by Russell Crowe, manipulating bond markets in order to out-maneuver his competitors and make a quick, big profit. But, by the end of the film Max has decided to pack it all in and live out a more fulfilling life in rural France. Could that happen? Can someone really transition from a ruthless, selfish trader to a compassionate, loving family man in the space of a few days?

A study by Alain Cohn, Ernst Fehr and Michel Marechal, published in 2014 in Nature, suggests it might be possible. They used a standard coin-tossing task to measure the dishonesty of 128 employees from a large, international bank. The task works as follows: A subject is asked to toss a coin 10 times and record whether the outcome was heads or tails. Depending on the outcome the subject can win $20 per toss. The crucial thing to know is that the subject records whether or not they won for each toss and there is no way for the experimenter to verify if the outcome is recorded correctly. So, the subject privately fills in a simple win/lose table for the 10 tosses. This means a subject could 'easily' lie and walk away with $200.

The crucial twist in the experiment was to vary the priming subjects faced before performing the coin-tossing task. Roughly half of the subjects were asked questions related to their work in the bank - Why did you decide to become a bank employee? What are the three major advantages of your occupation as a bank employee? Which three characteristics of your personality do you think are typical for a bank employee? etc. The other half of the subjects were asked questions not related to their work - What is your favorite leisure activity? Where did you spend your last vacation? Which three things did you like most about your last vacation? etc. 

So, to the results. The figure below shows what happens for subjects not primed to think about work in the bank. The blue bars show the observed distribution of earnings and the green bars show the distribution of earnings expected by pure chance. We can see some hints of dishonesty - there are fewer subjects than we would expect earning $40 or less and more earning $200. But, these are small things. The overall picture is that the bankers were honest.   

Things change when subjects were primed to think about work in the bank. The distributions are shown below. Here we see a sizable increase in the amount of money being claimed. Needless to say, this is highly unlikely to be due to chance. It can be estimated that around 26% of subjects were dishonest. Let us keep in perspective that this means 74% were honest. Even so, the headline result is that bankers only exhibit dishonesty when they are primed to think about banking.

This finding feeds into a general debate about whether dishonesty is a personal trait or a product of culture. The results we have looked at here suggest that dishonesty has a large cultural component. That would make it more likely a banker can be ruthless in his job and then help old ladies across the road in his spare time. It is hard to imagine, however, that culture is the only factor at work here because we do know that there are reliable personal differences in dishonesty and willingness to cooperate. It is surely not by chance that some become investment bankers and others pediatricians. An interesting and closely related debate is whether studying economics makes people more selfish (culture at play) or whether more selfish people choose to study economics (personal traits at play). An article by Adam Grant provides a nice overview of the issues.