
How to create (dis)honesty in the experimental lab

The willingness of skiers to leave their skis lying about outside mountain restaurants has intrigued me ever since I was a child. Skis are expensive, there is a vibrant second-hand market, and given that most skis look similar it would be easy enough to pull the 'sorry, I thought they were mine' trick. It would seem so straightforward to 'make a business' out of stealing skis! Yet skiers still leave their skis lying about. And in Switzerland they seem to leave everything about: helmet, boots, rucksack. It is the equivalent of students at the university canteen leaving all their iPhones, laptops and so on lying about outside (without any passwords).
          That skiers are trusting, and seemingly deserving of that trust, is wonderful. It frees everyone up to enjoy the mountains rather than worry about how to padlock their skis to a slopestyle rail. Our willingness to trust is, though, a real challenge to standard economic theory. The potential gain from stealing a pair of skis is positive for pretty much any level of risk aversion. So, we should all be out stealing skis. But we're not! Why?
          The issue of lying and deception has largely gone under the radar of economists for a long, long while. Fortunately, the work of Dan Ariely and others has done a lot to change that in recent years. And dishonesty is now something of a hot topic. For instance, the recent Nature article by Alain Cohn, Ernst Fehr and Michel Marechal on dishonesty in business culture got a lot of media coverage. But, many intriguing questions about honesty and dishonesty remain. And I'm not convinced the current methods used to measure (dis)honesty are as good as they might be.
        To help explain my scepticism it is useful to take a step back and look at a study on dishonesty I did with a student, Matheus Menezes, published in Economics Letters. We were interested in whether competition increases dishonesty. Everyone seemed to assume it should, but we demonstrated it need not - either in theory or in practice. Basically, in a very competitive environment there is little chance of winning even if you cheat, and so there is little point in cheating. This gives rise to an inverse-U-shaped relationship in which cheating is highest at intermediate levels of competition.
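The inverse-U logic can be illustrated with a small Monte Carlo sketch. This is my own illustration, not the model from the paper: the number of subjects, the 50% chance that each rival cheats, and the quiz parameters are all invented. With ten subjects competing for k prizes, we estimate how much reporting the maximum score raises a subject's chance of winning:

```python
import random

random.seed(7)

N = 10           # subjects per session (assumed)
TRIALS = 10000   # Monte Carlo draws per estimate
Q = 0.5          # assumed probability that each rival cheats
MAX_SCORE = 10   # quiz length; a cheater simply reports the maximum

def true_score() -> int:
    # honest score: 10 questions, 50% chance of knowing each answer
    return sum(1 for _ in range(MAX_SCORE) if random.random() < 0.5)

def wins(my_report: int, k: int) -> bool:
    """Does my report land in the top k of N reports (random tie-break)?"""
    rivals = [MAX_SCORE if random.random() < Q else true_score()
              for _ in range(N - 1)]
    ahead = sum(1 for r in rivals if r > my_report)
    tied = sum(1 for r in rivals if r == my_report)
    # my rank among the tied group is uniform at random
    return ahead + random.randint(0, tied) < k

def gain_from_cheating(k: int) -> float:
    """Extra win probability from reporting MAX_SCORE instead of the truth."""
    cheat = sum(wins(MAX_SCORE, k) for _ in range(TRIALS)) / TRIALS
    honest = sum(wins(true_score(), k) for _ in range(TRIALS)) / TRIALS
    return cheat - honest

for k in (1, 3, 5, 7, 9, 10):
    print(f"{k} prizes among {N} subjects: gain from cheating = "
          f"{gain_from_cheating(k):.3f}")
```

With these assumed parameters the gain is zero when everyone wins a prize, small under the fiercest competition (one prize, and rival cheats crowd the top), and largest in between - the inverse-U.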
         In our experimental study we used a standard way of measuring (dis)honesty: we got subjects to self-report how many answers they got correct on a multiple-choice general knowledge quiz. Because subjects self-report, they can cheat 'undetected' by inflating the number of correct answers. But because we know roughly the probability of people knowing the correct answer, we can detect cheating at an aggregate level. Most studies of dishonesty follow this approach, perhaps substituting some other random device for our quiz, such as rolling a die and counting sixes.
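Here is a minimal sketch of how aggregate-level detection works, using the die-rolling version of the task (the sample size, cheat rate and size of the inflation are all invented for illustration). Honest reports should average ROLLS/6 sixes per subject, so any excess in the sample mean reveals cheating even though no individual cheater can be identified:

```python
import random

random.seed(1)

N_SUBJECTS = 200   # hypothetical sample size
ROLLS = 10         # die rolls per subject
CHEAT_RATE = 0.3   # assumed fraction of subjects who inflate their report
INFLATION = 2      # extra 'sixes' an inflating subject adds

def report(cheater: bool) -> int:
    """A subject's self-reported number of sixes out of ROLLS fair rolls."""
    sixes = sum(1 for _ in range(ROLLS) if random.randint(1, 6) == 6)
    return min(ROLLS, sixes + INFLATION) if cheater else sixes

reports = [report(random.random() < CHEAT_RATE) for _ in range(N_SUBJECTS)]

expected_mean = ROLLS / 6                   # prediction under full honesty
observed_mean = sum(reports) / N_SUBJECTS
excess = observed_mean - expected_mean
print(f"expected {expected_mean:.2f}, observed {observed_mean:.2f}, "
      f"excess {excess:.2f}")
```

The excess in the sample mean (roughly CHEAT_RATE times INFLATION in expectation) measures cheating for the group as a whole, which is exactly the aggregate-level detection described above.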
         In a standard set-up, like that of Alain Cohn and co-authors, subjects get paid for each 'correct' answer. Clearly this provides an incentive to exaggerate the number of correct answers. But how good a measure of dishonesty is that? By cheating, the subject takes money off the experimenter. And the experimenter has deliberately set things up this way. The subject, therefore, may consider it fair game to cheat. Taking money off an experimenter who 'asks you to do it' is a world away from robbing an old lady.
          The set-up we used in our study was theoretically immune to this criticism. In our case a set number of people would win a prize. For example, in one treatment the two subjects with the most correct answers got a prize (with a random device to deal with ties). In this case a subject who cheated did not take money off us. Instead they took money off another subject: a subject who cheated their way into the top two denied a prize to the subject with the third-most correct answers. Even so, to win a prize it was pretty much essential to cheat, and so we end up with cheats taking money off other cheats. Again, this is a world away from robbing an old lady.
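The difference between the two payment schemes comes down to simple accounting, sketched below with made-up numbers (the prize value, piece rate and reports are all illustrative). Under a piece rate the experimenter's outlay rises with every inflated report; in a fixed-prize tournament the outlay is constant, so cheating only redistributes money between subjects:

```python
PRIZE = 10.0   # assumed prize value
RATE = 0.5     # assumed payment per 'correct' answer
K = 2          # number of prizes, as in the treatment described above

def piece_rate_cost(reports):
    """Experimenter's outlay under pay-per-correct-answer."""
    return RATE * sum(reports)

def tournament_cost(reports):
    """Experimenter's outlay under a fixed-prize tournament."""
    return PRIZE * K  # independent of what anyone reports

honest = [4, 5, 6, 7]          # hypothetical truthful reports
inflated = [10, 10, 10, 10]    # everyone claims a perfect score

print(piece_rate_cost(honest), piece_rate_cost(inflated))   # 11.0 20.0
print(tournament_cost(honest), tournament_cost(inflated))   # 20.0 20.0
```

Only the piece-rate cost responds to inflation; the tournament pays out the same 20.0 either way, which is why a cheat there takes money from a fellow subject rather than from the experimenter.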
          We know the standard measure of (dis)honesty must be picking up something, because our study and others have found strong treatment effects - willingness to cheat depends on incentives. I'm sceptical, however, about how much of this is down to dishonesty. That scepticism stems largely from a conversation I had with one of the subjects after our study. He was openly willing to admit he had cheated because 'that was the game'. This is a person I would trust to be an honest, cooperative type. His willingness to say he had lied presumably testifies to that! With this in mind, I think the standard measure of dishonesty may be picking up a general willingness 'to play the game'. And that is likely to correlate with intelligence just as much as with dishonesty.
          So, I would suggest there is work left to be done before we can convincingly say we have captured dishonesty in the lab. And until we do that it will be hard to know why most people, including skiers, are such an honest bunch.

 
