
How to create (dis)honesty in the experimental lab

The willingness of skiers to leave their skis lying about outside of mountain restaurants has always intrigued me, even as a young child. Skis are expensive, there is a vibrant second-hand market, and given that most skis look similar it would be easy enough to pull the 'sorry, I thought they were mine' trick. It would just seem so straightforward to 'make a business' out of stealing skis! Yet skiers still leave their skis lying about. And in Switzerland they seem to leave everything about - helmet, boots, rucksack. It is the equivalent of students at the university canteen leaving all their iPhones, laptops and so on lying about outside (without any passwords).
          That skiers are trusting, and seemingly deserving of that trust, is wonderful. It frees everyone up to enjoy the mountains rather than worry about how to padlock their skis to a slopestyle rail. Our willingness to trust is, though, a real challenge to standard economic theory. The potential gain from stealing a pair of skis is positive for pretty much any level of risk aversion. So, we should all be out stealing skis. But we're not! Why?
          The issue of lying and deception has largely gone under the radar of economists for a long, long while. Fortunately, the work of Dan Ariely and others has done a lot to change that in recent years. And dishonesty is now something of a hot topic. For instance, the recent Nature article by Alain Cohn, Ernst Fehr and Michel Marechal on dishonesty in business culture got a lot of media coverage. But, many intriguing questions about honesty and dishonesty remain. And I'm not convinced the current methods used to measure (dis)honesty are as good as they might be.
        To help explain my scepticism it is useful to take a step back and look at a study on dishonesty I did with a student, Matheus Menezes, published in Economics Letters. We were interested in whether competition increases dishonesty. Everyone seemed to assume it should, but we demonstrated it need not - either in theory or in practice. Basically, in a very competitive environment there is little chance of winning even if you cheat, and so there is little point in cheating. This gives rise to an inverse-U-shaped relationship where cheating is highest for intermediate levels of competition.
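          To see why the relationship bends, here is a toy Monte Carlo sketch in Python. It is not the model from our paper: one contestant decides whether to inflate a self-reported score, a fixed number of prizes go to the highest reports, ties are broken at random, and the rivals' cheating rate, the number of prizes and the scores are all made-up parameters for illustration. Even so, the gain in win probability from cheating is essentially zero with a single rival, peaks with a handful of rivals, and fades away as the field gets crowded - the inverse-U.

import numpy as np

rng = np.random.default_rng(1)

N_PRIZES = 2            # number of prizes on offer (assumed)
MY_TRUE_SCORE = 5       # my honest score out of 10 (assumed)
RIVAL_CHEAT_RATE = 0.3  # assumed share of rivals who inflate to the maximum
TRIALS = 50_000

def win_prob(my_report, n_rivals):
    """Monte Carlo probability that my report places in the top N_PRIZES."""
    # Each rival either inflates to 10 or honestly reports a score from 0-9.
    cheats = rng.random((TRIALS, n_rivals)) < RIVAL_CHEAT_RATE
    reports = np.where(cheats, 10, rng.integers(0, 10, size=(TRIALS, n_rivals)))
    better = (reports > my_report).sum(axis=1)
    tied = (reports == my_report).sum(axis=1)
    prizes_left = N_PRIZES - better
    # Win outright if enough prizes remain for everyone tied with me,
    # otherwise win a fair lottery among the tied reports.
    outright = tied < prizes_left
    lottery = ~outright & (prizes_left > 0) & (rng.random(TRIALS) < prizes_left / (tied + 1))
    return (outright | lottery).mean()

for n_rivals in [1, 2, 5, 10, 25, 50, 100]:
    gain = win_prob(10, n_rivals) - win_prob(MY_TRUE_SCORE, n_rivals)
    print(f"{n_rivals:3d} rivals: gain in win probability from cheating = {gain:.2f}")

The mechanism is simply that once the field is large enough, even a maximal lie leaves you in a lottery against all the other maximal liars, so cheating buys you very little.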
         In our experimental study we used a standard way of measuring (dis)honesty: we got the subjects to self-report how many answers they got correct on a multiple choice general knowledge quiz. Because subjects self-report, they can cheat 'undetected' by falsifying the number of correct answers. But because we know roughly the probability of someone knowing the correct answer, we can detect cheating at the aggregate level. Most studies of dishonesty follow this approach, potentially substituting our quiz for some other random device, like rolling a die and counting sixes.
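          To see how the aggregate-level detection works, take the dice version. A minimal sketch, with entirely made-up numbers: suppose 200 subjects each roll a die in private and are paid only if they report a six. Honesty implies roughly a sixth of them should say yes, so an excess of reported sixes shows up in a simple binomial test even though no individual liar can ever be pointed at. The quiz works the same way, with the (roughly known) probability of a correct answer in place of one sixth.

from scipy.stats import binomtest

# Hypothetical die-under-cup data: every number here is made up.
n_subjects = 200
reported_sixes = 58   # honest reporting would give about 200/6 = 33 on average

# Under honesty the count of sixes is Binomial(n, 1/6); test for an excess.
result = binomtest(reported_sixes, n_subjects, p=1/6, alternative='greater')
print(f"expected about {n_subjects / 6:.0f} sixes, observed {reported_sixes}, "
      f"one-sided p-value = {result.pvalue:.4f}")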
         In a standard set-up, like that of Alain Cohn and co-authors, subjects get paid for each 'correct' answer. Clearly this provides an incentive to exaggerate the number of correct answers. But how good a measure of dishonesty is that? By cheating, the subject takes money off the experimenter. And the experimenter has deliberately set things up this way. The subject, therefore, may consider it fair game to cheat. Taking money off an experimenter who 'asks you to do it' is a world away from robbing an old lady.
          The set-up we used in our study was, in theory, immune to this criticism. In our case a set number of people would win a prize. For example, in one treatment the two subjects with the most correct answers got a prize (with a random device to deal with ties). In this case a subject who cheated did not take money off us. Instead they took money off another subject: the subject with the most correct answers denies the subject with the third most correct answers a prize. Even so, to get a prize it was pretty much essential to cheat, and so we end up with cheats taking money off other cheats. Again, this is a world away from robbing an old lady.
          We know the standard measure of (dis)honesty must be picking up something, because our study and others have found strong treatment effects - willingness to cheat depends on incentives. I'm sceptical, however, about how much of this is down to dishonesty. That scepticism stems largely from a conversation I had with one of the subjects after our study. He was openly willing to admit he had cheated because 'that was the game'. This is a person I would trust to be an honest, cooperative type. His willingness to say he had lied presumably testifies to that! With this in mind, I think the standard measure of dishonesty may be picking up a general willingness 'to play the game'. And that is likely to correlate with intelligence just as much as dishonesty.
          So, I would suggest there is work left to be done before we can convincingly say we have captured dishonesty in the lab. And until we do, it will be hard to know why most people, including skiers, are such an honest bunch.

