Sunday, 12 May 2019

Some estimates of cross-price elasticity


The final part of this exciting trilogy is cross-price elasticity. (See here for estimates of own price and income elasticity.) Here we are looking at how demand for one product, say cars, is influenced by the price of another product, say petrol. The idea is to find a spread of examples, from goods that are close substitutes (with a large positive cross-price elasticity, around 1) to strong complements (with a large negative elasticity, around -1).
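To make the definition concrete, here is a minimal sketch with made-up percentage changes; none of the numbers in the snippet come from the studies discussed below.

```python
# Cross-price elasticity = % change in quantity of good A / % change in price of good B.
# Positive -> substitutes, negative -> complements. All figures below are invented.
def cross_price_elasticity(pct_change_quantity_a, pct_change_price_b):
    return pct_change_quantity_a / pct_change_price_b

# Petrol price up 10%, car journeys down 3%: elasticity -0.3, so complements.
print(cross_price_elasticity(-3, 10))
# Bus fares up 10%, car journeys up 2%: elasticity 0.2, so substitutes.
print(cross_price_elasticity(2, 10))
```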

Within the literature there are a lot more examples of substitutes, like cars and public transport, than of complements, like cars and petrol. Indeed, it was a bit of a struggle to find any complements. Here are the examples I converged on:



The book versus culture number is taken from the study by Ringstad and Loyland. The numbers for organic food are taken from the report by Bunte and co-authors on Dutch data. Those for alcohol are from a UK study by Meng and co-authors.



For numbers on public transport in the UK there is a study by Paulley and co-authors.


The cocaine number is taken from the study by Petry. That brings us on to food. I was expecting to easily find numbers for food, and to get some examples of complements. But my impression is that things have not really moved on much since the work of Angus Deaton in the 1980s. So, why not stick with those numbers, here taken from a study using Indonesian data.




Saturday, 11 May 2019

Some estimates of income elasticity of demand

My previous blog looked at estimates of own price elasticity of demand. Now the focus moves on to estimates of income elasticity of demand. In a sense income elasticity should be easier to measure than price elasticity because there is more variation in income than in price. But I actually found it a lot harder to come by income elasticities in the literature.

And it was particularly difficult to get a nice spread of elasticities. Ideally we want some examples of luxury goods (with elasticity more than 1), normal goods (more than 0) and inferior goods (less than 0). The large majority of the examples I could find fell in the 0.3-0.8 range. My rough interpretation of the literature is that 'simple' estimates tended to suggest things like eating out and health care were highly income elastic, but more detailed work has brought the numbers down.

It was also the case that goods you might think of as luxuries were not. This could just be a self-selection issue. For instance, organic food seems to be income inelastic for those that buy organic food. But, I'm guessing, those that buy organic food are relatively well-off. Drug addiction seems another variant on this theme, with a study by Petry finding that demand is income elastic for addicts (and 0 for non-addicts). This illustrates the more general point that one person's luxury may not be another's, meaning it is difficult to find goods that are luxuries 'on average'.

Anyway, here are the numbers I settled on:



My earlier post had an income elasticity for Israelis going on vacation of 0.28. Yet a study by Maloney and Montes-Rojas puts the elasticity for going to the Caribbean at 2.02. These are not necessarily inconsistent if we think of the Caribbean as a luxury destination. So, going on holiday is normal but going somewhere further afield is a luxury.

I have already mentioned the study by Petry where I got the cocaine number from. I will also mention a study by Clements and co-authors that got an income elasticity of 1.3 for marijuana. Interestingly, tobacco and alcohol come in a lot below 1 and so there is potentially an interesting story to be told there.

My search for luxury goods eventually paid off with a study of books by Ringstad and Loyland. They were using data from Norway before 2000 and so I'm not sure how representative that is of modern demand for books. But it is easy to imagine that books are even more of a luxury good now that other sources of entertainment have a low marginal cost.

A study by Costa-Font and co-authors reviews studies on health care and finds an elasticity between 0.4 and 0.8. Crucially this means that health expenditure is not a luxury good - as many previously argued. A study of housing in Spain by Fernandez-Kranz and Hon - with plenty of comments on the literature - came up with a number between 0.7 and 0.95.

Some numbers on fuel consumption are provided in a review by Goodwin, Dargay and Hanly. For electricity use there is a German study by Schulte and Heindl that gives detailed estimates:



We then get into food. For an interesting discussion on measuring food elasticities the review by Cirera and Masset is recommended. But that does not give much by way of 'simple' numbers. For studies in Europe you can look at the references in my earlier post. A study by Kumar and co-authors gives some comparative numbers for India. You can see how income elasticity varies by income group, meaning that Engel curves are definitely not straight lines - as the sketch below illustrates.
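To see what a curved Engel relationship implies, here is a minimal sketch. The quadratic-in-logs form and the coefficients are invented purely for illustration and are not taken from Kumar and co-authors.

```python
import math

# Assumed Engel curve: ln(q) = a + b1*ln(y) + b2*ln(y)**2, with b2 < 0.
a, b1, b2 = 1.0, 0.9, -0.04

def income_elasticity(y):
    # The income elasticity is the slope d ln(q) / d ln(y) = b1 + 2*b2*ln(y),
    # so it changes as income y changes - the curve is not a straight line in logs.
    return b1 + 2 * b2 * math.log(y)

for y in (1_000, 10_000, 100_000):
    print(y, round(income_elasticity(y), 2))
# Roughly 0.35, 0.16 and -0.02: the same food item looks like a necessity at low
# incomes and close to an inferior good at high incomes.
```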


For food there is a lot more data out there. For instance, a study by Kassali and co-authors looks at rice demand in Nigeria and gets similar numbers to those for India. This can then be compared to numbers in Europe. So, here the numbers are a bit more definitive, and fairly consistent in showing that most food items are in the range 0.2-0.7. Finally, the number I give for organic food is taken from a study by Zhang and co-authors in the US.




Monday, 6 May 2019

Some estimates of price elasticity of demand

In the textbook Microeconomics and Behaviour, written with Bob Frank, we have some tables giving examples of price, income and cross-price elasticities of demand. Given that most of the references are from the 1970s, I'm working on an update for the forthcoming 3rd edition. So, here is a brief overview of where the numbers come from for the table on price elasticity of demand. Suggestions for other good sources are much appreciated.

Before we get into the numbers - the disclaimer. Price elasticities are tricky things to tie down. Suppose you want the price elasticity of demand for cars. This elasticity is likely to be different for rich or poor people, people living in the city or the countryside, people in France or Germany, and so on. You then have to decide whether you want the elasticity for buying a car or for using a car (which includes petrol, insurance and so on). So, there is no such thing as the price elasticity of demand for cars. Moreover, the estimated price elasticity will depend on the actual price in the market and so there is a tricky endogeneity problem. And that feeds into the question of how to actually estimate elasticity from the data. Even so, it is interesting, particularly as an educational tool, to get a feel for which goods are elastic and inelastic 'on average'.
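For anyone wondering what 'estimate elasticity from the data' means in practice, here is a minimal sketch of the textbook approach - regress log quantity on log price and read the elasticity off the slope. The data are simulated, and the simple OLS fit deliberately ignores the endogeneity problem just mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)
true_elasticity = -0.8                      # the elasticity built into the fake data

log_price = rng.uniform(0.0, 1.0, size=500)
log_quantity = 2.0 + true_elasticity * log_price + rng.normal(0.0, 0.1, size=500)

# In ln(q) = a + e*ln(p), the slope e is the price elasticity of demand.
slope, intercept = np.polyfit(log_price, log_quantity, 1)
print(round(slope, 2))                      # close to -0.8 on this simulated data
```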

Here is the list I came up with, containing a range of goods from elastic to inelastic. Overall, though, most goods seemed to come out price inelastic. For more details on where these numbers come from see below.

For food it is easy enough (at least for the US) to get some numbers thanks to a study by Andreyeva, Long and Brownell. They reviewed 160 studies to come up with the following numbers. As we might expect, eating out is the most price elastic.


Anything for the EU? A study by Bouamra-Machemache and co-authors gives some evidence on dairy consumption. Fortunately, the numbers in this study match those from the US pretty well. But it is interesting to note the big range in estimates. For instance, cheese can have an elasticity of anything between 1.33 and 0.15, which seems pretty much like saying 'it could be anything'.


A nice report by Bunte and co-authors looks in detail at organic food. First they give a review of the literature and then come up with some new estimates of their own for the Dutch market. We also have the comparison with non-organic food. Overall, we can see that organic food is a lot more price elastic than non-organic food.



Next up is alcohol, where there is a review of 112 studies by Wagenaar, Salois and Komro. The figure below gives average elasticities from studies using aggregate-level data. It is noticeable that demand seems relatively inelastic.


So far not a single good is price elastic, which is not too surprising for food and drink. Let us, therefore, go to the other extreme and look at some entertainment goods.

A study by Ghose and Han looked at demand for mobile phone apps. They find a price elasticity of demand of -3.731 for Google Play and -1.973 for the Apple App Store. So, firmly in the category of elastic demand. In terms of broadband a study by Madden and Simpson with Australian data finds a mean elasticity of -0.121. A study by Galperin and Ruzzier finds estimates of -0.36 for OECD countries compared to -2.2 for Latin American and Caribbean countries. 

For football, a study by Forrest, Simmons and Feehan gets an estimate of -0.74 in the Premier League. In terms of cinema, a study by de Roos and McKenzie in Australia found an elasticity of around -2.5 while a study by Dewenter and Westermann in Germany found a similar number of -2.25. A study of Finnish opera by Laamanen got a figure of -0.69 for premieres and -3.99 for reprises, while a German study of theatre got a figure of -0.27. Even so, several of these entertainment goods come out relatively price inelastic.

Finally, let us look at transport. A study by Paulley and co-authors provides a comprehensive review of public transport with a UK focus. As we might expect, demand is relatively inelastic. Note the interesting short-run versus long-run comparisons. For instance, bus journeys become elastic (just) in the long run.


For non-public transport, a study by De Jong and Gunn reviews the evidence on fuel elasticity, with a focus on the EU. These are the most inelastic numbers we have seen so far, which is probably not good news in terms of combating climate change.

Talking of climate change, for air travel there is a meta-study by Brons and co-authors. Overall, travel is price elastic but business travel is not; no surprises there.

Finally, a study by Fleischer, Peleg and Rivlin looks at demand for vacations (by Israelis). Perhaps surprisingly, you can see that demand is price inelastic.









Friday, 26 April 2019

Is it ever optimal to play a mixed strategy?

In the early days of its (modern) history game theory focused a lot on zero-sum games. These are games in which total payoffs always add to zero no matter what the outcome. So, in a two-player setting, your gain is my loss and vice versa. It was arguably natural for game theory to focus on zero-sum games because they represent the epitome of conflict. The main reason the focus fell on such games is, however, more one of convenience - zero-sum games have a solution.

This solution is captured by the minimax theorem and all that followed. Basically it amounts to saying that there is a unique way of playing a zero-sum game if all players want to maximize their payoff and are rational. Most games do not have a 'solution', because there are multiple Nash equilibria and so there is not an obvious correct way to play the game. In this sense zero-sum games are 'nice' or 'convenient'.

But does it make sense to behave according to the minimax theorem? The simple answer is no. This is because the theorem takes as given that everyone is rational and expects everyone else to be rational. We know that in reality people are not rational, so why should you expect them to be? To illustrate the point consider a rock-scissors-paper game between Alice and Michael. The payoffs below are the payoffs of Alice.


The essence of the 'solution' for Alice is that her choice should not be predictable. And, in a sense, this seems hard to argue with. If Alice is predictable in, say, choosing Rock then Michael can pick this up and choose Paper. He wins. So, the solution is for Alice to randomly choose what she does in each play of the game. If she chooses randomly then she is unpredictable by definition.

Randomization is good because it means Alice has a 50-50 chance of winning. But can Alice not do better? Seen in a different light, randomization seems defeatist because it means Alice limits her ambitions to a 50-50 chance of winning. If she thinks she can see a pattern in Michael's behavior then should she not try and exploit that rather than continue to randomize? Yes.

In reality we know that people are very poor at producing random sequences. So, if you are playing rock-scissors-paper it is highly unlikely that your opponent's strategy will be completely random. That opens the door for you to do better than 50-50 (see the sketch below). Note, however, that this means you are not randomizing either. Zero-sum games are not so much, therefore, about how well a person can randomize but more about how well they can spot patterns in another's behavior.
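To illustrate the point, here is a minimal sketch of a frequency-counting player facing a hypothetical opponent who over-plays Rock; nothing here comes from any of the studies mentioned in this post.

```python
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

def biased_opponent():
    # A made-up non-random opponent: plays rock half the time.
    return random.choices(["rock", "paper", "scissors"], weights=[0.5, 0.3, 0.2])[0]

counts = {"rock": 0, "paper": 0, "scissors": 0}
wins = losses = 0
rounds = 10_000
for _ in range(rounds):
    # Counter the opponent's most frequent move so far.
    my_move = BEATS[max(counts, key=counts.get)]
    their_move = biased_opponent()
    counts[their_move] += 1
    if my_move == BEATS[their_move]:
        wins += 1
    elif their_move == BEATS[my_move]:
        losses += 1

# Roughly 50% wins and 20% losses: far better than the even win-lose split
# that pure randomization would guarantee against any opponent.
print(wins / rounds, losses / rounds)
```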

Are there ever occasions where it makes sense to randomize? Taking a penalty kick in football, serving in tennis, playing poker? The answer seems to be no. Two absolute experts might just randomize and take their chances, consistent with the game-theoretic 'solution'. But, in all likelihood, your opponent will not be completely random and that means you shouldn't be either. You just need to be better at predicting your opponent than they are at predicting you.

For an interesting analysis of how this battle of prediction can be modeled and analyzed see the recent paper by Dimitris Batzilis and co-authors in Games (MDPI). They use level-k theory to analyze choice in the rock-scissors-paper game. It is roughly the case that a player with a higher level of reasoning will win. And experience seems a key factor in level of reasoning.

Wednesday, 9 January 2019

Have you heard of Berge equilibrium? And should you have?

Recently I refereed a paper on the existence of Berge equilibrium. I must confess that until reading the paper I knew nothing of Berge equilibrium. But in my defence, the equilibrium does not get a mention in any game theory textbook on my shelves and, surely most telling of all, does not get an entry in Wikipedia. So, what is Berge equilibrium and should we hear more about it?

The origins of the equilibrium lie in a book by French mathematician Claude Berge (who does get a Wikipedia page) on a general theory of n-person games, first published in 1957. But it has seemingly gone pretty much unnoticed from then on, although there is a growing literature on the topic, as summarized in a 2017 paper by Larbani and Zhukovskii. The basic idea behind Berge equilibrium seems to be one of altruism or cooperation between players in a group.

To explain, consider a game. Let si denote the strategy of player i, s-i the strategies of everyone other than i and ui(si, s-i) the payoff of player i given these strategies.

Nash equilibrium says that player i maximizes his or her payoff given the strategies of others. So, at a strict Nash equilibrium si*, s-i* we have

                                ui(si*, s-i*) > ui(si, s-i*) 

for all i and any other strategy si. This says that player i cannot do better by deviating.

Berge equilibrium says that each player maximizes the payoff of player i given his or her strategy. So, at a strict Berge equilibrium si*, s-i* we have

                                ui(si*, s-i*) > ui(si*, s-i)

for all i and for any other strategy s-i. So, the other players do their best to maximize the payoff of player i.

The differences between Nash equilibrium and Berge equilibrium are easily illustrated in the prisoner's dilemma. In the game depicted below Fred and William simultaneously have to decide whether to deny or confess. Nash equilibrium says that Fred should Confess because this maximizes his payoff (whatever William does). Berge equilibrium, by contrast, says that Fred should Deny because this maximizes the payoff of William.
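As a quick check of the logic, here is a minimal sketch using assumed prisoner's dilemma payoffs (mutual Deny gives 3 each, mutual Confess 1 each, a lone confessor 4 and a lone denier 0); the post's own table below may use different numbers.

```python
from itertools import product

strategies = ["Deny", "Confess"]
# payoffs[(fred, william)] = (Fred's payoff, William's payoff) - assumed numbers
payoffs = {("Deny", "Deny"): (3, 3), ("Deny", "Confess"): (0, 4),
           ("Confess", "Deny"): (4, 0), ("Confess", "Confess"): (1, 1)}

def strict_nash(f, w):
    # Each player's own strategy strictly beats any deviation, given the other's strategy.
    return (all(payoffs[(f, w)][0] > payoffs[(f2, w)][0] for f2 in strategies if f2 != f)
            and all(payoffs[(f, w)][1] > payoffs[(f, w2)][1] for w2 in strategies if w2 != w))

def strict_berge(f, w):
    # Each player's payoff is strictly maximized by the *other* player's strategy.
    return (all(payoffs[(f, w)][0] > payoffs[(f, w2)][0] for w2 in strategies if w2 != w)
            and all(payoffs[(f, w)][1] > payoffs[(f2, w)][1] for f2 in strategies if f2 != f))

for f, w in product(strategies, repeat=2):
    print(f, w, "Nash" if strict_nash(f, w) else "-", "Berge" if strict_berge(f, w) else "-")
# Only (Confess, Confess) comes out as a Nash equilibrium; only (Deny, Deny) as a Berge equilibrium.
```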


Many have argued that Deny is the 'rational' choice in the prisoner's dilemma (because both Deny is better than both Confess) and Berge equilibrium appears to capture that idea. Modern game theory, however, provides lots of ways to capture altruism or morality that are arguably more appealing. In particular, we can add social preferences into the mix, so that if Fred wants to help William then we put that into his payoff function. Then the prisoner's dilemma (in material payoffs) is no longer a prisoner's dilemma (in social preferences) because Fred maximizes his own payoff by Denying and helping William.

Berge equilibrium only makes sense if everyone is willing to fully sacrifice for others, and that seems a long shot. Unless, that is, players have some connections beyond those usually imagined in non-cooperative game theory. In other words, Berge equilibrium may have some bite if we move towards the world of cooperative game theory where Fred and William are part of some coalition. We could, for instance, imagine Fred and William being brothers, part of a criminal gang or players on the same sports team. Here it starts to become more plausible to see full sacrifice. And that brings us to the concept of team reasoning.

The basic idea behind team reasoning is that players think about what is best for us. They act as a cohesive unit, like a family making choices. This looks similar to Berge equilibrium but is actually different. To see the difference consider the coordination game below. For both William and Fred to Cheat is a Berge equilibrium - given that Fred is going to cheat, the best thing that William can do for Fred is also to cheat. But mutual Cooperation is clearly better (and also a Berge equilibrium). Team reasoning says unambiguously that both should Cooperate. So, team reasoning is arguably better at picking up sacrifice for the group cause.
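The same kind of check confirms this; the coordination payoffs below are again assumed for illustration (2 each for mutual Cooperation, 1 each for mutual Cheating, 0 for a mismatch) rather than taken from the table.

```python
strategies = ["Cooperate", "Cheat"]
# Symmetric game, so one table (keyed by (my move, your move)) serves both players.
u = {("Cooperate", "Cooperate"): 2, ("Cheat", "Cheat"): 1,
     ("Cooperate", "Cheat"): 0, ("Cheat", "Cooperate"): 0}

def berge(f, w):
    # William's move must maximize Fred's payoff given Fred's move, and vice versa.
    return (all(u[(f, w)] >= u[(f, w2)] for w2 in strategies)
            and all(u[(w, f)] >= u[(w, f2)] for f2 in strategies))

print(berge("Cheat", "Cheat"), berge("Cooperate", "Cooperate"))  # True True - both are Berge equilibria
```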


Given the tools we have to model social preferences and team reasoning I am skeptical Berge equilibrium will ever get beyond the level of an historical curiosity. But it is still interesting to know that such a concept exists.

Monday, 8 October 2018

Reflections on the Rebuilding Macroeconomics Conference


Last week I had the pleasure of attending the Rebuilding Macroeconomics Conference with a theme of Bringing Psychology and Social Sciences into Macroeconomics. The basic question of the conference seemed to be ‘how can we avoid another financial crisis’ or, from a different perspective, ‘how can we avoid failing to predict the next financial crisis’. There was an impressive roll call of speakers from economics, psychology, anthropology, neuroscience, sociology, mathematics and so on, each with their own take on this issue. Here are a few random thoughts on the conference (with the acknowledgement that I didn’t attend every session).

I was most at home with the talks from a behavioural economics perspective. But it was still great to get extra insight on how this work can be applied to macroeconomics. For instance, Rosemarie Nagel and Cars Hommes gave an interesting perspective on how the beauty contest has real world relevance. Most economists are familiar with the basic idea – people individually write down a number, you find the average, multiply by 2/3 to get the winning number, and being close to the winning number is good. No doubt this is a great game to play in the lab to pick apart strategic reasoning and learning. The new insight for me is how to connect the game directly with macro behaviour. Basically, the world is one big beauty contest. Both Nagel (focussing more on strategic reasoning) and Hommes (on learning) gave us a picture of how to apply our knowledge of the beauty contest to inform macro debate.
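For anyone who has not played it, here is a minimal sketch of level-k reasoning in that 2/3-of-the-average game, under the common assumption that level-0 players anchor on 50.

```python
# Each level best-responds to the level below: level-1 plays 2/3 of 50,
# level-2 plays 2/3 of that, and so on.
target_fraction = 2 / 3
guess = 50.0  # assumed level-0 guess (the midpoint of 0-100)
for level in range(6):
    print(f"level {level}: guess {guess:.1f}")
    guess *= target_fraction
# Iterating the reasoning forever drives the guess to 0, the Nash equilibrium of the game.
```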

Still on familiar territory for me, David Laibson gave some updated results on present bias. The main focus here is how we can explain the average person simultaneously having a large credit card debt (at high interest) and large savings (at low interest rates). The answer, according to Laibson, is that we have present bias (and are naïve about it). This means we tend to focus on today, putting off difficult things until tomorrow; until we get to tomorrow and then we put it off until the next day. For connoisseurs, the estimated beta discount factor needed to explain observed behaviour is 0.5, which basically means today is a lot, lot more important than tomorrow. This implies that people are going to put off things they should do, like save for retirement, and so there is a remit and rationale for governments to come in and take some control of important decisions.
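Here is a minimal sketch of that beta-delta arithmetic; only the beta of 0.5 comes from the talk, the repayment figures are invented.

```python
beta, delta = 0.5, 1.0  # quasi-hyperbolic discounting; beta = 0.5 as reported by Laibson

def present_value(flows):
    """Value today of utility flows [u_0, u_1, u_2, ...] under beta-delta discounting."""
    return flows[0] + beta * sum(delta ** t * u for t, u in enumerate(flows[1:], start=1))

pay_now = present_value([-100])          # pay off £100 of credit card debt today
pay_tomorrow = present_value([0, -101])  # or postpone, by which time interest has added £1

print(pay_now, pay_tomorrow)  # -100.0 vs -50.5: postponing looks better from today's viewpoint
# A naive agent repeats this reasoning every day, so the debt never gets paid down.
```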

Another session with a behavioural economics feel was panel 2, with talks by Sam Johnson and Henry Brighton. Johnson gave a great talk on how people may fail to take into account ‘grey swan’ events. The basic idea here is that the person thinks something is reasonably likely to occur, e.g. there is a 20% chance Donald Trump will do a good policy, but when it comes to making a decision they essentially ignore this possibility and act as if there is no chance Trump will do a good policy. This can lead to overconfidence or excess pessimism. The thing I would pick up on here is that this presentation, like that of Laibson and others, emphasized some of the ‘dumb’ things humans do. Brighton, by contrast, gave the ecological rationality viewpoint (most closely associated with Gerd Gigerenzer) that humans are remarkably clever at making decisions. I think it is fair to say that Brighton got a tough run in the subsequent discussion with a fairly hostile audience. That surprised me a little because the ecological rationality argument surely has some traction. Maybe, however, in a conference on trying to avoid another financial crisis the selling point of ‘don’t worry, humans are very clever’ doesn’t seem to offer much of a solution.

And that brings me to my main overall reflection on the Conference, which is perhaps best summarized by ‘where were the macro-economists?’. To be fair, there were some macro people in the room but even they seemed unwilling to go far in defending DSGE modelling and the current state of mainstream macro-economics. I am no macro-economist but I do sense we may be reaching a turning point in the evolution of economic ideas. A turning point in which mainstream macro becomes something of an irrelevance. There is no doubt that macro-economists will carry on churning out mathematical models, publishing in top journals, and celebrating their success. But is this stuff any use? Does it give us anything? This conference was packed with people from other fields who arguably have more to contribute when it comes to predicting the next financial crisis. Maybe policy makers will start listening to them a bit more than the results of the latest DSGE model? If so, that means we are entering a long period of flux before a coherent new macroeconomics is born.

To pick up on one example, I was particularly taken by the role that anthropology can play. Douglas Holmes set the scene in looking at central bank decision making. Then Charles Stafford gave a very compelling argument that economists need to read anthropology. He used the example of Taiwanese fishermen deciding whether to choose the high-risk, financially rewarding option or the low-risk, less rewarding option to illustrate the complexities of decision making (and the role of religion). The title of his talk ‘Economic life in the real world’ sums it up nicely. Economists can learn from poking their heads above the simplicity of our mathematical models to see what actually happens when people make economic decisions. But, let's be honest, it is a long step from conferences like this to building a new macro that incorporates such perspectives.

And then we get to the talk of Andrew Caplin, which nicely drew together various themes in the conference. Caplin reflected on his work on bank runs and financial crises before focussing on the theme of data. He argued that a crisis typically comes about from a ‘predictable’ collapse in confidence. Everyone is chugging along thinking things are bad but maybe it will pick up; then one firm falls and everyone else falls with them. If we could tap into people thinking ‘things are bad’ and understand the linkages between firms then we would have the data to get on top of these things earlier. But are we going to get data? Caplin explained that collecting this data is going to require a long-term, big-team approach. And that is not what economists are good at. He was, therefore, sceptical it will happen. Let’s hope it does. Either way it seems that rebuilding macroeconomics may take some time.



Saturday, 8 September 2018

Social value orientation in economics part 2 - slider method

In a previous blog post I looked at social value orientation (SVO) and one method to measure it, namely the decomposed game or ring technique. Here I will look at a second way of measuring SVO called the slider method. This method, due to Ryan Murphy, Kurt Ackermann and Michel Handgraaf, is relatively new and has some nice advantages. While most existing studies use the ring technique I would expect the slider method to become the method of choice going forward. So, it is good to know how it works.

Recall that the basic idea behind social value orientation (SVO) is to gain a snapshot of someone's social preferences. Are they selfish and simply do the best for themselves without caring about the payoff of others? Are they competitive and want to earn more than others (even if that means sacrificing own payoff)? Are they inequality averse and want to earn the same as others? Or are they pro-social and want to maximize the payoff of others? 

One way to categorize SVO is on a circle in which own payoff can be traded off against that of another person. This is illustrated in the figure below. An altruist gives the maximum to the other person, an individualist maximizes own payoff, a pro-social person maximizes joint payoff and a competitive person is willing to pay to lower the payoff of another person. Recall that the ring method asks someone to choose between 24 pairs of choices all around the circle. This allows us to categorize, in principle, where the person's preferences lie on the circle. But the task is not that easy, meaning that many subjects are going to make inconsistent choices.




The slider method gets straight to the heart of the matter by asking 6 questions that compare each pair of SVO categories. To illustrate, compare altruistic versus competitive. Suppose we draw a line between the altruistic choice of 50 for me and 100 for the other and the competitive choice of 85 for me and 15 for the other.



We can then ask a person where on that line they would choose to be. One of the slider method questions does just that. In the pen and paper version it could look like this.


The other combinations are altruistic versus individualist, altruistic versus pro-social, pro-social versus individualist, pro-social versus competitive, and individualist versus competitive. That gives the 6 questions below. Combining the answers from the 6 questions gives an aggregate measure of where the person's preferences lie. Specifically, Murphy, Ackermann and Handgraaf suggest taking the average a person gives to self and the average given to other and measuring SVO by the resulting angle.



For instance, consider the set of choices below over the 6 questions. The average amount given to self is 81.5 and that given to other is 76.5. Subtracting 50 to normalize around 0, this gives a ratio of 0.84 = 26.5/31.5 and an angle of about 40 degrees. This is someone who is pro-social. It should be said, however, that Murphy, Ackermann and Handgraaf are clearly not keen on putting boundaries between classifications but prefer the continuous measure given by the angle. Someone with an angle of 40 degrees is 'close' to the 'ideal' pro-social who would have 45 degrees.
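For completeness, here is a minimal sketch of that angle calculation using the averages just quoted.

```python
import math

mean_to_self, mean_to_other = 81.5, 76.5   # averages over the 6 slider items in the example
# Normalize around 0 by subtracting 50, then take the angle of the resulting point.
angle = math.degrees(math.atan2(mean_to_other - 50, mean_to_self - 50))
print(round(angle, 1))  # about 40.1 degrees; an 'ideal' pro-social would be at 45 degrees
```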



And that is the slider method. The beauty is its simplicity. This is a task subjects should be able to readily understand and can do relatively quickly. On this criterion it does better than the ring method. But the method also gives a continuous, detailed measurement. On this criterion it does better than other simple methods of eliciting SVO. So, there is a lot to like about the slider method! And, if necessary, another 9 questions can be used to distinguish between joint maximization and inequality aversion amongst pro-social types. 

The slider method is clearly something that could be used to good effect in experimental economics. Given how fresh it is there are not too many examples out there and many are still using the ring method. But, that will surely change. One recent study that does use the slider method is by Dorothee Mischkowski and Andreas Glockner on 'Spontaneous cooperation for prosocials, but not proselfs: Social value orientation moderates spontaneous cooperation behavior'. 

They look at the spontaneous cooperation hypothesis that people's instinct is to cooperate. Given that the instinct is to cooperate, the longer a decision takes the less cooperation we will observe (because rationality takes over from instinct). Mischkowski and Glockner do indeed find that a longer decision time in a public good game correlates with lower contributions. The new insight is to show that this only holds for pro-socials, as illustrated below. In this study pro-socials are those with an angle of more than 22.45 degrees. So, if you want a nice person to do a nice thing - don't give them time to think about it.