
Why the rationale for nuclear weapons requires a little madness

The UK will soon have to decide whether to maintain its Trident nuclear weapon programme. Clearly, the nuclear capability will be maintained. This has not, though, stopped a fairly vociferous debate on the issue. The basic argument in favour of nuclear weapons, and one that we have heard time and time again in the debate, is that nuclear weapons are to deter attack and not be used. This is encapsulated in the concept of mutual assured destruction or MAD. But, just how solid is the MAD argument?
A standard logic goes something like this: if the UK has nuclear weapons then Russia will not attack the UK, because the UK would have the capability to destroy Russia. Thomas Schelling, in The Strategy of Conflict, pointed out that there is a basic flaw in this logic. To see why, let us set out a hypothetical game tree (see below). Russia moves first by deciding whether to attack the UK. Then the UK decides whether to retaliate in the case of attack. We can see that if the UK will retaliate then Russia does better to not attack (and get payoff 100) than to attack (and get payoff 0). Not attack, and retaliate if attacked, is indeed a Nash equilibrium of this game.
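For concreteness, here is a minimal sketch of that game tree in Python. Russia's payoffs (100 if it does not attack, 0 if it attacks and the UK retaliates, 200 if it attacks and the UK does not) are the ones used in the post; the UK payoffs are illustrative assumptions, chosen so that retaliating is worse for the UK than not retaliating, in line with the argument that follows.

```python
# Sketch of the deterrence game. Russia's payoffs are from the post;
# the UK payoffs are illustrative assumptions.
PAYOFFS = {
    ("not attack", None): (100, 100),        # (Russia, UK) at each outcome
    ("attack", "retaliate"): (0, -100),      # mutual destruction, plus guilt
    ("attack", "not retaliate"): (200, -50), # destroyed, but moral high ground
}

def outcome(russia_move, uk_plan):
    """uk_plan is the UK's planned response in the event of an attack."""
    if russia_move == "not attack":
        return PAYOFFS[("not attack", None)]
    return PAYOFFS[("attack", uk_plan)]

# The profile (Russia: not attack, UK: retaliate if attacked).
on_path = outcome("not attack", "retaliate")[0]   # Russia's payoff: 100
deviation = outcome("attack", "retaliate")[0]     # Russia's payoff if it attacks: 0
print(on_path > deviation)  # True: Russia cannot gain by attacking, and the UK's
                            # plan is costless when no attack occurs, so this
                            # profile is a Nash equilibrium.
```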

But here is the problem: the threat to retaliate is not credible. If Russia attacks then the UK is going to be destroyed. Nothing can stop this from happening once Russia has pressed the go button. Retaliation serves, therefore, purely as an act of revenge, and one that will kill millions of innocent people. Would the UK prime minister press the retaliation button if the sole consequence of doing so would be the death of millions of innocent people? Probably not. Which is why not retaliating earns a higher payoff (you at least die with the moral high ground) than retaliating (where you die with the guilt of killing millions). Thus, the subgame perfect Nash equilibrium in this game is for Russia to attack and the UK to not retaliate.
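A short backward-induction sketch, using the same assumed UK payoffs as above, makes the flaw explicit: solve the UK's choice at the final node first, then Russia's move given that anticipated response.

```python
# Backward induction on the deterrence game (UK payoffs are assumed, as above).
uk_after_attack = {"retaliate": -100, "not retaliate": -50}
russia = {"not attack": 100, "attack, retaliated": 0, "attack, not retaliated": 200}

# Step 1: the UK's best response once an attack has happened.
uk_best = max(uk_after_attack, key=uk_after_attack.get)
print(uk_best)  # 'not retaliate': revenge only adds guilt to destruction

# Step 2: Russia's best move, anticipating that response.
attack_payoff = (russia["attack, not retaliated"] if uk_best == "not retaliate"
                 else russia["attack, retaliated"])
print("attack" if attack_payoff > russia["not attack"] else "not attack")
# 'attack' -- the subgame perfect equilibrium: Russia attacks, the UK folds.
```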
So, how can nuclear weapons work as a deterrent? One possibility is to put an automatic trigger in the system so that the UK has no choice but to retaliate in the case of attack. This, though, is not compatible with the basic notion that a human being should ultimately be responsible for such an act. Another possibility is to have a prime minister who would be 'mad enough' to exact revenge. To illustrate, suppose the payoffs are given as below. Notice that the UK now does best to retaliate. Indeed, the unique Nash equilibrium now sees Russia not attack, because the threat of retaliation is credible.
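The same calculation with 'mad' payoffs flips the result. The numbers are again illustrative, chosen only so that, as the post argues, the UK now prefers to retaliate.

```python
# Backward induction with a 'mad' prime minister: retaliation is now preferred.
uk_after_attack = {"retaliate": -50, "not retaliate": -100}   # assumed: vengeance pays
russia = {"not attack": 100, "attack, retaliated": 0, "attack, not retaliated": 200}

uk_best = max(uk_after_attack, key=uk_after_attack.get)       # 'retaliate'
attack_payoff = (russia["attack, retaliated"] if uk_best == "retaliate"
                 else russia["attack, not retaliated"])
print("attack" if attack_payoff > russia["not attack"] else "not attack")
# 'not attack' -- the threat is credible, so deterrence works.
```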

As previously discussed, it is hard to imagine that the payoffs really would be like this. The prime minister would have to be extremely vengeful to kill millions of innocent people. Crucially, though, the mere possibility that the UK might be willing to exact revenge can be enough to deter attack. To explain, let p denote the probability that the UK has a 'mad' prime minister who would retaliate. If Russia does not attack then it gets 100 for sure. If it attacks then it gets 0 with probability p and 200 with probability 1 - p. If p > 0.5 then, unless Russia is a risk seeker, it is safer to not attack. A 50% chance that the UK prime minister is mad seems, however, unlikely.
But the payoffs we have been using are completely arbitrary. Suppose we let W denote the payoff to Russia if it attacks and the UK does not retaliate. (So far we have assumed that W = 200.) Now Russia will find it safer to not attack if 100 > W(1 - p). If W is close to 100, say 105, then a small chance of madness, say p = 0.05, is enough to deter an attack. The success of the nuclear deterrent depends, therefore, on two crucial things: (i) that the gain from 'winning' is relatively small, and (ii) that there is some chance the UK will be mad enough to retaliate.
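A quick check of the condition 100 > W(1 - p), using the numbers from the post:

```python
# Russia is deterred if its sure payoff from not attacking (100) exceeds its
# expected payoff from attacking: 0 with probability p, W with probability 1 - p.
def deterred(W, p, status_quo=100):
    return status_quo > (1 - p) * W

print(deterred(W=200, p=0.50))   # False: with W = 200, p must exceed 0.5
print(deterred(W=200, p=0.51))   # True
print(deterred(W=105, p=0.05))   # True: 0.95 * 105 = 99.75 < 100
# Rearranging, deterrence requires p > 1 - 100 / W.
```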
I would suggest that point (i) was actually the main reason the Cold War ended peacefully. Russia, the US, the UK, France and the rest had little to gain from destroying one another. We should not, however, neglect point (ii). Critics of nuclear weapons often claim that they are of no use because the UK would simply not be attacked in the first place, nuclear weapons or not. This view seems naïve: history tells us that wars are a sad reality of life. Nuclear weapons are important in deterring conflict, provided you let others think that you might be mad enough to use them.
