
The political science of cybersecurity III – How international relations theory shapes U.S. cybersecurity doctrine

February 20, 2014

U.S. cybersecurity officials have been much more skeptical about international cooperation than their Cold War counterparts were. They have been suspicious of suggestions that international treaties (along the lines of the arms control treaties of the Cold War) could provide real security. They talk happily about U.S. cyber-defense, but have usually been reluctant even to acknowledge that the United States is able to attack other countries’ information systems, let alone to talk details or discuss whether cyberattacks should be restrained. It may seem strange that the United States should be more distrustful of other countries over cybersecurity than it was over the prospect of nuclear Armageddon. Yet there is a subtle logic to this position, which is a product of political science debates about international security.
To understand U.S. cybersecurity doctrine, it’s necessary to understand the basics of two important political science debates. First, debates over the offense-defense balance tried to analyze the underlying logic of the ‘security dilemma,’ which can drive states to arm themselves against other states that are not hostile to them. Second, debates over deterrence sought to figure out both how Cold War adversaries could come to a mutual understanding that would make nuclear war less likely, and how the United States could seek advantage within that understanding.
The fundamental logic of the security dilemma is straightforward. Imagine two neighboring states, each of which wants peace while not being sure of the other’s intentions. Imagine further that one of the states decides to build up its military (perhaps by increasing its army), solely in order to defend itself if the other state turns out to have malign intentions. It may well be that the second state looks at the first state’s decision to increase the size of its army, and worries that the first state is beefing up its military so that it can invade. The second state may then decide, too, to build up its army. This may, in turn, alarm the first state, which begins to fear that the second state is indeed intent on invasion, leading the first state to introduce conscription. And this process may go on, leaving two states, neither of which really has anything to fear from the other, confronting each other with profound suspicion and large, expensive militaries, and perhaps becoming embroiled in war.
Robert Jervis argues in a famous (among political scientists) article that this spiral of mutual distrust is more likely when offense prevails over defense; that is, when “it is easier to destroy the other’s army and take his territory than it is to defend one’s own.” Under these conditions, states’ best way of protecting themselves may be through offensive measures and surprise attacks. Conversely, when defense prevails over offense, and it “is easier to protect and to hold than it is to move forward, destroy, and take,” it will be easier for states to protect themselves through defensive rather than offensive measures, and less likely that states will feel insecure. Jervis also suggests that spirals of mistrust are less likely to happen when states can easily distinguish between defensive and offensive weapons. Fortifications are clearly defensive weapons — if you build forts inside your territory, other states are less likely to take alarm. If, however, you build submarines, which can be used both offensively and defensively, other states may be less sure how to interpret your intentions. If Jervis’s arguments are right, the risk of unnecessary war is lowest when defense prevails over offense, and when it is easy to distinguish between defense and offense. Conversely, suspicions will be highest when offense prevails over defense, and when offense and defense are hard to distinguish.
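Jervis’s logic can be made concrete with a toy model. The sketch below is not from Jervis’s article; the spiral function, the threat_multiplier formula and every number in it are illustrative assumptions, chosen only so that a buildup looks more threatening when offense dominates and when weapons are hard to classify:

```python
# Toy model of the security dilemma (illustrative assumptions throughout).
# Each state arms in proportion to how threatening it finds the other's
# most recent buildup. 'offense_advantage' and 'indistinguishability' are
# assumed parameters in [0, 1] that amplify perceived threat.

def spiral(offense_advantage, indistinguishability, steps=10):
    threat_multiplier = 0.5 + 0.4 * offense_advantage + 0.4 * indistinguishability
    a, b = 1.0, 1.0  # initial arsenals, arbitrary units
    for _ in range(steps):
        # Each state responds to the other's current arsenal, amplified
        # (or discounted) by how threatening that arsenal appears.
        a, b = 1.0 + threat_multiplier * b, 1.0 + threat_multiplier * a
    return a, b

# Defense-dominant, easily distinguishable weapons: arsenals settle near 2.
print(spiral(offense_advantage=0.0, indistinguishability=0.0))
# Offense-dominant, indistinguishable weapons: each round of arming
# provokes a bigger response, and arsenals grow without bound.
print(spiral(offense_advantage=1.0, indistinguishability=1.0))
```

In this toy world, the arms race damps out whenever the multiplier stays below one and escalates indefinitely once it exceeds one, which is a crude way of restating Jervis’s four cases.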
Debates over deterrence focus instead on signaling and credibility. If defense is the art of protecting yourself against an enemy once she has attacked you, deterrence is the art of ensuring that the enemy never attacks you in the first place. Nobel Prize winner Thomas Schelling argues that the best way to deter attack is usually to commit yourself to retaliating in ways that will hurt your attacker, and to signal this commitment to those who might attack you. If your commitment is credible (that is, attackers are likely to believe you), then they will refrain from attacks that might result in serious retaliation.
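Schelling’s core claim can be restated as a back-of-the-envelope expected-utility condition: a rational challenger attacks only if the expected gain exceeds the threatened cost, discounted by how believable the threat is. The sketch below is a minimal illustration; the function and all of its numbers are assumptions, not anything drawn from Schelling:

```python
# Back-of-the-envelope deterrence condition (illustrative numbers).
# A rational challenger attacks only if the expected gain outweighs the
# threatened retaliation, weighted by the threat's credibility.

def attacker_proceeds(gain, credibility, retaliation_cost):
    """True if attacking looks profitable to a rational challenger."""
    return gain > credibility * retaliation_cost

# A highly credible threat of costly retaliation deters the attack...
print(attacker_proceeds(gain=10, credibility=0.9, retaliation_cost=50))  # False
# ...while the very same threat, barely believed, does not.
print(attacker_proceeds(gain=10, credibility=0.1, retaliation_cost=50))  # True
```

Much of the Cold War apparatus of signaling (troops in Berlin, public doctrine, visible weapons tests) can be read as an effort to push the credibility term in this inequality as close to one as possible.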
Deterrence theory had a profound influence on Cold War thinking (and was parodied in movies like “Dr. Strangelove”). It often had perverse implications. For example, Schelling argued that U.S. troops stationed in West Berlin were not there to defend the city against Soviet invasion. Instead, they were there “to die.” If the Soviets attacked West Berlin, they knew that many American troops would be killed. They also knew that this would lead to enormous public anger, which would force the United States to retaliate and very likely to declare war against the Soviet Union. Since the Soviet Union feared nuclear war more than it wanted to take West Berlin, it never attacked the city directly (although it did, famously, try to enforce a blockade).
These theories have profoundly shaped U.S. doctrine on cybersecurity. First, U.S. strategic thinkers believe that offense prevails over defense in cybersecurity, and that it is often very difficult to tell offensive weapons from defensive ones. The fundamental problem of cybersecurity (as Pentagon officials see it) is that it is far easier to attack others’ information systems than it is to defend one’s own. Computer systems tend to have bugs and weaknesses that can be exploited by sophisticated attackers to penetrate the system, steal secrets and take control. Efforts to fix these bugs often produce new ones. There is a thriving marketplace for information about unrevealed bugs in widely used software, so-called ‘zero-day exploits,’ which governments (including the U.S. government) buy in order to figure out how to exploit others’ computer systems. In a world of buggy defenses, attacks are likely to succeed relatively often, and defenses are correspondingly likely to fail.
Furthermore, the key ‘weapons’ in this new world are well-trained hackers. Such hackers can play a defensive role, by figuring out the weaknesses in information systems and helping to patch them up. They can also play an important offensive role, by exploiting the problems in others’ systems. So-called ‘white hat’ hackers (who shore up defenses) can become ‘black hat’ hackers (who exploit problems) more or less at the drop of a hat. There is no real difference between offensive hackers and defensive ones — they have largely overlapping skill sets.
This means that cybersecurity is in the worst of all possible worlds. Offense prevails over defense, and it will often be difficult to distinguish between defensive and offensive measures. In such a world, distrust is likely to be endemic. States will be fearful of each other’s intentions, and tempted to protect themselves by attacking others and undermining their capabilities rather than simply relying on defensive systems.
U.S. strategists also believe that deterrence is far more problematic in cybersecurity than it was in the Cold War, because of what they have dubbed the “attribution problem.” In a nuclear war, Hollywood thrillers notwithstanding, it would usually be easy to figure out who had launched a nuclear weapon and to retaliate accordingly. In cybersecurity, it isn’t easy at all. It is often possible for attackers to hide their origins, through various technical means. And even when forensic techniques can be used to trace an attack back (say, to an IP address in China), it is often impossible to tell whether the hackers were working, for example, for the Chinese government or military, or working on their own account.
This makes deterrence hard. Deterrence usually rests on the threat of credible punishment. If you do not know who attacked your computer system, you cannot credibly threaten to retaliate against them. Even if you have a pretty good idea of who the attacker was, you may be reluctant to retaliate, since you cannot prove it; other states might disbelieve your claims and condemn you as the aggressor. Even when you have solid evidence, you may not be able to produce it in public, since it may give the attacker (or other potential attackers) information on your systems and defenses that they can use to avoid detection in the future. If you do not know who attacked you, you cannot credibly threaten to punish them; if you cannot credibly threaten to punish them, you cannot easily deter them from attacking you.
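The same expected-value framing shows why attribution is so corrosive to punishment-based deterrence: before any threat of retaliation even applies, the attacker discounts it by the chance of being identified at all. In the hedged sketch below, every parameter is invented for illustration:

```python
# Punishment-based deterrence under imperfect attribution (illustrative).
# Threatened retaliation only deters to the extent that the attacker
# expects to be identified in the first place.

def expected_punishment(p_attribution, p_retaliation, cost):
    """Expected cost of attacking, as seen by the attacker."""
    return p_attribution * p_retaliation * cost

# Nuclear-style scenario: launches are easy to trace back.
print(expected_punishment(p_attribution=0.99, p_retaliation=0.9, cost=100))  # 89.1
# Cyber scenario: hidden origins, plausible proxies.
print(expected_punishment(p_attribution=0.2, p_retaliation=0.5, cost=100))   # 10.0
```

However fearsome the threatened cost, a low attribution probability multiplies it down toward irrelevance.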
These ideas motivate a 2010 Foreign Affairs article by former U.S. Deputy Secretary of Defense William Lynn, which still stands as the most complete public statement of the U.S. stance on cybersecurity. Lynn argues that “[i]n cyberspace, the offense has the upper hand,” so that the U.S. ability to defend its systems will always lag behind its adversaries’ abilities to exploit them. He also claims that “traditional Cold War deterrence models of assured retaliation do not apply to cyberspace, where it is difficult and time consuming to identify an attack’s perpetrator.” This means that the most that the United States can do to deter attackers is to deny them any benefits from attacking (what deterrence theorists call ‘deterrence through denial’), creating strong defenses that minimize the likelihood of successful attacks and hence slightly discourage attackers from trying to breach systems in the first place.
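Deterrence through denial fits the same back-of-the-envelope style, but works on a different term: rather than threatening punishment, the defender drives down the attacker’s probability of success until attempts stop paying for themselves. Again, the sketch and its numbers are purely illustrative:

```python
# Deterrence through denial (illustrative numbers). Hardening defenses
# lowers the attacker's odds of success, so that the expected payoff of
# an attempt turns negative even with no retaliation at all.

def attempt_payoff(p_success, gain, attempt_cost):
    """Attacker's expected net payoff from one attempt."""
    return p_success * gain - attempt_cost

# Weak defenses: attempts are profitable in expectation.
print(attempt_payoff(p_success=0.6, gain=100, attempt_cost=10))   # 50.0
# Hardened defenses: the same attempt is now a losing proposition.
print(attempt_payoff(p_success=0.05, gain=100, attempt_cost=10))  # -5.0
```

On the Lynn view, the success-probability term is the only deterrence lever the United States treats as reliably usable, which is why publicizing strong defenses makes sense even while offensive capabilities stay secret.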
Lynn’s article spends a lot of time describing the strength of U.S. defenses against cyber-incursions. This makes sense, given the Pentagon’s analysis of deterrence. If all that you can do is deter by denying, it is helpful to publicize how strong your defenses are, so as to discourage potential attackers. However, he says nothing about U.S. offensive capacities. This is not an accident; other senior U.S. officials have been only slightly more forthcoming. It reflects the U.S. belief that traditional deterrence doesn’t work in cybersecurity. If it did, then the United States would gain benefits by publicizing how effective its weapons were (and hence making it clear that it had a strong retaliatory capacity). Since U.S. officials believe that deterrence does not work, they are reticent in describing their attack capabilities. Explicit description could have many downsides (encouraging other states, for example, to arm up in cyberspace too) and few obvious upsides. While the United States does have strong offensive capacities, it sees no good reason to publicize them.
Similar misgivings underlie U.S. reluctance to start working toward broad cybersecurity treaties. The attribution problem means that it will be difficult to punish bad behavior. Furthermore, unlike troop movements (which could be monitored by satellites) or nuclear facilities (which could be inspected by specialized international agencies), it is really, really hard to monitor cybersecurity facilities. In principle, all you need to build such a facility are talented people and good computer equipment. Both of these are cheap (compared to standard weapons) and very, very easy to hide from inspectors. If you cannot monitor states’ cybersecurity activities, then treaties, which are supposed to control these activities, are at best going to be of limited value. They will not be able to stop states from cheating.
It’s not clear that the political science arguments underlying U.S. cybersecurity policy are necessarily correct. Many political scientists have challenged Jervis’s arguments about offense and defense. Deterrence theory is beautiful and elegant, but may explain less behavior than it seems to at first (it tends to assume that people are rational, or at least rationally able to exploit their irrationality). There are technical reasons why these theories may not fit as neatly with cybersecurity problems as many strategists believe. At the least, there is space for plenty of argument.
Yet whether they are right or not, they are clearly enormously influential. One simply cannot understand current U.S. doctrine on cybersecurity without understanding the political science debates and theories that lurk behind it. Pundits inclined to dismiss political science as being irrelevant should take note. The world of the Cold War was, for better or worse, partly built on the foundation of political science ideas. The same is true of the emerging world of cybersecurity.