Engineering and Homeland Security
March 1, 2002, Volume 32, Issue 1

Cybersecurity


Author: Wm. A. Wulf and Anita K. Jones

Ensuring our cybersecurity will require long-term, innovative basic research.


Although the nation is at great risk from cyberterrorism, we have virtually no research base on which to build truly secure systems. Moreover, only a tiny cadre of researchers are thinking deeply about long-term solutions to this problem. If the problem were merely a matter of implementing techniques that are known to be adequate, this might not be a serious issue. But the truth is that we do not know how to build secure computer systems. The only model widely used for cybersecurity is the "perimeter defense" model -- which is demonstrably fragile. In fact, for deep theoretical reasons, it is impossible to guarantee that a perimeter-defense system will ever work! To be sure, many immediate problems of cybersecurity can be handled by implementing or enforcing known "best practices" -- such as patching software each time a new attack is successful. But solving the fundamental problem will require long-term, innovative basic research.

No one knows how vulnerable we really are because the most costly attacks have not been made public. But we are probably a lot more vulnerable than we’d like to be, and maybe more vulnerable than we can survive! Financial institutions have suffered attacks on their cybersystems but have not disclosed the damage and losses, in order to preserve an image of integrity. Military systems have also been attacked, but the most serious attacks have not been disclosed. (It has been reported, however, that more than 60 percent of military computers have been compromised [GAO, 2001].) We know that national defense computers and networks use the same software and hardware as the general public -- and thus are subject to the same kinds of attacks. In addition, they are a juicy target for sophisticated, state-sponsored intruders who want to determine our military preparedness.

To make matters worse, our legal system prevents the exchange of information about attacks, so one organization cannot learn from the experiences of others. In anticipation of Y2K problems, Congress passed special legislation enabling corporations to exchange information (and limiting their liability). But no such legislation has been passed to permit the exchange of cybersecurity information. Other laws -- laws to protect civil liberties, for example -- prohibit the exchange of information among some government agencies. Protecting civil liberties is an admirable goal, but it does make cybersecurity more difficult.

The bottom line is that no one knows exactly how vulnerable we are! We can get an idea of the magnitude of the problem, however, from public information. The 1997 Presidential Commission on Critical Infrastructure Protection focused on cybersecurity, although the commission’s charter included power, water, communications, financial, and other infrastructures. In its report, the commission found that "all our infrastructures [are] increasingly dependent on information and communications systems... [that] dependence is the source of rising vulnerabilities, and therefore, it is where we concentrated our efforts" (Presidential Commission on Critical Infrastructure Protection, 1997). In other words, all forms of infrastructure are so vulnerable that the commission decided to all but ignore other vulnerabilities. Information technology has become crucial to every aspect of modern life, and a serious attack could cripple any system, including systems used for an emergency military deployment, health care delivery, and the generation of electrical power.

The worst-case scenarios are chilling. Consider a really sophisticated attack on our financial systems. We’re not talking about a simple virus, or even the theft of funds; we’re talking about the incapacitation or destruction of parts of an infrastructure on which all commerce depends. Just imagine a month, a week, or even a day in which no checks are cashed or salaries deposited, no stocks are traded, no credit card purchases are honored or loans processed -- in short, a day on which all commerce comes to a halt.

But the bottom line is that we don’t know. Publicly reported attacks have been relatively unsophisticated and, although annoying, have not had dire consequences. The unreported attacks have been more serious, but the details have not been made known to the public -- or, in some cases, even to the responsible public officials. Potential attack scenarios are even worse -- but the probability that they will happen is simply not known.

Our critical systems have many vulnerabilities, ranging from errors in software, to trusted but disgruntled employees, to low-bid software developers outside the United States. But the problem goes much deeper. In many cases, attackers have cleverly combined two or more features of a system in ways the designers had not foreseen. In these cases, undesirable behavior results from correctly implemented software. In addition, software vendors have found that the public is not willing to pay for security. Buyers do not choose more secure products over less secure ones, especially if they must pay a premium for them, so vendors have not invested in security. But the overriding, fundamental source of vulnerability is that we do not have a deep understanding of the problem or its solution, and little if any research is being done to develop that understanding.

How prepared are we to respond? There are different answers for the short term and the long term, and to some extent there are different answers for the military, the private sector, financial institutions, and other communities. Unfortunately, the only short-term solution is to keep putting our fingers in the dike -- to patch holes in systems as we discover them. To be effective, this requires that every member of a vast army of system administrators and users be vigilant. Alas, the evidence shows that widespread vigilance is extraordinarily hard to achieve.

Equally unfortunate, the Internet is essentially a monoculture -- almost all of the computers connected to it are IBM compatible. Because they run the same operating system and the same set of applications, a would-be attacker need only find a vulnerability in one part of that common software to attack the vast majority of computers on the network. That is why attacks seem to spread so rapidly.

One of the principal findings of the Presidential Commission on Critical Infrastructure Protection was that research and development are not adequate to support infrastructure protection. For historical reasons, no single federal funding agency has assumed responsibility for supporting basic research in this area -- not the Defense Advanced Research Projects Agency, not the National Science Foundation, not the U.S. Department of Energy, and not the National Security Agency. As a result, only relatively small, sporadic research projects have been funded, and the underlying assumptions about cybersecurity that were established in the 1960s mainframe environment have not been questioned. When funds are scarce, researchers become very conservative, and bold challenges to the conventional wisdom are not likely to pass peer review. Incrementalism has become the norm. Thus, no long-term cybersecurity solution has been developed, or even thoroughly investigated.

Four critical needs must be met to improve cybersecurity:
  • the need for a new model to replace the perimeter defense model
  • the need for a new definition of cybersecurity
  • the need for an active defense
  • the need for coordinated activities by cybercommunities and the legal and regulatory systems

A New Model
Most research on cybersecurity has been based on the assumption that the "thing" we need to protect is "inside" the system. Therefore, we have developed "firewalls" and other mechanisms to keep "outside" attackers from penetrating our defenses and gaining access to the thing and taking control of the system. This perimeter defense model of computer security -- sometimes called the Maginot Line model -- has been used since the first mainframe operating systems were built in the 1960s. Unfortunately, it is dangerously, even fatally, flawed.

First, like the Maginot Line, it is fragile. In WWII, France fell in 35 days in part because of its reliance on this model. No matter how formidable the defenses, an attacker can make an end run around them and, once inside, can compromise the entire system. Second, the model fails to recognize that many security flaws are "designed in." In other words, a system may fail by performing exactly as specified. In 1993, the Naval Research Laboratory analyzed some 50 security flaws and found that nearly half of them (22) were designed into the requirements or specifications for correct system behavior! Third, a perimeter defense cannot protect against attacks from inside. If all of our defenses are directed outward, we remain vulnerable to the legitimate insider. Fourth, major damage can be done without "penetrating" the system. This was demonstrated by the distributed denial-of-service attacks on Yahoo and other Internet sites two years ago. Simply by flooding those sites with false requests for service, the attackers rendered them incapable of responding to legitimate requests. We can be grateful that so far denial-of-service attacks have been directed against Internet sites and not against 911 services in a major city! Fifth, the Maginot Line model has never worked! Every system designed with a Maginot Line-type notion of security has been compromised -- including the systems the authors built in the 1970s. After 40 years of trying to develop a foolproof system, it’s time we realized that we are not likely to succeed. Finally, the perimeter defense cannot work, for deep theoretical reasons that we do not have space to explain here. Suffice it to say that replacing the perimeter defense model of computer security is long overdue!
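
To make the fragility concrete, here is a minimal sketch, in Python, of a system whose only defense is a check at the boundary. The names, records, and policy are hypothetical illustrations; the point is simply that nothing is checked inside the perimeter, so a single bypassed login, or a legitimate insider, can alter anything.

```python
# A minimal sketch of the perimeter defense ("Maginot Line") model.
# All names and data here are hypothetical illustrations.

AUTHORIZED_USERS = {"alice", "bob"}                      # the "perimeter": one login check
RECORDS = {"payroll": "confidential", "allergies": "penicillin"}

def login(user: str) -> bool:
    """The entire defense: a single check at the edge of the system."""
    return user in AUTHORIZED_USERS

def read_record(name: str) -> str:
    # No per-record or per-operation check once inside the perimeter.
    return RECORDS[name]

def update_record(name: str, value: str) -> None:
    # An insider, or an attacker who slips past the gate once, can alter anything.
    RECORDS[name] = value

if login("alice"):                                       # one successful boundary check...
    update_record("allergies", "none")                   # ...and integrity is silently lost
    print(read_record("allergies"))                      # prints "none"
```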

Redefinition of Cybersecurity
The second critical need for cybersecurity is to redefine "security." The military definition of security emphasizes controlling access to sensitive information. This is the basis of the compartmentalized, layered (confidential, secret, top secret) classification of information. A somewhat broader definition of security used in the computing research community includes two other notions: "integrity" and "denial of service." Integrity implies that an attacker cannot modify information in the system. In some cases, medical records for instance, integrity is much more important than secrecy. We may not like it if other people see our medical records, but we may die if someone alters our allergy profile. Denial of service means that the attacker does not access or modify information but denies legitimate users a service the system provides. This relatively unsophisticated form of attack can be used against phone systems (e.g., 911), financial systems, and, of course, Internet hosts. Because more than 90 percent of military communications are sent via the public telephone network, attackers might seriously disrupt a military activity -- a deployment, say -- simply by tying up the phone lines at appropriate bases and logistics centers.
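
As a concrete illustration of integrity as distinct from secrecy, the sketch below (a deliberate simplification, with a hypothetical record format and naive key handling) stores a record in the clear, so it is not secret at all, but attaches a keyed checksum so that any alteration, such as a change to an allergy profile, is detected.

```python
# Sketch: integrity without secrecy. The record is readable by anyone, but a
# keyed message authentication code (MAC) reveals any alteration.
# The key handling and record format are hypothetical simplifications.
import hashlib
import hmac

KEY = b"server-side secret"      # hypothetical; real systems manage keys far more carefully

def seal(record: bytes) -> bytes:
    """Return a tag that changes if the record is altered."""
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(hmac.new(KEY, record, hashlib.sha256).digest(), tag)

record = b"patient=Doe; allergy=penicillin"
tag = seal(record)

tampered = b"patient=Doe; allergy=none"     # an attacker alters the allergy profile
assert verify(record, tag)                  # the genuine record checks out
assert not verify(tampered, tag)            # the alteration is detected
```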

Practical definitions of security must be more sophisticated than the simple secrecy, integrity, and denial-of-service formula, and they must be tailored for each kind of entity -- systems for credit cards, medical records, tanks, flight plans, student examinations, and so forth. The notion of restricting access to a credit card to individuals with, say, secret clearance is nonsensical. Other factors, such as the timing (or at least the temporal order) of operations and correlative operations on related objects, are essential to real-world security. (It used to be said that the best way to anticipate major U.S. military operations was to watch for increases in pizza deliveries to the Pentagon.)

The military concept of sensitive but unclassified information has a counterpart in the cyberworld. Indeed, the line between sensitive and nonsensitive information is often blurred in cyberspace. In principle, one must consider how any piece of information might be combined with any other pieces of information to compromise our security. With the vast amount of information available on the Internet and the speed of modern computers, it has become all but impossible to anticipate how information will be combined or what inferences can be drawn from such combinations.

Different information sets stored in the same computer must be protected differently, and the new model of cybersecurity should reflect the context of the applications in which each kind of information is used. The simple model of a "penetration" attack does not capture these realistic security concerns. Hence, analyzing the vulnerability of a system in terms of the perimeter defense model is unlikely to reveal its true vulnerabilities.
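
The sketch below suggests what such context-dependent, per-object protection might look like; the objects, roles, and rules are hypothetical examples offered for illustration, not a proposed standard.

```python
# Sketch of per-object, context-dependent protection: each kind of record carries
# its own rule about who may do what, and under what circumstances, rather than a
# single clearance check at the system's perimeter. All rules here are hypothetical.
from datetime import datetime, timedelta, timezone

POLICY = {
    ("medical_record", "read"):  lambda ctx: ctx["role"] in {"physician", "nurse"},
    ("medical_record", "write"): lambda ctx: ctx["role"] == "physician",
    # Timing matters too: an examination may be read only after it is released.
    ("student_exam", "read"):    lambda ctx: ctx["time"] >= ctx["release_time"],
}

def allowed(obj_type: str, operation: str, ctx: dict) -> bool:
    rule = POLICY.get((obj_type, operation))
    return bool(rule and rule(ctx))

now = datetime.now(timezone.utc)
nurse = {"role": "nurse", "time": now}
assert allowed("medical_record", "read", nurse)          # a nurse may read a chart...
assert not allowed("medical_record", "write", nurse)     # ...but not rewrite an allergy profile

student = {"role": "student", "time": now, "release_time": now + timedelta(days=1)}
assert not allowed("student_exam", "read", student)      # the exam is not yet released
```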

Active Defense
The third critical need for cybersecurity is for an active defense. Not all experts agree, but based on our experience over the past 30 years, we have concluded that a passive defense alone will not work. Effective cybersecurity must include some kind of active response -- a threat or a cost higher than the attacker is willing to pay -- to complement the passive defense.

Developing an active defense will be hard, not least because identifying the source of an attack is difficult. The practical and legal implications of active defenses have not been determined, and the opportunities for mistakes are legion. The international implications are especially troublesome. It is difficult, usually impossible, to pinpoint the physical location of an attacker. If the attacker is in another country, a countermeasure by a U.S. government computer might even be considered an act of war. Resolving this and related issues will require a thoughtful approach and careful international diplomacy. We desperately need long-term basic scholarship in this area.

Coordinated Activities
Any plan of action must also involve a dialog on legal issues, the fourth critical need for cybersecurity. At least two kinds of issues should be addressed soon: (1) issues raised in cyberspace that do not have counterparts in the physical world; and (2) issues raised by place-based assumptions in current law. The first category includes everything from new forms of intellectual property (e.g., databases) to new forms of crime (e.g., spamming). Issues of particular interest to this discussion are rights and limitations on active countermeasures to intrusions -- indeed, determining what constitutes an intrusion. Issues raised by place-based assumptions in current law include many basic questions. How does the concept of jurisdiction apply in cyberspace? For tax purposes (e.g., sales taxes), where does a cyberspace transaction take place? Where do you draw the line between national security and law enforcement? How do you apply the concept of posse comitatis?

Not all of these issues are immediately and obviously related to cybersecurity. But cyberspace protection is a "wedge issue" that will force us to rethink some fundamental ideas about the role of government, the relationship between the public and private sectors, the balance between rights of privacy and public safety, and the definition of security.

The security of our information infrastructure and other critical infrastructures will be a systems problem, as well as a significant research challenge. We believe that a single government agency must take on the mission of revitalizing research in cybersecurity, with the following objectives:
  • the development of wholly new methods of ensuring information system security
  • the development of a larger research community in cybersecurity
  • the education of computer system and computer science majors in cybersecurity at the undergraduate level, which would eventually improve the state of the practice in industry
Achieving these goals will require a guarantee of sustained support over a long period of time as an incentive to researchers to pursue projects in this area.

In the past few months, members of the House Science Committee have held hearings [1] on the state of research on cybersecurity and have introduced three bills that would provide initial funding for basic research through the National Science Foundation and the National Institute of Standards and Technology [2]. Although these initiatives are heartening, their full impact will not be felt for a decade or more. Historically, policy makers have not continued to support research with such long horizons. However, in the aftermath of September 11, we are hopeful that Congress is now ready to provide stable, long-term funding for this high-risk research.


References
  • GAO (General Accounting Office). 2001. Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program. Washington, D.C.: General Accounting Office.
  • Presidential Commission on Critical Infrastructure Protection. 1997. Critical Foundations: Protecting America’s Infrastructures. Washington, D.C.: U.S. Government Printing Office.


    Footnotes
    [1] For the text of written testimony by Wm. A. Wulf, see the NAE website <http://www.nae.edu> under "News & Events/National Academy of Engineering Counterterrorism Activities."
    [2] H.R. 3316, the Computer Security Enhancement and Research Act of 2001; H.R. 3394, the Cyber Security Research and Development Act; and H.R. 3400, the Networking and Information Technology Research Advancement Act.
About the Authors: Wm. A. Wulf is president of the National Academy of Engineering. Anita K. Jones is a member of the NAE and the Lawrence R. Quarles Professor of Engineering and Applied Science at the University of Virginia.