Fall Issue of The Bridge on Cybersecurity
September 19, 2019 | Volume 49, Issue 3

This issue features selected papers intended to provide a basis for understanding the evolving nature of cybersecurity threats, for learning from past incidents and best practices, and for anticipating the engineering challenges in an increasingly connected world.

A Framework to Understand Cybersecurity

Author: David D. Clark

Better cybersecurity is an admirable aspiration. But aspirations, as such, are not actionable. Calling for better cybersecurity does not give any hint of what actions should be taken, and by whom, to improve the situation. The goal of this paper is to break the challenge of improved cybersecurity into parts that are potentially actionable, provide a roadmap to better security, and illustrate why the challenge of better security is so vexing.

Since the internet is at the heart of cyberspace—without the internet or something similar there would just be a bunch of disconnected computers—I take an internet-centric view of security in this paper. There are, of course, other opinions about how to structure thinking about security, but I find that this framework provides good insight into which actors have to do what to improve the situation. I consider four types of internet security challenges: (a) third-party attacks on mutually trusting users trying to communicate, (b) attacks by one user on another, (c) attacks on the network itself, and (d) denial of service attacks.

Third-Party Attacks on Mutually Trusting Users

In this case, two (or more) users are attempting to have a mutually desired communication and a hostile third party attacks it. This situation affects information security, which is traditionally described as having three subcomponents: confidentiality, integrity, and availability. The goal of confidentiality is to ensure that a hostile third party cannot observe what is being communicated.
The motivation of that third party might range from demographic profiling for targeted advertising to content-based censorship to national security surveillance. A strong mechanism for confidentiality will ensure that these goals are thwarted. The goal of integrity is that a third party cannot modify what is being sent by the communicants. The goal of availability is that communication can still succeed despite attempts by a third party to disrupt it.

With respect to confidentiality and integrity, the current approach in the internet is end-to-end encryption. If a transmission is encrypted, a third party cannot observe it. And while encryption cannot prevent modification, the modification will always be detected, so the outcome may be a failure of availability, but the attacker cannot actually modify what is being sent. As tools for encryption are embedded in the most common applications (such as the Web), the fraction of encrypted traffic is increasing rapidly. This outcome is the result of efforts over many years by researchers, standards setters, and implementers.

The Limits of Encryption

Encryption is not magic. Just using it does not make all the problems go away. Its mathematics may be beautiful, but the actual encryption is embedded in a larger system that has the task of managing the encryption keys—the information that must be shared among the parties so that the encryption can work. The weak part of most encryption schemes is not the algorithm but how the keys are managed. If an attacker can steal the keys (which may just be stored on a user's personal computer—not the most secure device on the internet), or otherwise manipulate the key management system, they can completely thwart the goals of confidentiality and integrity. Availability also cannot be improved using encryption.
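The detect-and-fail behavior is worth making concrete: authenticated encryption attaches a cryptographic tag so that any modification is caught on receipt, converting a would-be integrity failure into an availability failure. A minimal sketch using only the Python standard library (an illustrative HMAC-over-ciphertext construction, not the actual TLS record protection):

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 digest length

def seal(key: bytes, ciphertext: bytes) -> bytes:
    """Append an HMAC tag so any modification of the ciphertext is detectable."""
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_sealed(key: bytes, sealed: bytes) -> bytes:
    """Return the ciphertext only if the tag verifies; raise otherwise."""
    ciphertext, tag = sealed[:-TAG_LEN], sealed[-TAG_LEN:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: message was modified")
    return ciphertext

key = b"shared-secret-key"
sealed = seal(key, b"encrypted payload bytes")
assert open_sealed(key, sealed) == b"encrypted payload bytes"

# An attacker flipping even one bit cannot forge a matching tag:
tampered = bytes([sealed[0] ^ 1]) + sealed[1:]
try:
    open_sealed(key, tampered)
except ValueError:
    pass  # the modification is detected; delivery fails (availability), not integrity
```

The attacker gets no way to alter content undetected; the worst outcome of tampering is that delivery fails.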
If an attacker is in a position to block the traffic between communicants, the only benefit of encryption is to blunt the attacker's instrument: if the attacker cannot see what is being sent, the blocking cannot be discriminating, and it becomes an all-or-nothing attack. There are some contexts in which a third party (commonly a state actor or an internet service provider, or ISP, acting as an agent of the state) has blocked all encrypted traffic to try to force the communicants to send "in the clear," so that the third party can see what is being sent.

Ignored Warnings

For most users, fears about confidentiality are real—concerns about privacy include the fear that ISPs are observing what users send and selling that information for targeted advertising. However, loss of availability is also a problem, often arising from the very key management system put in place for confidentiality and integrity. Encryption keys are usually distributed in what is called a certificate, a signed attestation that binds the name of an entity to a particular key. For security reasons, certificates are valid for a limited time and need to be renewed periodically by the owner of the key. If the owner neglects to do so (or makes other errors), software (e.g., the user's browser) will present warning messages that are both dire and sometimes obscure, saying that the user should not proceed because it is possible that the communication is under attack. Even worse, the warning may not give the user a backup method to continue working—the choice is to quit (a total failure of availability) or take the risk and proceed. Since users need to get their work done, and most of these failures are not malicious, users conclude (correctly, in my opinion) that the pragmatic option is to ignore the error, which has the effect of undoing the benefit of encryption in the case of an actual attack.
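The validity-window check behind those warnings is itself simple; the hard part is what to tell the user when it fails. A sketch of the check alone, with hypothetical field and message names (real browser validation also covers chain building, name matching, and revocation):

```python
from datetime import datetime, timedelta, timezone

def check_certificate(not_before: datetime, not_after: datetime,
                      now: datetime) -> str:
    """Mimic the validity-window check that can trigger a browser warning."""
    if now < not_before:
        return "warn: certificate not yet valid"
    if now > not_after:
        # Could be an attack; far more often it is an owner who forgot to renew.
        return "warn: certificate expired"
    if not_after - now < timedelta(days=14):
        return "ok: expires soon, owner should renew"
    return "ok"

now = datetime(2019, 9, 19, tzinfo=timezone.utc)
issued = now - timedelta(days=60)
print(check_certificate(issued, now - timedelta(days=1), now))   # expired -> warning
print(check_certificate(issued, now + timedelta(days=90), now))  # ok
```

Note that the check cannot distinguish negligence from attack, which is exactly why users facing the warning tend to click through.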
In my view, parts of the security community have somewhat ignored the goal of availability in their pursuit of confidentiality and integrity.

User Attacks on Each Other

While information security among communicating users has received a lot of attention from the research community, many of the problems that plague users do not fit here. Instead, problems arise because one of the parties to the communication is not trustworthy. Most users understand that email may be spam, may contain malware in attachments, or may pretend to be from someone other than the apparent sender (so-called phishing email falls in this category). Some users may understand that connecting to a website can lead to the download of malware onto their computer. Other applications suffer from different consequences of untrustworthy actors, including false reviews of restaurants, abusive comments on sites that allow posting, and fake news.

While attacks by one user on another can exploit many aspects of the internet architecture, I think the most common attacks exploit applications—spam and phishing via email, bogus restaurant reviews on Yelp, and so on. Unfortunately, while it would be nice if we could give the job of fixing all these attacks to one sort of actor (e.g., ISPs), we cannot do that: ISPs are not responsible for internet applications. Attacks have to be fixed one application at a time.

Built-in Risks

Why do applications allow these unwelcome behaviors? With respect to email, an application designed in the early days of the internet, the designers (including me) did not appreciate the many ways it could be abused. But with respect to more recent applications, risky modalities have sometimes been knowingly included because they provide powerful features. The Web-based attack in which malicious code is downloaded to a user's computer would not be possible if Web protocols did not include the ability to download and execute code in the first place.
When that feature was proposed, security experts were clear on the risks; the feature was added anyway because of its utility. So to make applications more secure, the challenge is not to fix implementation bugs but to change their design. That challenge is a big one, because it may require changes to a highly distributed system whose parts are under the control of many actors.

Despite these risks, people continue to use applications on the internet—the benefits outweigh the risks, and the benefits are substantial. For many people, the Web and email are essentially indispensable. So how do we deal with these risks? In the larger world, when actors interact but don't trust each other, they depend on constraints on the interaction and on trusted third parties to provide protection. When we buy or sell a house, we depend on registries of deeds, escrow agents, and the like to make sure the transaction is trustworthy, even if one of the actors is not. And the role of credit card processors in facilitating interactions between buyers and sellers (and essentially insuring the transaction by reimbursing fraud) allows buyers and sellers who have no knowledge of each other to complete a transaction in a trustworthy fashion. On the other hand, people may engage in potentially risky activities because they know and trust each other (I might lend money to a friend, but not a stranger). People tend to treat friends differently from strangers.

Assessing Identity

How might an application be designed to provide both the advantages of powerful but risky modalities and protection from attack? Applications could be designed to modify their behavior depending on the degree of trust among the participants—an email client might allow attachments from a known party to be downloaded, but block them from an unknown sender.
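That trust-dependent behavior can be sketched in a few lines. The address book and policy strings here are hypothetical, and a real client would need a far richer notion of "known party":

```python
# Hypothetical address book: senders this user has an established relationship with.
known_senders = {"alice@example.com", "bob@example.com"}

def attachment_policy(sender: str, has_attachment: bool) -> str:
    """Decide what to do with a message based on trust in its sender."""
    if not has_attachment:
        return "deliver"
    if sender in known_senders:
        return "deliver"       # known party: allow the risky modality
    return "block-attachment"  # unknown sender: strip or quarantine the attachment

print(attachment_policy("alice@example.com", True))   # deliver
print(attachment_policy("mallory@evil.test", True))   # block-attachment
```

The point of the sketch is the shape of the decision, not the mechanism: the risky feature stays available, but only inside a trust relationship.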
However, for this approach to work, the application would need to "know" who the other parties are well enough to determine the right degree of trust. It makes no sense to talk about whether someone is trustworthy if you have no idea who they are. This line of reasoning suggests that some concept of identity needs to be part of internet security.

Calling for better identity verification could be taken to mean that all actions on the internet should be traceable to known persons. Calls for an "accountable internet" seem to suggest that what is needed is universal strong identity associated with every action. I think that would be a very bad idea. On the one hand, it would preclude any sort of anonymous action, which is often desirable; on the other hand, it would not work well. How confident should we be about identity credentials issued by nation-states with interests adverse to ours? What sort of recourse would we have if one of those actors did something unwelcome?

In my view, identity cues need to be designed differently for different applications. Your bank really does need to know who you are; Yelp does not. A site offering medical advice about sensitive issues may commit not to identify you. For email, what may matter is not that the recipient is given proof of who the sender actually is, but that the sender is the same person as in prior messages. Over a sequence of messages, a receiver can build up a model of whether a sender is trustworthy (this version of identity is sometimes called continuity identity). But this approach to managing identity again pushes the responsibility for improved security back to the application layer.

Trade-offs?

Depending on the security threat, the response may differ. Consider old-fashioned physical mail. Most people would probably say that their mail should not be opened by a third party while it is in transit—mail is private.
But if the envelope contains anthrax dust, it would be very good if it were opened by someone else—properly trained and in a hazmat suit. In the internet context, it might be nice if incoming mail were inspected to see if it was spam or contained malware, but that task would be impossible if the email were encrypted, although encryption is the best way to preserve confidentiality.

Attacks on the Network

It is possible for a region of the internet itself to be attacked, by either another region or a user. As a packet carriage infrastructure, the internet is a global system, and it should not be surprising that some regions of it are malicious or have interests that are adverse to each other. Attacks often target a key service of the internet; the three most commonly targeted services are the global routing system (the Border Gateway Protocol, BGP), the domain name system (DNS), and the certificate authority (CA) system.

BGP Vulnerabilities

Using BGP, each region of the internet tells other regions which addresses are located in that region so that packets can be sent there. If a malicious region announces that it is a good route to some addresses that don't belong to it, traffic may flow to that region instead of the legitimate destination, and the information may be examined, blocked, or manipulated in the malicious region. The vulnerabilities in BGP (and the DNS) have been known for decades—the limitation in the routing protocol was first described in 1982. So why do these vulnerabilities persist? The problem is not the lack of a technical solution. One problem is that there are competing solutions, with fierce advocates for each. Solutions differ, in part, with respect to exactly what security problem is being addressed, and it has been very hard to reach agreement on what the actual risk is.
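One family of proposed fixes, origin validation based on the RPKI, gives a flavor of what a technical solution looks like: each region publishes a signed statement (a Route Origin Authorization, or ROA) of which addresses it may announce, and routers classify incoming announcements against those statements. A hedged sketch with a hypothetical ROA table (real validation uses signed RPKI objects distributed out of band, not an in-memory dict):

```python
import ipaddress

# Hypothetical ROA table: authorized prefix -> (origin AS allowed to announce it,
# maximum prefix length it may announce).
roas = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 24),
}

def validate_announcement(prefix: str, origin_as: int) -> str:
    """Classify a BGP announcement as valid, invalid, or not covered by any ROA."""
    net = ipaddress.ip_network(prefix)
    for roa_net, (auth_as, max_len) in roas.items():
        if net.subnet_of(roa_net):
            if origin_as == auth_as and net.prefixlen <= max_len:
                return "valid"
            # Wrong origin or an over-specific announcement: a likely hijack.
            return "invalid"
    return "unknown"  # no ROA covers this prefix

print(validate_announcement("203.0.113.0/24", 64500))  # valid
print(validate_announcement("203.0.113.0/24", 64666))  # invalid: rogue origin
```

Even this toy version shows the coordination problem: the classification is only useful if address holders publish ROAs and routers act on the result.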
The security improvements are costly in performance (most of them involve encrypted messages among routers, which adds substantially to the overhead of processing the messages), so there is no enthusiasm for deploying them until it is clear that they are needed. And because BGP is a global system, many actors would have to agree on the solution and deploy it. Disagreements about how to improve BGP have persisted for years. There is no organization sufficiently in charge of the global internet to dictate an answer, so a workable solution remains elusive.

DNS Vulnerabilities

The DNS maps a name (e.g., www.example.com) to an internet address. If the DNS is subverted by a malicious actor, it can be made to return the wrong address for a name, so that traffic intended for a legitimate destination is instead sent to an attacker. With the DNS, there is perhaps less debate about what the threat is, but it has still been hard for the community to make progress on a solution.

Encryption can help mitigate DNS problems. If one user has an encryption key that is specific to another, intended user, then a rogue clone cannot decrypt what the user sends, and the malicious interception will fail. (This is a successful attack on availability, but at least no other harms will occur.) So if attackers want to penetrate an encrypted communication, they must not only deflect the traffic to the rogue endpoint but also disrupt the key management system so that the user under attack has the wrong key for the corresponding party (the key for the rogue clone and not the intended endpoint).

CA System Vulnerabilities

On the internet, encryption keys are most commonly managed by the CA system, so attackers are motivated to attack this system. It has not proven resistant to attack. To oversimplify, the design of the CA system was based on a technically simplifying assumption that was flawed when deployed in the real world: that the servers that make up the CA system would be trustworthy.
In a global system, that was an unrealistic assumption. Some CAs have proven corrupt, some have been penetrated, and some (specifically CAs operated by state actors) have been known to hand out false keys as part of a state-sponsored action. If attackers can both intercept traffic and hand out false keys, they can penetrate a connection that a user thought was protected by end-to-end encryption. This state of affairs holds today.

Denial of Service Attacks

A DoS attack disables a host, application, or region of the network by flooding it with so much extraneous traffic that legitimate traffic is squeezed out. The attack requires many simultaneous sources of traffic and so is usually called a distributed denial of service (DDoS) attack. The first step in a DDoS attack is to penetrate and subvert many end-nodes on the network, installing malware that can later be commanded to undertake various tasks, from sending spam to participating in a DDoS attack. This collection of infected machines is colloquially called a botnet, and the person controlling it is a botmaster.

There are a number of ways to counter a DDoS attack (or to disrupt a botnet). Ideally, end-nodes on the internet would be secure enough to resist takeover by a botmaster. Indeed, security on traditional personal computers has greatly increased in the last decade. However, a new generation of inexpensive edge devices has come on the market—surveillance cameras, smart doorbells, and the like—many of which are designed with no thought to security. So botmasters have a new generation of easy targets to exploit. The botmaster must control his infected machines, which requires some sort of command and control system. Defenders can attempt to disrupt that system, but the internet is a general-purpose data transport network, so botmasters invent new schemes to control their botnets as the old ones are disrupted.
A DDoS attack can also be blunted by replicating the potential target system until there are enough copies that the traffic from the botnet cannot overwhelm all of them at the same time. If there are 100 copies of a service scattered across the internet and the attacker targets all of them, his botnet has to be 100 times as large as if there were only one copy. If the attacker concentrates on one copy, it can be disabled, but that leaves 99 still running. Some of the most effective countermeasures to botnets have involved identifying the botmasters and taking them to court. But this approach is often thwarted by cross-jurisdictional issues and variation in law.

Conclusions

This picture may seem bleak. But at the level I have described, the problems are specific enough that there is a chance of action. And security is indeed getting better, although attackers are getting better as well. Google has put in place a scheme to improve CA system security that does not require changes to the CA providers themselves. It is called certificate transparency, and while it has drawbacks, it is a step in the right direction. Other measures are emerging to improve email sender authenticity, and so on.

A number of lessons can be drawn for the future of cybersecurity.

First, while encryption is a powerful tool, it is not magic. For encryption to be successful, it must be embedded in a system that can manage keys (such as the CA hierarchy), and flaws are more likely to arise in that system than in the encryption algorithm itself.

Second, some attacks are complex and multistage. A successful attack today does not come from simple exploitation of a vulnerability but may require several steps, with subversion of more than one system. This reality signals the sophistication of attackers—but also the possibility of thwarting an attack at multiple stages.

Third, perfect security is not possible.
Security is a multidimensional space of sometimes conflicting threats and responses. Is encryption a good idea or a bad idea in a specific context? Should an application allow a risky operation or block it?

Fourth, the case of BGP illustrates two non-technical barriers to better security: coordination problems and negative externalities. To undertake an improvement to BGP, the operators of the more than 70,000 regions that make up the internet have to agree to make the change. But why should they? They bear the cost of implementation and operation, but do not themselves benefit: it is users who are harmed when traffic is deflected by rogue routing. This is the negative externality—one actor bears the cost while another benefits. There are many examples in the cybersecurity space of coordination problems and negative externalities.

Finally, there is nobody in charge. To a significant degree, this fact has been a strength, not a weakness. Since the internet evolves bottom-up by consensus, it is very hard for any single actor to dominate its future. But when a collective decision is needed, making that decision can be difficult and slow. Powerful actors (such as Google, with certificate transparency) are finding solutions to problems that they may hope to impose unilaterally. This outcome may be very beneficial for security, but it reflects a change in internet governance toward a more centralized character. Increasing centralization (of many aspects of the internet) may be a byproduct of a push for better security.

Acknowledgments

I greatly appreciate the very thoughtful, detailed critique by Mary Ellen Zurko, as well as the meticulous editorial efforts of Cameron Fletcher.
In the design document describing the predecessor of BGP, the author stated: "If any gateway sends an NR [network reachability] message with false information, claiming to be an appropriate first hop to a network which it in fact cannot even reach, traffic destined to that network may never be delivered. Implementers must bear this in mind" (Eric C. Rosen, Exterior Gateway Protocol (EGP), Internet Request for Comment 827, October 1982, available at https://tools.ietf.org/html/rfc827, p. 32). The potential vulnerability was not flagged as a security risk, but more as a mistake to avoid.

About the Author: David Clark (NAE) is a senior research scientist in the Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology.