Author: Josephine Wolff
One of the recurring themes in discussions of cybersecurity is how rapidly the landscape of threats is evolving and how difficult it is for defenders to keep pace with ever-changing attack vectors and vulnerabilities. While it is true that security threats and controls change over time as technologies continue to develop, that idea sometimes leads to the dismissal of older security breaches as irrelevant to defending against tomorrow’s threats. But there is considerable value in reexamining past attacks for the patterns that remain relatively static over time and for clues about how those patterns can be used to defend more effectively against future attacks.
Past incidents are particularly helpful in efforts to understand the strengths and weaknesses of the many recommended cybersecurity best practices. There are recommendations issued by government agencies, industry consortiums, international standard-setting organizations, private companies, nonprofits, and individual security experts. With the wealth of recommended best practices, many organizations are unsure which policies and security controls to implement.
This article looks at selected cybersecurity incidents through the lens of the perpetrators’ motivations—financial theft, espionage, and public humiliation of the victims—to show how those motivations shape the trajectory of the incidents, creating repeated attack patterns over time. These patterns can help guide defensive interventions aimed at preventing similar incidents. They also shed light on why existing cybersecurity best practices have often failed to effectively clarify organizations’ essential security responsibilities and controls.
Although recommended practices provide a useful starting point for organizations trying to implement stronger security protections, they are designed to target very specific stages of breaches and draw on the capabilities of only a single stakeholder involved in the incident—the organization directly targeted. This narrow focus limits the effectiveness of the best practices because they take advantage of only one small part of the larger defensive ecosystem and the range of stakeholders involved online.
The recommended practices can also be extremely challenging for organizations to adopt and implement. For instance, the most recent, fourth revision of the NIST 800-53 catalogue includes 115 low-impact security controls, 159 moderate-impact controls, and 170 high-impact controls—more than 440 options. For a small or medium-sized enterprise looking for guidance, this can be an oppressive and impractical quantity of controls.
Furthermore, the lack of empirical evidence that these best practices actually reduce the risk of cybersecurity incidents makes it difficult for organizations to decide which ones to use.
What Drives Cyberattacks?
There are many ways to categorize cybersecurity incidents—according to their targets, their perpetrators, or the technical exploits the perpetrators use, for instance. Each classification scheme is useful in different contexts, such as trying to understand which types of organizations are most likely to be targeted, or which criminal groups require the most attention from law enforcement, or which technical vulnerabilities are most frequently exploited. For identifying attack patterns, or repeated sequences of behavior that persist over many years, it is especially useful to consider perpetrators’ motives because those goals shape the final, and most essential, stages of their attacks.
Types of Motivation
The three classes of attacker motivation discussed here—financial gain, espionage, and public humiliation—are not comprehensive or exclusive. For instance, the Stuxnet worm was used for none of these purposes but instead to cause physical damage to an Iranian uranium enrichment plant (Zetter 2014). This goal of physical sabotage is yet another potential attacker motivation—one that may become increasingly common as more physical devices are connected to the Internet of Things and it becomes possible to control them through computer networks. However, at present, there are too few examples of such incidents to allow for extensive analysis.
Profit, espionage, and public shaming are not mutually exclusive motives. For instance, in the cases of both the 2014 Sony Pictures data breach and the 2015 Ashley Madison breach, the theft of data was motivated by the perpetrators’ desire to publicly shame the victims, but the resulting dumps of sensitive information were then used by others to commit financially motivated crimes involving identity theft and financial fraud. Similarly, the GameoverZeus botnet operated by Evgeniy Bogachev’s organization in Russia was used by Bogachev and his associates to steal millions of dollars through the Cryptolocker ransomware, but some of the information was provided to the Russian government to aid espionage efforts (Schwirtz and Goldstein 2017).
Motivation and Methods
Despite the potential for overlap, goals still provide a useful framework for considering how best to defend against future attacks because the final attack stages are often not replaceable for the perpetrators and are therefore repeated year after year, breach after breach, even as computing technology evolves.
Many of the earlier, technical stages of the breaches that enable an intruder’s initial access to target systems (e.g., phishing, exploiting software vulnerabilities, or stealing credentials) are easily replaced: If one avenue of access is cut off by effective security controls, the perpetrators simply switch to another. But the final stages of their attacks—when they carry out their ultimate aims—are not so easily substituted to achieve the same goal. That makes these stages the most crucial for defenders to cut off and also the most static over time.
Financially Motivated Incidents
Most reported cybersecurity incidents are motivated by money (Verizon 2019). The specific techniques whereby cybercriminals steal data they can use to make money have certainly evolved over time, but the desire to make money and the mechanisms for converting stolen data into financial gain have changed much more slowly. One of the earliest and most common forms of these crimes is the theft of large volumes of payment card information that can be sold on the black market and used for large-scale financial fraud.
The TJX Breach (2005)
The 2005 breach of TJX, Inc. by Albert Gonzalez and a few coconspirators resulted in the theft of 45.6 million payment card numbers, making it one of the largest such breaches at the time of its discovery.
Gonzalez and his team had first identified the firm as a potential target while “wardriving” on a Miami highway using a long radio antenna to detect insecure wireless networks. They found a Marshalls store, owned by TJX, that had a wireless network encrypted using older, less secure WEP encryption. They parked in the store’s lot and proceeded to collect packets off the wireless network until they had sufficient information to reverse-engineer the encryption key and retrieve in plaintext the necessary credentials to connect to the company’s main servers in Framingham, MA (Verini 2010).
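The WEP weakness that Gonzalez’s team exploited ultimately comes down to keystream reuse: WEP’s 24-bit initialization vectors repeat quickly on a busy network, and any two frames encrypted under the same RC4 keystream leak the XOR of their plaintexts. The toy sketch below (plain Python, with a made-up stand-in keystream rather than real RC4, and invented messages) illustrates why that reuse is fatal in principle; real-world WEP cracking builds additional statistical attacks on top of this flaw.

```python
# Toy demonstration of the keystream-reuse flaw underlying WEP's weakness.
# The "keystream" and plaintexts here are fabricated for illustration.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(16))      # stand-in for one RC4 keystream (key + reused IV)
p1 = b"attack at dawn!!"
p2 = b"retreat by dusk!"

c1 = xor_bytes(p1, keystream)     # two frames encrypted under the same keystream
c2 = xor_bytes(p2, keystream)

# A passive eavesdropper XORs the two ciphertexts: the keystream cancels
# out, exposing the XOR of the plaintexts without the key ever being known.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)

# With one known or guessable plaintext, the other is recovered directly.
assert xor_bytes(xor_bytes(c1, c2), p1) == p2
```

Collecting enough packets, as Gonzalez’s group did from the parking lot, guarantees IV collisions and known-plaintext material, which is what made reverse-engineering the key feasible.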
Several payment card networks and issuing banks that bore the brunt of covering the costs of the resulting payment card fraud sued TJX for its negligence in failing to secure the card data more effectively. The lawsuits focused on the company’s failure to encrypt its stores’ wireless networks using WPA encryption.
But TJX’s failure to use up-to-date WiFi encryption was only one missed opportunity in a long chain of events that led up to the execution of the breach—from packet sniffing, credential theft, and remote logins on TJX servers to exfiltration of stored payment card data and the manufacture and sale of fraudulent payment cards. The insecure wireless network was no more responsible for the entirety of the breach than any of the other decisions by TJX and other stakeholders that enabled the later stages of the breach. And yet, because WPA encryption was specified in the Payment Card Industry Data Security Standard (PCI DSS) at the time, both the media and the courts viewed that particular decision as, in many ways, the crucial mistake that demonstrated the company’s negligence (Wolff 2018).
The recommended best practices that organizations are routinely faulted (in the aftermath of a breach) for not implementing often do not address the stages of an incident that are most susceptible to effective defensive interventions. In this case, the emphasis on wireless network encryption in the legal proceedings that followed the breach shifted focus away from the incident’s monetization stages in which Gonzalez and his coconspirators sold the stolen data and repatriated their profits to the United States—the process by which law enforcement officials were ultimately able to identify and arrest them (Verini 2010).
The SCDOR Breach (2012)
As explained above, often it is not clear which practices an organization should adopt. In the wake of the 2012 breach of millions of tax records from the South Carolina Department of Revenue (SCDOR), critics questioned whether the SCDOR had adhered to required standards and recommended best practices. Then–South Carolina governor Nikki Haley blamed the IRS for failing to instruct the state to encrypt its tax records. The IRS, in turn, invoked NIST as the responsible entity for setting technical security standards for government agencies. Meanwhile, the South Carolina state legislature expressed outrage that the SCDOR had not required multifactor authentication, which might have defended against the phishing attack that initiated the breach (Mandiant 2012).
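The multifactor control the legislature invoked is worth making concrete. One common form is the time-based one-time password (TOTP, RFC 6238), in which a short-lived code derived from a shared secret and the current time must accompany the password, so a phished static password alone does not grant access. The following is a minimal standard-library sketch; the parameters are the RFC defaults, not anything the SCDOR used or would have used.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                    # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the 30-second epoch."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, t = 59 s):
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Because the code changes every 30 seconds, a credential harvested by a phishing page is stale almost immediately—though, as the GameoverZeus discussion below shows, determined attackers have found ways around even this class of control.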
The uncertainty about which stakeholders were responsible for determining security best practices, and which best practices were essential for protecting sensitive information, made it difficult to assign responsibility and liability for the incident.
The GameoverZeus Botnet and Cryptolocker Ransomware (2013–2014)
The GameoverZeus botnet that distributed the Cryptolocker ransomware program in 2013 and 2014 was able to bypass many forms of multifactor authentication and monetize previously worthless data by selling it back to its original owners for a cryptocurrency ransom. The cryptocurrency payments allowed the perpetrators to evade the centralized financial intermediaries, such as banks and payment card networks, that had powerful defensive forces to combat payment card fraud and identity theft. And because Cryptolocker targeted individuals’ computers, it distributed the costs to thousands of disparate users rather than concentrating them in the fraud ledgers of credit card companies and issuing banks. This cost distribution diminished the incentives for any intermediaries to intervene and eliminated the class action lawsuits that had driven liability regimes and the associated need for clear recommended best practices to distinguish between negligent and unlucky breach victims.
Liability and Cyberinsurance Challenges
The challenge with relying on catalogues or lists of best practices to determine even partial liability for security incidents is that it narrows the scope of organizations’ security responsibilities to relatively confined, non-exhaustive recommendations that are often based more on generally accepted consensus than on empirical evidence. In fact, not all recommended security controls provide measurable improvements in system security, and lists of best practices can sometimes be used to propagate controls (e.g., requirements to regularly change passwords) that do more harm than good in the long term (Wolff 2016).
The lack of clear correlation between security best practices and security outcomes is a significant challenge for the rapidly growing number of insurance firms offering cyberinsurance policies. These firms typically do not have the in-house security expertise that would allow them to audit potential customers’ security postures or identify the safeguards those customers should take as part of their coverage, and they cannot rely on existing best practices as guidance. This has led many insurers to partner with security firms to provide customer assessments and security services (Wolff and Lehr 2018).
The monetization stages of financially motivated cybercrimes have traditionally been the most vulnerable to defensive intervention because they relied on a small set of intermediaries, such as well-known online black markets for stolen cards and fraudulent card manufacturers. For instance, law enforcement authorities were able to take down Gonzalez’s operation through his fence for stolen card data, a man named Maksym Yastremskiy. Later models of financial cybercrime shifted to enable other types of monetization that relied less on fencing operations like Yastremskiy’s.
Espionage-Motivated Incidents
Defending against political or economic cyberespionage typically requires restricting the exfiltration of data and segmenting sensitive portions of networks to contain intrusion attempts and subsequent access to high-value information. But adhering to these best practices is not always sufficient, especially when the espionage efforts are state-sponsored and carried out by actors with considerable expertise and resources.
The following cases highlight both the vulnerability of a company that rigorously followed recommended security standards and the absence of serious consequences for organizations that failed to do so. These examples do little to encourage others to pay more attention to these recommendations.
The Case of DigiNotar (2011)
The Dutch certificate authority DigiNotar was compromised in 2011 as part of what was later hypothesized to be an espionage operation by the Iranian government (Hoogstraaten et al. 2012). An intruder penetrated DigiNotar’s multiple lines of defense to generate rogue certificates for Google’s domain, and these certificates were then likely used to capture credentials for thousands of Iranian Google accounts (Hoogstraaten et al. 2012).
The DigiNotar compromise is an example of how many stages can be involved in completing espionage attacks. Not only did the perpetrators have to compromise a certificate authority, they then had to redirect users to fraudulent web pages, probably through Domain Name System (DNS) cache poisoning, to use the rogue certificates they had generated (Hoogstraaten et al. 2012).
Besides highlighting the numerous opportunities for defensive intervention, the DigiNotar attack illustrates the limitations of adhering to industry best practices. DigiNotar had a rigorously segmented network structure to separate the high-security certificate-issuing portion of the network from the company’s outward-facing web presence. Every request for a new certificate had to be approved by at least two company employees, and the servers used to generate certificates were stored in a secure room that could be accessed only by using biometric hand recognition, a key card, and a PIN (Hoogstraaten et al. 2012).
DigiNotar’s security setup reads like an excerpt from a manual on how to design a secure system, and yet the intruders were able to find a way to tunnel into the most secure portion of the network and generate rogue certificates.
The OPM Data Breach (2015)
In other cases of espionage, organizations demonstrated much less rigorous adherence to basic tenets of security best practices. A series of espionage efforts led by the Chinese People’s Liberation Army (PLA) Unit 61398 and directed at more than 100 private companies and government agencies, primarily based in the United States, was described in a 2013 report by security firm Mandiant. Some of the incidents were confirmed in an indictment of several PLA officers the following year; like the Mandiant report, the indictment pointed to phishing emails and other social engineering campaigns as the primary means of access for the state-sponsored economic espionage efforts (Mandiant 2013).
In 2015, in another espionage operation attributed to the Chinese government, the US Office of Personnel Management (OPM) was breached and information about 21.5 million people who had worked for the federal government or received security clearances was stolen (Chaffetz et al. 2016). Congressional hearings highlighted OPM’s lack of encryption, multifactor authentication, and intrusion-monitoring technology, but in a move reminiscent of the SCDOR breach aftermath, OPM officials deflected blame to the Department of Homeland Security, NIST, the Office of Management and Budget, and other agencies that they felt had hindered their ability to implement security upgrades or establish clear expectations for information security (Chaffetz et al. 2016).
Public Shaming Incidents
Security incidents aimed at shaming the victims often involve publicly denouncing or embarrassing the target before as large an audience as possible. This broadcasting stage can be an especially tricky element of cyberattacks to regulate because such regulations may resemble speech or press restrictions that run counter to many countries’ fundamental principles.
The Sony Pictures Breach (2014)
When North Korea breached Sony Pictures in 2014 and released large volumes of stolen emails and internal records, several people, including lawyers hired by Sony, suggested that it was, or should be, illegal for reporters to write about the stolen information because it supported the mission of the attackers and infringed on Sony’s intellectual property (Boies 2014). While it was true that the widespread media attention to the breach aided North Korea’s supposed mission of humiliating Sony Pictures and undermining the company’s business, regulating the media and online intermediaries that helped distribute the stolen information would have been a problematic solution to a cyberattack that aimed to spread discord and wreak havoc on its target.
Spamhaus DDoS (2013)
By contrast, the massive distributed denial of service (DDoS) attacks against Spamhaus in 2013, by a group frustrated by the organization’s widely used spam blocklists, did not rely on media coverage or popular attention. Rather, they depended on DNS operators that had failed to restrict their servers to resolving queries only from computers in their administrative domains. These open resolvers enabled the Spamhaus attackers to send spoofed DNS queries that appeared to come from Spamhaus servers, causing the open DNS servers to send large DNS responses that bombarded the antispam organization’s servers and forced them offline (Prince 2013a).
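The economics of this reflection technique follow directly from the size asymmetry between a DNS query and its response: each open resolver turns a small spoofed request into a much larger reply aimed at the victim. A back-of-the-envelope sketch makes the point; the byte sizes and bandwidth figure below are rough illustrative assumptions, not measurements from the Spamhaus incident.

```python
# Back-of-the-envelope model of DNS reflection/amplification.
# All sizes are assumed values for illustration only.

def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Bandwidth multiplier gained per spoofed query."""
    return response_bytes / query_bytes

query_size = 64        # assumed size of a small spoofed DNS query
response_size = 3000   # assumed size of a large DNS response

factor = amplification_factor(query_size, response_size)
print(f"each spoofed query amplified ~{factor:.0f}x")

# The resolver sends its large response to the forged source address
# (the victim), so modest attacker bandwidth becomes a flood.
attacker_mbps = 10
print(f"{attacker_mbps} Mbit/s of queries -> ~{attacker_mbps * factor:.0f} Mbit/s at the victim")
```

Because the multiplier applies at every open resolver the attacker can reach, closing resolvers to outside queries (or filtering spoofed source addresses at the network edge) removes the leverage entirely—which is why the practice discussed next matters.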
Not operating open resolvers was a known security best practice for DNS operators, but there were inadequate incentives for many of them to adhere to the practice. At the time of the Spamhaus DDoS attacks, the Open Resolver Project estimated there were 21.7 million open resolvers online (Prince 2013b). But it was Spamhaus, not the open resolver operators, that bore the brunt of the DDoS attacks, pointing again to the challenge of incentivizing organizations to implement security standards when that investment does not directly benefit them.
Lessons and Takeaways
Several recurring themes and lessons emerge from the high-profile cybersecurity incidents of the past decade and the failure of best practice recommendations to prevent them.
First, it is often extremely difficult for organizations to navigate the many sets of security best practices both because there are so many and because so few data exist to indicate which are actually effective at preventing or mitigating bad outcomes.
Second, the uncertainty about which best practices to follow creates a loophole of sorts for breached organizations, such as the SCDOR and OPM, to blame other agencies for failing to tell them exactly which security controls they should have been using.
Third, the TJX and SCDOR breaches show that regulators and courts are often reluctant to close the loophole by clarifying the security expectations for firms because they do not want to be responsible for dictating those expectations. Although this approach makes clear that there is no set combination of controls that obviates an organization’s liability, it makes it more difficult for organizations to determine which recommendations they should follow.
Fourth, examining the controls that courts, legislative hearings, and media reports most often emphasize in cases such as those discussed in this article reveals a tendency to highlight the absence of controls that might have blocked early, technical stages of the intrusions rather than more systemic interventions that would have involved third-party intermediaries, law enforcement, and/or regulators. This is in part because those controls can be most easily implemented by the breached party rather than requiring cooperation from other stakeholders. But that is exactly what makes them less effective and more likely to address stages of attacks that perpetrators can easily substitute with new attack vectors.
Improving cybersecurity best practices requires defining clearer guidelines with less onerous implementation, collecting better data on their efficacy, and fighting against the narrative that any given security incident is the result of one particular missing control. More than that, though, it requires a more comprehensive understanding of the entire security ecosystem that these best practices aim to strengthen, so that recommendations for merchants, nonprofits, and government agencies are not developed separately from those for DNS operators, payment processors, or browser manufacturers.
So long as best practices are limited in scope to individual organizations and do not include mechanisms to enable cooperation and support from all stakeholders, they will continue to serve only a narrow function and defend against only a small subset of the stages of cyberattacks.
Boies D. 2014. RE: Your possession of privileged and/or confidential information stolen from Sony Pictures Entertainment. Deadline, Dec 15.
Chaffetz J, Meadows M, Hurd W. 2016. The OPM Data Breach: How the Government Jeopardized Our National Security for More than a Generation. House Committee on Oversight and Government Reform, 114th Congress. Washington.
Hoogstraaten H, Prins R, Niggebrugge D, Heppener D, Groenewegen F, Wettinck J, Strooy K, Arends P, Pols P, Koupprie R, and 3 others. 2012. Black Tulip report of the investigation into the DigiNotar Certificate Authority breach. Delft: Fox-IT BV.
Mandiant. 2012. South Carolina Department of Revenue: Public Incident Response Report. Alexandria VA.
Mandiant. 2013. APT1: Exposing One of China’s Cyber Espionage Units. Alexandria VA.
Prince M. 2013a. The DDoS that knocked Spamhaus offline (and how we mitigated it). Cloudflare Blog, March 20.
Prince M. 2013b. The DDoS that almost broke the internet. Cloudflare Blog, March 27.
Schwirtz M, Goldstein J. 2017. Russian espionage piggybacks on a cybercriminal’s hacking. New York Times, March 12.
Verini J. 2010. The great cyberheist. New York Times, Nov 10.
Verizon. 2019. Data Breach Investigations Report. New York.
Wolff J. 2016. Perverse effects in defense of computer systems: When more is less. Journal of Management Information Systems 33(2):597–620.
Wolff J. 2018. You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches. Cambridge: MIT Press.
Wolff J, Lehr W. 2018. Roles for policy-makers in emerging cyber insurance industry partnerships. TPRC46: Research Conference on Communications, Information and Internet Policy, Sep 21, Washington.
Zetter K. 2014. Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon. New York: Crown Publishing Group.
 Available from the National Institute of Standards and Technology National Vulnerability Database, online at https://nvd.nist.gov/800-53.