Author: Christian Hamer
Universities tend to be complicated organizations, diverse and decentralized, with a number of business units. The analogy of a “miniature city” is appropriate: Universities have housing, dining, retail operations, hospitals, power and other utility plants, and even police forces with real jail cells. These operations come with their own information technology systems, risks, and compliance regimes, all of which are in addition to the student, research, and administrative data that are core to supporting the institution’s mission of teaching and research.
Universities thus have a vast array of systems and data that must be protected. In addition, faculty or administrators may be targets for cyber-attackers because of the research they do, the organizations with which they are affiliated, or unpopular opinions that they share publicly. The challenge is to apply appropriate protection to the data, systems, and people without obstructing the school’s mission.
Types of Cyberattacks and Cyberattackers
Cyberattacks can be categorized into two types: unauthorized access and business disruption. Unauthorized access may result in the theft or inappropriate exposure of confidential information. Business disruption usually involves rendering systems or data inaccessible or altering them in some way.
Cyberattackers can be categorized into four types, based on their motivations. Nation-state attackers are interested in espionage and/or disruption. Cyber-criminals are interested in financial gain. “Hacktivists” and terrorists are motivated by their cause. And hobbyists are in it for their own entertainment. While there are some groups that blur the lines a bit, especially in the nation-state and cybercriminal areas (e.g., some nation-states have pursued financial gain via cyber-attacks), most attackers fit into one of these categories.
Historically, nation-states have had access to the most sophisticated tools and techniques, followed by cybercriminals. However, advanced tools are becoming more available to the general public, making it easier for novices to carry out attacks that once required a high level of sophistication.
Elements of a University Cybersecurity Program
Universities may be subject to each type of attack and targeted by all of the groups. Furthermore, there are constantly new threats to cybersecurity: every day new vulnerabilities are discovered, as are attacks that exploit them (often with confusing or scary names, such as “dragonblood”) and new groups thought to be behind them. It can be very difficult to know what to worry about, let alone keep up with the latest attacks.
In cybersecurity efforts to defend a university’s data, systems, and people, the idea that “one size can fit all” is unrealistic. But the problem can be made manageable by breaking the complexity into simpler subcomponents (e.g., based on type of attack or attacker), using consistent, repeatable processes to understand risk, using frameworks to guide the selection of controls, focusing on the basics, and involving the whole community in the process.
First, a risk-based approach is appropriate. What does such an approach mean or look like in practice? At Harvard, we have created an annual process to gather information about risks from numerous sources across the university: the Institutional Risk Management program, a self-assessment process, our faculty and research communities, and a team that monitors and responds to cybersecurity incidents. We analyze the information and use it to evaluate and update, as appropriate, our service catalogue and our project roadmap each year.
We use an industry-recognized framework to guide our program. There are many available; the NIST Cybersecurity Framework, ISO 27000 series, and COBIT are among the most popular. Harvard uses the NIST Cybersecurity Framework (CSF), which covers five functions: identify, protect, detect, respond, recover. To help prioritize which program elements to address first, we use the Center for Internet Security’s Critical Security Controls (CIS Controls), which map to a subset of the CSF functions.
Whichever framework one chooses, it is critical to do the basics first. While that statement may sound obvious, it can be easy to get lost amid the challenges of the latest attack or product and ignore the importance of cybersecurity “blocking and tackling”: understanding the computer and software assets that must be protected, keeping them protected and patched regularly, and ensuring strong identity and access management processes.
Finally, we believe it is critical to involve and work with our community as much as possible. We view our role as enabling the mission of the university. Therefore, we need to devote time to ensure that we understand what our partners in the community are trying to accomplish, what their challenges are, and how we can help them meet their goals securely. This approach is much more effective than being perceived as a barrier, and is critical to successfully partnering with our community.
Solutions That Work
Among hundreds of products, technologies, projects, and initiatives, three stand out as the most effective in our environment: multifactor authentication, endpoint detection and response, and end user awareness. I will describe each, explain why we implemented it at Harvard, and report what results we observed. Every environment is different, of course, and these may not be appropriate in other environments, but they have proven very effective in ours.
Multifactor Authentication
What Is It? Why Use It?
Three “factors” can be used to authenticate the user of a computer or application: “something you know” (e.g., a password), “something you have” (e.g., a phone or token), and “something you are” (e.g., a fingerprint). Multifactor authentication (MFA) requires two or more of these factors for a user to log in to a computer or application. Multiple versions of the same factor (e.g., security questions in addition to a password—both “something you know”) are not, by definition, MFA.
Over the past couple of decades, time-based one-time password tokens that generate a different number every minute were the most well-known implementation of “something you have,” used with a password (“something you know”) to provide a second authentication factor. Today, mobile devices are often used instead of a token, through a smartphone application or even a text message.
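The token mechanism described above can be sketched in a few lines. The following is a minimal illustration of the time-based one-time password (TOTP) algorithm standardized in RFC 6238, not any particular vendor's product: the server and the user's device share a secret and each derives a short code from the current 30-second time window.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1).

    Both sides derive the code from the shared secret and the current
    `step`-second window, so possession of the enrolled device (the
    "something you have") is what produces a valid code.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32); at t=59s the
# time counter is 1, matching the RFC 4226 test vector for count 1.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082
```

Because the code changes every window, a stolen code is useless moments later, and a password thief who lacks the device cannot log in.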
At Harvard, we implemented MFA because we had a problem with accounts compromised by phishing, password reuse, and brute force (a trial-and-error method of password guessing). While there are means to address each of these issues, we didn’t think any of them would be as effective or efficient as adding a second factor to passwords for authentication.
How to Get User Buy-In
Implementation of MFA was a big change for our community, so we strongly emphasized the user experience. This focus started with the name: We called the new approach two-step verification because that’s the term Google had used and it was familiar to at least some in our community. There was no reason to introduce another technical term when we could use a familiar one.
The next challenge was with onboarding. During user testing with the native experience offered by our vendor, it became clear that it was not simple enough and would not allow us to deploy at scale without a negative impact on the community. So our Identity and Access Management team developed a simpler and easier interface that they integrated with the vendor's product. Once we had simplified the experience as much as possible, we invested a lot of time and effort in clear and complete documentation in a variety of formats, from checklists to videos.
Our last challenge was to make sure that our community understood both why the new approach was important and that it would be easy for them to use. Most of the community wasn’t aware of the attacks we were experiencing and, if they were aware, didn’t think they could or would be a target. We changed that by relating anonymized stories of real incidents that had happened to others on campus, and made it clear what might be at stake for them personally. Then we talked about how easy and unobtrusive the solution was, addressing some of the perceived pain points we had heard about.
Finally, we brought the solution to the community in the form of clinics across the university to ensure that those who wanted help with onboarding had very easy and convenient access to it. In addition to the clinics, we provided documentation and support. We onboarded some 65,000 members of our community in about seven weeks.
Our success was largely due to our focus on the user experience. This project demonstrates that people generally want to do the secure thing and the best way to leverage that intent is to make it as easy as possible for them to do so.
The problem we set out to solve was that of compromised accounts. The results have been exactly as we hoped: In the nearly three years since we rolled out MFA, the number of protected accounts that have been compromised is almost zero.
Endpoint Detection and Response
Like most organizations, until a few years ago we relied on various configurations of antivirus and antimalware to protect our systems and to detect intrusions. While these technologies were never perfect, they seemed adequate for detecting simple attacks. However, most of them are based on the concept of "enumerating badness": they use cryptographic signatures of known malicious software as a means of detection. Such detection requires that someone has already identified the software as malicious. It also means that the databases of known malicious software will continue to grow at an accelerating rate.
This approach provides attackers with at least two opportunities. The first is to slightly modify their malicious software so that its signatures go from “known malicious” to “unknown.” In fact, there is a category of software (called “packers”) that obfuscates malicious software so that it will not be detected by signature-based tools.
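A toy sketch makes the limitation concrete. The sample content below is invented for illustration, and real antivirus databases hold millions of entries, but the matching logic is essentially this exact lookup, which is why even a one-byte modification evades it:

```python
import hashlib

# Hypothetical "known bad" database: hashes of malware samples that
# someone has already identified and catalogued.
KNOWN_BAD = {hashlib.sha256(b"EVIL-PAYLOAD-v1").hexdigest()}

def is_known_malicious(file_bytes):
    """Signature matching: flag a file only if its exact hash is listed."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

print(is_known_malicious(b"EVIL-PAYLOAD-v1"))            # True: exact match
print(is_known_malicious(b"EVIL-PAYLOAD-v1" + b"\x00"))  # False: one extra byte, new hash
```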
The second opportunity for attackers is to “live off the land”: to use the tools and accounts of legitimate system administrators (used to operate and maintain systems) to attack the systems. This is a practice favored by more sophisticated attackers and against which traditional antivirus and antimalware are completely ineffective.
Our Experience with EDR
We confronted a series of attacks that used both of these techniques and quickly realized that we lacked visibility into when or where the attacks were happening. At the same time, new endpoint detection and response (EDR) products were evolving. These products analyze the behavior of software, not just its cryptographic signature, and look for anomalies and patterns that are indicative of malicious activity. That makes them equally adept at catching both malicious software that has been slightly modified and attackers using built-in tools for system administration in malicious ways.
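The behavioral idea can be illustrated with a deliberately simplified rule (the process names and the single rule are hypothetical; real EDR products correlate many signals): legitimate binaries appearing in an anomalous combination, such as a word processor spawning an administrative shell, are flagged even though no file matches a malware signature.

```python
# Hypothetical endpoint telemetry: (parent process, child process) pairs.
OFFICE_APPS = {"winword.exe", "excel.exe", "outlook.exe"}
ADMIN_TOOLS = {"powershell.exe", "cmd.exe", "wmic.exe"}

def flag_anomalies(events):
    """Flag anomalous combinations of legitimate binaries: a
    document-handling app should not normally spawn an admin tool."""
    return [(p, c) for p, c in events if p in OFFICE_APPS and c in ADMIN_TOOLS]

events = [
    ("explorer.exe", "winword.exe"),    # normal: user opens Word
    ("winword.exe", "powershell.exe"),  # suspicious: document spawns a shell
]
print(flag_anomalies(events))  # [('winword.exe', 'powershell.exe')]
```

Note that both flagged binaries are legitimate system components; it is the behavior, not any signature, that triggers detection, which is what catches "living off the land."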
When we deployed EDR on our computer systems, it did two things. First, and most importantly, it gave us the visibility we needed into the kinds of malicious activity described above. Second, it highlighted how poor a job the more traditional protective tools were doing. While EDR doesn’t always stop attacks on its own, it provides critical early warning of malicious activity. This allows us to take steps to stop the activity before the attackers can move to their ultimate targets. An important feature is that EDR has no impact on our users unless there is a real security issue.
While EDR may be a point-in-time technology that is replaced in the future, it has earned its place on this list by helping us detect and quickly respond to sophisticated attacks significantly better than any other technology we have tried and without any negative impact on our community.
End User Awareness
End user awareness has long been a controversial topic in cybersecurity circles. People often say that “users are the weakest link,” and it is true that many attacks focus on end users (“phishing” may be the most well known of them). But we have found it valuable to treat our community as a resource and a partner rather than consider them “the weakest link.”
From User Awareness to Action
We know that our community members want to do the right thing and just need the proper resources to know what the right thing is and how to do it. Our goal is to identify a small set of behaviors that we can influence to significantly increase security outcomes. We work on influencing behavior because awareness without action does no good. We called our campaign “Small Actions, Big Difference” to emphasize the simple things people can do to make themselves a lot more secure.
We adopted the Fogg Behavior Model as a framework for the campaign. According to this model, behavior change requires motivation, ability, and prompts. While the prompts are important—users have to recognize the situation—we have found that the motivation and ability are the components on which we need to focus the most.
To ensure that we provide users appropriate motivation to take a desired action, such as applying updates when they are available, we highlight the benefits to the user. Emphasis on the personal benefits of a particular action has been the most effective approach for us.
Realistic Requests for Action
The “ability” component of influencing behavior addresses two questions, an obvious one and a more subtle one that can be overlooked. The obvious question is “Does the user have the skills and abilities to take the desired action and have we made the process easy enough that s/he can follow it?” This again speaks to the criticality of good user experience in fostering security.
The more subtle question is “Is what we are asking realistic?” For example, the advice “don’t click on links in emails or open attachments” may be realistic for some, depending on their job function, but it is not realistic for most people. We always want to make sure we do not provide advice that sounds simple but is effectively impossible to follow.
Asking the community to “click wisely” is an important part of our awareness campaign. Although it will never be 100 percent effective, it is possible to enlist the people who do recognize phishing emails to protect those who do not. When even one person reports these emails to us, we can take technical steps to protect the entire community. Now thousands of messages are forwarded to us each month and fed into an automated processing system to prioritize the most important ones for immediate action.
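As an illustration of what such automated triage might look like (the scoring heuristic, field names, and keyword list here are invented for the sketch, not Harvard's actual system), reported messages can be scored so the riskiest reach an analyst first:

```python
# Invented heuristic: rank user-reported emails for analyst attention.
URGENT_WORDS = ("password", "verify", "urgent", "invoice", "suspended")

def score(report):
    """Keyword hits in the subject line, plus a capped bonus when many
    community members report the same message."""
    hits = sum(word in report["subject"].lower() for word in URGENT_WORDS)
    return hits + min(report["reporters"], 5)

def triage(reports):
    """Highest-scoring (most suspicious) reports first."""
    return sorted(reports, key=score, reverse=True)

reports = [
    {"subject": "Lunch menu for Friday", "reporters": 1},
    {"subject": "URGENT: verify your password now", "reporters": 12},
]
print(triage(reports)[0]["subject"])  # the credential-phish lookalike comes first
```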
Talent and Collaboration
Recruitment and retention of talented people are a challenge in cybersecurity. One of the most effective ways to meet that challenge is to look for people who have diverse backgrounds and experiences and can help find novel solutions to problems.
Because universities, given their scope and scale, present some of the most interesting problems in cybersecurity and have a huge spectrum of systems connected to their networks, they offer a variety of challenges for those interested in the field. And it's in keeping with the mission of higher education to invest in training and development. The diversity of challenges and focus on professional development are among the benefits that universities can offer information security professionals.
Universities also value collaboration and information sharing. In my experience, those in higher education are willing to take time to help others learn, within the team, across departments, or across universities. Higher education has a formal Information Sharing and Analysis Center (ISAC) that is used to exchange information and best practices. The spirit of collaboration and sharing is one of the things that I and my colleagues most treasure about higher education.
What’s on the Horizon?
I mentioned above that multifactor authentication was an effective means to prevent certain unauthorized access attacks. However, there are no "magic silver bullets" in information security, so it is not a surprise to find that this technology has flaws that are starting to be exploited.
MFA implementations that use a phone call or text message are vulnerable to attacks in which criminals trick a mobile phone service provider into moving a number to a different phone. Attacks like these have been responsible for the loss of large sums of cryptocurrency (Chavez-Dreyfuss 2019). Less commonly, attackers have also set up fake portals to capture and "relay" MFA responses, and such attacks are likely to become more frequent. As these attacks evolve, technologies will change to phase out phone calls and text messages in favor of more secure means of authentication.
Business disruption attacks typically destroy data and systems, as happened in the well-publicized Sony Pictures Entertainment attack of 2014 (Peterson 2014). Business disruption is also the effect of ransomware, wherein an attacker encrypts files and promises to restore them in exchange for payment, often in cryptocurrency that is difficult to trace.
The next iteration of these attacks is likely to be those that alter data in subtle but important ways. Recently, researchers used artificial intelligence (AI) to add the appearance of cancerous regions to 3D CT scans that fooled medical professionals (Mirsky et al. 2019). Such attacks will probably become more common, so the industry needs to increase efforts to protect and prove data and system integrity.
Vendors in the cybersecurity space have been promoting AI and machine learning as the solutions to today’s and tomorrow’s problems. There are some practical applications of machine learning to analyze large datasets and produce actionable information, and these technologies promise to make cybersecurity analysts much more productive. It is natural, then, to expect that attackers may use the same techniques to make their tools and attempts more effective.
The challenge of information security in higher education is to protect the critical data, systems, and people in support of—and without obstructing—the mission of teaching and research. To confront that challenge, the information security team at Harvard tries to simplify the complexity, use a repeatable risk-based approach to identify and mitigate risks, and use a framework to organize and guide controls, focusing on the basics and involving the community in the process. One of the most important contributing factors to our most successful projects has been ensuring a good user experience.
With the pace of change in cybersecurity continually increasing, and new threats always on the horizon, the path to success is in simplifying the landscape, making good security easier, and identifying and executing on the most important priorities.
Chavez-Dreyfuss G. 2019. US investor awarded $75 million in cryptocurrency crime case. Reuters, May 10.
Mirsky Y, Mahler T, Shelef I, Elovici Y. 2019. CT-GAN: Malicious tampering of 3D medical imagery using deep learning. 28th USENIX Security Symposium, Aug 14–16, Santa Clara. Online at https://arxiv.org/pdf/1901.03597.pdf.
Peterson A. 2014. The Sony Pictures hack, explained. Washington Post, Dec 18.