Author: Kevin Fu
Cybersecurity shortfalls in medical devices trace to decisions made during early engineering and design. The industry is now paying the cybersecurity “technical debt” for this shortsightedness.
Computer networking, wireless communication, wireless power, the Internet, and a host of other engineering innovations, combined with electronic health records and the reengineering of clinical workflow, have enabled innovative therapeutics and diagnostics—but at the cost of information security and privacy, or cybersecurity.
Complexity breeds technological insecurity. In the last few decades, medical devices have evolved from simple analog components to complex digital systems with an amalgam of software, circuits, and advanced power sources that are much more difficult to validate and verify. Whereas a classic stethoscope depended on well-understood analog components, modern devices such as linear accelerators, pacemakers, drug infusion pumps, and patient monitors depend critically on computer technology.
When a medical device is compromised, its behavior becomes unpredictable: the device may deliver incorrect diagnostic information to clinicians, be unavailable to deliver patient care during repair, and, in extreme cases, cause patient harm. Lack of security may even introduce an unconventional dimension of risk to safety and effectiveness: intentional harm.
Much cybersecurity risk is attributable to legacy medical devices dependent on Windows XP and other unmaintainable operating systems that no longer receive security patches (figure 1). But proprietary embedded systems are no less vulnerable.
The root cause of many cybersecurity problems is complexity introduced at the design stage, not hackers. Complexity increases the attack surface, the points of unintended access to a computer system. By uncovering the implications of flaws baked in by early engineering choices, hackers are merely the “collectors” and messengers of this cybersecurity technical debt.
Brief History of Medical Device Security
Research: Case Studies
There’s a rich history of efforts to ensure trustworthy medical device software (Fu 2011). The classic and eye-opening Therac-25 study showed how a linear accelerator caused a number of injuries and deaths from massive radiation overdoses between 1985 and 1987 (Leveson and Turner 1993). While project mismanagement, complacency, and overconfidence in unrealistic probabilities played a role, the most interesting root cause was the adoption of poorly designed software instead of well-understood analog components to safely control the radiation delivery.
More recently, research on ways to improve medical device security led to an interdisciplinary paper on the security of an implantable cardiac defibrillator (Halperin et al. 2008). The study took several years because of the interdisciplinary nature of the problem and clinical challenges such as attending live surgery to fully understand the threat model. In the paper my colleagues and I demonstrated that it was possible to wirelessly disable the device’s life-saving shocks and induce ventricular fibrillation (a deadly heart rhythm).1
We articulated the engineering principle that a secure medical device should never be able to induce the very hazardous state (in this instance, ventricular fibrillation) it is designed to prevent. The paper also includes a number of defensive approaches primarily centered on the concept of zero-power security, requiring the provision of wireless power to ensure that the implant can protect the availability of its precious battery power. And, importantly, our research showed that, despite the security risks, patients whose conditions warrant a wireless medical device are far safer accepting the device than going without it.
The Role of Hackers
A few years later, the hacker community began to replicate academic experiments on medical devices. Barnaby Jack famously reproduced our pacemaker/defibrillator experiment in a manner more appealing to the general public. Although formal peer-reviewed proceedings were rare, hackers gave captivating talks and pointed demonstrations that drew attention to the subject. They went on to find new security flaws in devices such as insulin pumps and infusion pumps (e.g., demonstrations by Billy Rios, Barnaby Jack, Jay Radcliffe, Scott Erven, and others), uncovering vulnerabilities that have led to unprecedented FDA actions.
National Facilities for Medical Device Security
To promote deeper intellectual inquiry into medical device security, I created the Open Medical Device Research Library (OMDRL) to collect and share hard-to-find implants with security researchers. Unfortunately, the demand did not justify the high cost of biohazard decontamination, and computer science staff were uncomfortable with managing biohazard facilities, so the library was short lived. However, researchers from MIT did engage with the OMDRL to invent a novel radio frequency (RF) jamming protocol that blocks legacy implanted cardiac devices from transmitting insecure “plaintext” messages and overlays an encrypted version (Gollakota et al. 2011).
Device manufacturers have difficulty testing beyond the component level because of (1) the diverse array of configurations and interoperating medical devices and (2) uncertain risk to patients during live testing. For this reason, the OMDRL is adapting from a library to a testbed at the University of Michigan.
The notional Bring Your Own Hospital (BYOH) testbed, as part of the Archimedes Center for Medical Device Security (secure-medicine.org), will enable security testing and experimentation on systems of medical devices with automated and highly configurable threat simulators to better prepare manufacturers and hospitals to cope with the changing threat landscape. Efforts will include control studies to compare the effectiveness of different hospital information security policies, and emergency preparedness “fire drills” to train manufacturers and clinicians on how to respond to cyberattacks and malware infections that affect the timely delivery of care. The first experiment involves mapping out the infection vectors created by reuse of USB drives to understand how fast infections can spread in clinical facilities and determine the most effective ways to control an outbreak.
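The dynamics of such a USB-borne outbreak can be illustrated with a toy stochastic model. This sketch is purely illustrative and does not describe the testbed’s actual experiment; every parameter (device count, drive count, transfer probability) is an assumption chosen for demonstration:

```python
import random

def simulate_usb_outbreak(n_devices=50, n_drives=10, p_transfer=0.5,
                          steps=200, seed=1):
    """Toy model of malware spread via reused USB drives.

    Each step, a randomly chosen drive is plugged into a randomly
    chosen device; infection transfers in either direction with
    probability p_transfer. There is no disinfection, so infection
    counts only grow. Returns the infected-device count after each
    plug-in event.
    """
    rng = random.Random(seed)
    device_infected = [False] * n_devices
    drive_infected = [False] * n_drives
    device_infected[0] = True  # patient zero: one compromised workstation
    history = []
    for _ in range(steps):
        dev = rng.randrange(n_devices)
        drv = rng.randrange(n_drives)
        if device_infected[dev] and rng.random() < p_transfer:
            drive_infected[drv] = True   # device infects the drive
        if drive_infected[drv] and rng.random() < p_transfer:
            device_infected[dev] = True  # drive infects the device
        history.append(sum(device_infected))
    return history

history = simulate_usb_outbreak()
print(f"infected devices after {len(history)} plug-ins: {history[-1]}")
```

Varying `p_transfer` or adding a disinfection step to a model like this is one way to compare, in silico, the effect of different hygiene policies (such as mandatory USB scanning kiosks) before running controlled studies on real clinical equipment.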
Recent Federal and Other Measures
In July 2015, a few days after the National Highway Traffic Safety Administration issued the first recall of an automobile solely because of a cybersecurity risk (Kessler 2015), Hospira became the first medical device company to receive an FDA (2015) safety communication because of a cybersecurity risk. Although not legally a recall, the FDA notice had a similar effect: the agency strongly discouraged healthcare facilities from purchasing the company’s infusion pump because of a cybersecurity vulnerability that could let hackers induce over- or underinfusion of drugs and thus potentially cause patient harm.
In addition, FDA (2014) premarket guidance on cybersecurity calls for a technical cybersecurity risk analysis in all applications for premarket clearance to sell medical devices in the United States. And the FDA is expected to release a postmarket guidance document on coordinated vulnerability disclosure, incident reporting, and continuous surveillance of emerging cybersecurity risks. The preparation of this document is more complicated because it involves a number of unusual bedfellows, ranging from the vulnerability research community to the Department of Homeland Security to the US Computer Emergency Readiness Team.
Complementing these federal measures, the Association for the Advancement of Medical Instrumentation (AAMI) sets the major standards for medical device safety. The AAMI medical device security working group consists of both healthcare providers and medical device engineers who have written a technical information report (TIR 57) (currently under ballot) that provides much-needed advice to engineers on how to think methodically about cybersecurity across the product development lifecycle of a medical device.
Analog Cybersecurity
Many of the vulnerabilities and solutions (figure 2) for medical device security involve analog cybersecurity. Cybersecurity risks that begin in the analog world can infect the digital world by exploiting semipermeable digital abstractions. Analog cybersecurity is the focus of research on side channels and fault injection attacks that transcend traditional boundaries of computation. Security problems tend to occur in boundary conditions where different abstractions meet. In particular, the analog-digital abstraction poses subtle security weaknesses for cyberphysical systems such as medical devices and the Internet of Things.
Researchers have demonstrated how an adversary can violate widely held computing abstractions as fundamental as the value of a bit. For example, ionizing radiation and computing faults cause smartcards and processors to divulge cryptographic secrets (Boneh et al. 2001; Pellegrini et al. 2010). Intentional electromagnetic interference causes sensors to deliver incorrect digital values to closed-loop feedback systems such as pacemakers (Kune et al. 2013). Acoustic and mechanical vibrations cause drones to malfunction by hitting the resonant frequency of a microelectromechanical systems (MEMS) gyroscope (Son et al. 2015). The row hammer attack2 enables malicious flipping of bits in adjacent physical rows of computer memory (Kim et al. 2014). And the GSMem paper (Guri et al. 2015) shows how computer memory can emit RF signals in cellular frequencies.
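The EMI attack on analog sensors exploits a simple physical mechanism: nonlinear components in a sensor front end act as unintentional demodulators, so amplitude-modulated RF interference reappears as a low-frequency waveform on the sensor output. The following numerical sketch illustrates that mechanism only; it is not the experimental setup of Kune et al., and the rectifier model and all parameters are illustrative assumptions:

```python
import math

def emi_injection_sketch(carrier_hz=100e3, baseband_hz=5.0,
                         sample_rate=1e6, duration=1.0):
    """Model an amplitude-modulated interference carrier hitting a
    sensor front end. The nonlinearity is modeled as a half-wave
    rectifier; a one-pole low-pass filter (standing in for the
    sensor's limited bandwidth) strips the carrier but keeps the
    envelope, leaving a spurious baseband signal on the output.
    """
    n = int(sample_rate * duration)
    recovered = []
    lp = 0.0
    alpha = 0.001  # low-pass coefficient: cutoff ~160 Hz at 1 MS/s
    for i in range(n):
        t = i / sample_rate
        # attacker's signal: 100 kHz carrier, envelope varying at 5 Hz
        envelope = 1.0 + 0.8 * math.sin(2 * math.pi * baseband_hz * t)
        interference = envelope * math.sin(2 * math.pi * carrier_hz * t)
        rectified = max(interference, 0.0)   # nonlinearity demodulates AM
        lp += alpha * (rectified - lp)       # filtering keeps the envelope
        recovered.append(lp)
    return recovered

out = emi_injection_sketch(duration=0.2)
```

Although the carrier is far above the sensor’s bandwidth, the filtered output still swings at the attacker-chosen 5 Hz envelope rate, which is exactly the kind of forged “physiological” signal that a closed-loop device could act on. Defenses described in the literature include adaptive filtering and cardiac-probe consistency checks rather than filtering alone.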
Such analog cybersecurity weaknesses will likely remain a significant challenge for automated systems such as medical devices. Traditional research and education in cybersecurity focus on software flaws and solutions. I believe that threats to the digital abstraction represent the next frontier of security engineering. The Internet of Things and cyberphysical systems such as medical devices, automobiles, and aircraft have awakened interest in analog threats affecting digital vulnerabilities that have physical consequences.
Medical devices help patients lead more normal and healthy lives. The innovation of such devices results from a complex interplay of medicine, computer engineering, computer science, human factors, and other disciplines, and this complexity breeds design-induced cybersecurity risks.
The greatest near-term risk is old malware that accidentally finds its way into clinical systems running outdated operating systems, damaging the integrity and availability of medical devices and in turn interrupting clinical workflow and patient care. While targeted malware will likely become a problem in the future, the immediate challenge is basic cybersecurity hygiene, such as hospitals spreading malware via reused USB drives or vendors accidentally infecting their own products.
To enhance the trustworthiness of emerging medical devices and patients’ confidence in them, manufacturers need to address cybersecurity risks during the initial engineering and design, and maintain postmarket surveillance throughout the product lifecycle.
This work is supported in part by the Archimedes Center for Medical Device Security and the National Science Foundation under the Trustworthy Health and Wellness project (THAW.org; award CNS-1330142). Any opinions, findings, and conclusions expressed in this material are those of the author and do not necessarily reflect the views of the NSF. For the full-length technical report, contact the author (email@example.com).
References
Boneh D, DeMillo RA, Lipton RJ. 2001. On the importance of eliminating errors in cryptographic computations. Journal of Cryptology 14:101–119.
Clark SS, Ransford B, Rahmati A, Guineau S, Sorber J, Xu W, Fu K. 2013. WattsUpDoc: Power side channels to nonintrusively discover untargeted malware on embedded medical devices. Proceedings of the USENIX Workshop on Health Information Technologies, August 12, Washington.
FDA [US Food and Drug Administration]. 2014. Content of Premarket Submissions for Management of Cybersecurity in Medical Devices: Guidance for Industry and Food and Drug Administration Staff. Silver Spring, MD: Center for Devices and Radiological Health and Center for Biologics Evaluation and Research. Available at www.fda.gov/downloads/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm356190.pdf.
FDA. 2015. Cybersecurity vulnerabilities of Hospira Symbiq infusion system. FDA Safety Communication, July 31. Silver Spring, MD. Available at www.fda.gov/MedicalDevices/Safety/AlertsandNotices/ucm456815.htm.
Fu K. 2011. Appendix D: Trustworthy medical device software. In: Public Health Effectiveness of the FDA 510(k) Clearance Process: Measuring Postmarket Performance and Other Select Topics: Workshop Report. Washington: National Academies Press.
Gollakota S, Hassanieh H, Ransford B, Katabi D, Fu K. 2011. They can hear your heartbeats: Non-invasive security for implanted medical devices. ACM SIGCOMM Computer Communication Review 41(4):2–13.
Guri M, Kachlon A, Hasson O, Kedma G, Mirsky Y, Elovici Y. 2015. GSMem: Data exfiltration from air-gapped computers over GSM frequencies. Proceedings of the 24th USENIX Security Symposium, August 12–14, Washington. pp. 849–864.
Halperin D, Heydt-Benjamin TS, Ransford B, Clark SS, Defend B, Morgan W, Fu K, Kohno T, Maisel WH. 2008. Pacemakers and implantable cardiac defibrillators: Software radio attacks and zero-power defenses. Proceedings of the 29th Annual IEEE Symposium on Security and Privacy, May 18–22, Oakland. pp. 129–142.
Hanna S, Rolles R, Molina-Markham A, Poosankam P, Fu K, Song D. 2011. Take two software updates and see me in the morning: The case for software security evaluations of medical devices. Proceedings of 2nd USENIX Conference on Health Security and Privacy (Health-Sec), August. Berkeley: USENIX Association.
Kessler AM. 2015. Fiat Chrysler issues recall over hacking. New York Times, July 24. Available at www.nytimes.com/2015/07/25/business/fiat-chrysler-recalls-1-4-million-vehicles-to-fix-hacking-issue.html.
Kim Y, Daly R, Kim J, Fallin C, Lee JH, Lee D, Wilkerson C, Lai K, Mutlu O. 2014. Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors. Proceedings of the 41st Annual International Symposium on Computer Architecture (ISCA ’14), June 14–18, Minneapolis. pp. 361–372. Washington: IEEE Press.
Kune DF, Backes J, Clark SS, Kramer DB, Reynolds MR, Fu K, Kim Y, Xu W. 2013. Ghost talk: Mitigating EMI signal injection attacks against analog sensors. Proceedings of the 34th Annual IEEE Symposium on Security and Privacy, May. Washington: IEEE Computer Society. pp. 145–159.
Leveson N, Turner C. 1993. An investigation of the Therac-25 accidents. IEEE Computer 26(7):18–41.
Pellegrini A, Bertacco V, Austin T. 2010. Fault-based attack of RSA authentication. Proceedings of the Conference on Design, Automation, and Test in Europe (DATE), March 8–12, Dresden. pp. 855–860.
Son Y, Shin H, Kim D, Park Y, Noh J, Choi K, Choi J, Kim Y. 2015. Rocking drones with intentional sound noise on gyroscopic sensors. Proceedings of the 24th USENIX Security Symposium, August 12–14, Washington. pp. 881–896.
1 The device has not been sold for several years, and the manufacturer established a rigorous training program for security engineering.
2 Project Zero: Exploiting the DRAM rowhammer bug to gain kernel privileges, March 9, 2015. Available at http://googleprojectzero.blogspot.com/2015/03/exploiting-dram-rowhammer-bug-to-gain.html.