In This Issue
Fall Issue of The Bridge on Cybersecurity
September 19, 2019 Volume 49 Issue 3
This issue features selected papers intended to provide a basis for understanding the evolving nature of cybersecurity threats, for learning from past incidents and best practices, and for anticipating the engineering challenges in an increasingly connected world.

Cybersecurity: Revisiting the Definition of Insider Threat

Thursday, September 19, 2019

Authors: Nicole Lang Beebe and Frederick R. Chang

The insider threat problem is older than the cybersecurity problem itself and has proven similarly resistant to solution. Organizations work hard to establish adequate defenses against external cyber risk, but the insider threat may actually be a greater concern.

Redefining Insider Threat

As technological advances provide better tools to detect and prevent insider threat attacks, they also introduce new threats. They not only make it easier for adversaries to engage trusted human actors in a network but also introduce new, nonhuman trusted agents, such as mobile devices, internet-connected devices, and artificial intelligence (AI). Indeed, the definition and treatment of insider threat need to be expanded to include unwitting human actors and technology that act as trusted agents within networks.

Types of Human Insiders

Human insiders are individuals with legitimate access to an organization’s computers and networks and whose volitional actions put organizational data, processes, or resources at risk in an unwelcome or disruptive way, whether their intent is malicious or nonmalicious (e.g., policy violations motivated by organizational good without regard to unintended consequences that may introduce security risks) (Pfleeger et al. 2010). In recent years, the definition has expanded to include the unwitting insider (Costa et al. 2014; Guo et al. 2011; Maasberg et al. 2015; Willison and Warkentin 2013).

There are three broadly characterized types of human insider. Malicious insiders knowingly and willingly seek to harm their organization through espionage, theft of intellectual property (IP), fraud, or sabotage (Moore et al. 2013). In contrast, nonmalicious insiders may knowingly violate organizational policy but believe they are doing so for the greater good of their organization. They may, for example, circumvent security policies to get their job done more efficiently.

Insiders in the third class are often referred to as unwitting insiders. They neither intend their organization harm, nor even know they are doing anything wrong. Such insiders are dangerous to organizations (Greitzer et al. 2014), as they are susceptible to social engineering attacks (e.g., phishing), nefarious websites, or malware (Verizon 2019).

The Role of Technology

Malicious insider attacks appear to be on the rise, arguably due to, or at least enabled by, technological advancement. Exfiltrating information from an organization no longer requires surreptitiously photocopying large amounts of documents in small increments or secretly removing hard-copy data from an organization. Technological advances such as removable thumb drives, email, and cloud storage facilitate espionage and IP theft. The electronic ledger enables fraud and theft without stealing a physical dollar. The information technology (IT) infrastructure critical to organizations is a target for cybersabotage.

In criminal justice terms, technological advancement has increased the motive, opportunity, and means for the malicious insider. Insiders are (1) increasingly motivated by information-based targets, (2) equipped to commit such crimes through a robust market of point-and-click exploitation suites and ready-to-deploy malware, and (3) able to easily identify opportunities for information acquisition and financial gain in vulnerable internet-connected systems.

Technology has also resulted in an increase in external adversary use of unwitting insiders to gain a digital foothold in an organization: the adversary can then pivot, move laterally, and escalate privilege levels to obtain access and control over targeted digital asset(s). The unwitting insider has become a critical component in the system and process used by external hackers the world over. Until zero trust network model design principles (Kindervag 2010) become the norm, external hackers will continue to be enabled by the unwitting insider, by virtue of the operational trust given to the authorized user account. In fact, the most difficult insider threat to defend against is the unwitting insider (Verizon 2019).
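
To make the zero trust principle concrete, here is a minimal Python sketch of a per-request authorization check. It is an illustration, not Kindervag's specification: the roles, attributes, and policy rules are hypothetical. The point is that no request is trusted merely because it arrives from an authenticated insider account.

```python
# Minimal zero-trust authorization sketch (hypothetical policy, not from
# the article). Every request is re-evaluated on user, device, and
# resource attributes -- no standing trust for "inside" accounts.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str          # e.g., "engineer", "contractor"
    device_managed: bool    # device enrolled and posture-checked
    mfa_verified: bool      # fresh multifactor authentication
    resource_label: str     # "public", "internal", or "restricted"

def authorize(req: AccessRequest) -> bool:
    """Deny by default; grant only least-privilege access."""
    if not (req.device_managed and req.mfa_verified):
        return False                        # never trust, always verify
    if req.resource_label == "restricted":
        return req.user_role == "engineer"  # narrow allow-list
    return req.resource_label in ("public", "internal")

# A phished (unwitting-insider) session on an unmanaged device is refused
# even though the stolen credentials themselves are valid:
print(authorize(AccessRequest("engineer", False, True, "restricted")))  # False
```

Under such a model, an external hacker who compromises an unwitting insider still faces a policy check at every pivot, rather than inheriting the broad trust of the account.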

Technology Itself as Insider Threat

As discussed, technology is both a target and an enabler (its role as a defender, detecting insider activity through data loss prevention appliances and security information event management devices, is assumed). Looking ahead, however, it is poised to become a perpetrator—the next insider. This prediction is based not on machine-initiated malfeasance by some far-flung vision of sentient AI, but rather on technology as a trusted insider in a complex system: a machine is given access, and its outputs are trusted inputs to other machines (and humans) in a larger system.

Trusted Machines and Systems

The prevailing paradigm to trust the machine remains. In industrial manufacturing environments, for example, system and manufacturing process engineers design cyberphysical systems whose mechanical devices and machinery are automated by human-designed computer-based algorithms (commonly referred to as Industry 4.0). Unfortunately, these systems are designed for reliability without sufficient regard to cybersecurity (Thames and Schaefer 2017). In effect, the computer-based algorithm becomes a trusted agent within the system, as do the robotic manufacturing devices carrying out the computer-based directives. Both the software and the hardware are treated as trusted agents within the system.

In critical infrastructure contexts, research has illuminated an array of security vulnerabilities introduced by interconnecting operational technology (OT) and IT networks (Lun et al. 2016; Murray et al. 2017). These challenges are becoming evident as 50 billion Internet of Things (IoT) devices (Afshar 2017) are increasingly interconnected—to each other, to manufacturing systems, to traditional networks, and beyond. The problem is only going to worsen when fifth generation (5G) networks come online and the machine-to-machine interconnectivity promised by Industry 4.0 becomes a reality.

Last, the added layer of evolving AI in next-generation manufacturing environments (Industry 5.0), with its envisioned human-machine symbiosis, means that AI will become the next trusted agent in the system. But research on adversarial AI techniques (Carlini and Wagner 2017; Dalvi et al. 2004; Lowd and Meek 2005) and on efforts to detect and counter them (Tramèr et al. 2017; Yuan et al. 2019) makes it clear that AI cannot necessarily be trusted. Yet AI is still widely trusted.

In sum, a system-based view of the insider threat necessitates an evolved perspective of “who” the insider is. A continued human-centric approach that focuses solely on malicious actors is myopic and dangerous. The insider is the trusted actor on a network, whether that actor is human, an embedded device, the software, the network, or the AI, and its risk should be considered regardless of whether the action is volitional or nonvolitional and whether the motive is malicious or nonmalicious.

The Need for a Systems-Level Approach

The complexity and interdependency of the modern organization necessitates a systems-level view of security that is integrated into the full systems design from the beginning. Security practitioners have long argued that security must be “baked in,” not “tacked on” at the end, but unfortunately this philosophy has largely focused on device-level design without taking into account a systems engineering perspective. The security of systems of systems and across interconnected systems remains underaddressed.

Furthermore, the conceptualization of the insider has remained focused on the vulnerable but trusted human. Not only is it inadvisable to trust all insiders to act without malicious intent, but it is also essential to increase vigilance against the unwitting insider.

In advocating for an expanded view of the insider threat to include the technological insider, we are informed by important earlier work that has described similar concepts—including some of the pioneering computer security work from the 1970s that proposed the notion of subverted software or hardware as a serious security threat to computing systems of the time (Anderson 1972; Lampson 1973; Schell 1979). The problem was critically important then—and it still is. Indeed, the case can be made that in today’s advanced technological landscape, greater awareness of and attention to the problem are long overdue.

Mechanisms of Technological Insider Threats

The US National Insider Threat Policy (DNI 2012), written in response to Executive Order 13587, “Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information” (Obama 2011), sets expectations and identifies best practices for deterring, detecting, and mitigating insider threats. However, it focuses entirely on the human insider and turns to technology only to deter and detect the human insider threat. It calls for program establishment, training of program personnel, monitoring of user and network activity, and employee training and awareness. The more recent Insider Threat Guide (NITTF 2017) and maturity framework (Belk and Hix 2018) from the National Insider Threat Task Force (NITTF) continue this trend. Yet the risk posed by the technological insider is clear and present.

Personal Devices Used for Work

According to a 2018 study of 500 IT executives, CEOs, and other senior managers in the United States, commissioned by and conducted in partnership with Samsung, nearly 80 percent of employees cannot accomplish their required work effectively without a mobile device (Oxford Economics 2018). The vast majority of these devices are personally owned and managed, and 61 percent of the respondents expected their employees to be available remotely. According to the study, companies save approximately 15 percent with a bring your own device (BYOD) policy over an enterprise-owned device policy, so many companies opt for BYOD.

Poor Security Practices

With so many organizations dependent on personally owned mobile devices as part of their workflow, user smartphone security awareness and implementation are critical. Unfortunately, however, many users continue to demonstrate poor security awareness and hygiene on their mobile devices (Parker et al. 2015), putting their organizations at risk as smartphones become part of an organizational network infrastructure through email access, file activity, workflow processing, and much more. Mobile device users incur risks by

  • using poor password policies;
  • failing to use device and screen locks (and when they do, often employing weak authentication mechanisms), virus protection, encryption, remote device locator services, and/or remote wiping services; and
  • visiting unsafe websites and/or installing risky software (Mylonas et al. 2013; Parker et al. 2015; Sebescen and Vitak 2017).

Defenses

To combat poor user security on BYOD devices, companies are deploying mobile device management (MDM) software that enables software network access control, identification of outdated or nonexistent virus detection software, enforcement of passcode requirements, detection of “jailbroken” phones, remote location and wiping, application-level security through VPN tunnels, white-/blacklisting, and dynamic policy enforcement. Other organizations are opting for a containerization approach, wherein company data, communications, and work-related applications are stored in an encrypted partition/area of the device.
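
As a rough illustration of the enforcement decision such tools make, the Python sketch below models the kind of posture checks an MDM or network access control layer might run before admitting a BYOD device. The checks and field names are illustrative assumptions, not any specific MDM product's API.

```python
# Illustrative BYOD posture check (hypothetical fields, not a real MDM API).
def device_compliant(device: dict) -> bool:
    checks = [
        device.get("passcode_set", False),              # passcode policy enforced
        device.get("os_patch_level", 0) >= 2019_09,     # reject outdated OS builds
        not device.get("jailbroken", True),             # block jailbroken/rooted phones
        device.get("av_signatures_current", False),     # virus definitions up to date
        device.get("work_container_encrypted", False),  # encrypted work partition
    ]
    return all(checks)  # admit to the network only if every check passes

byod_phone = {"passcode_set": True, "os_patch_level": 2019_10,
              "jailbroken": False, "av_signatures_current": True,
              "work_container_encrypted": False}
print(device_compliant(byod_phone))  # False: work data not containerized
```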

This all sounds promising, except that 70 percent of small businesses fail to implement MDM or containerization (Parizo 2018). Without such protections, millions—and potentially billions—of smartphones are connected to organizational networks as trusted insiders, gaining access to sensitive data, propagating malware, and giving adversaries footholds in those networks.

Embedded Devices

Embedded devices affect enterprises through supply chain security risks and through interconnectivity between IoT devices and enterprise networks.

Supply Chain Vulnerabilities

Supply chain vulnerabilities jeopardize enterprise networks when individual computing components in systems are compromised (e.g., through integrated circuit chips from untrusted sources). The resulting security challenges include verifying the authenticity of materials and components, physical tracking and antitampering, securing communications, and detecting backdoors installed along the way, among others.

How trustworthy are the computing components vital to the vast array of computational devices used daily in work and personal life? This is a significant and growing concern, as evidenced by several major research funding programs by US federal agencies. In 2007 the Defense Advanced Research Projects Agency committed nearly $25 million to the Trust in Integrated Circuits Program (Adee 2007). In 2011 it invested another $49 million in the Integrity and Reliability of Integrated Circuits Program (Rawnsley 2011). The Intelligence Advanced Research Projects Agency followed suit with a Trusted Integrated Chips Program.[1] The National Institute of Standards and Technology has issued comprehensive guidance on cyber supply chain risk management (Boyens et al. 2015), and supply chain security is a major thrust of the Department of Energy’s latest $70 million funding announcement for a cybersecurity institute for energy efficient manufacturing (DOE 2019).

Interconnectivity Risks

Embedded devices are increasingly connected in every area of life, especially with the rapid adoption of IoT devices and their interconnectivity with enterprise networks. This interconnectivity may be direct (enterprise IoT devices as part of the network) or indirect (personal IoT devices connected to personal smartphones, which are then connected to enterprise networks). Gartner (2016) predicts that by 2020 over 25 percent of identified security attacks in enterprises will involve IoT devices, a prediction that seems to be borne out by the exponential increase—nearly 3700 percent—in IoT malware: Kaspersky Lab (2018) saw 3,219 pieces of IoT malware in 2016, 32,614 in 2017, and 121,588 in the first half of 2018. It appears clarion calls about the problem were well founded (e.g., Schneier 2014).
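
The growth figure follows directly from the Kaspersky Lab sample counts, as this quick calculation shows:

```python
# The "nearly 3,700 percent" increase, from the Kaspersky Lab counts above.
samples_2016, samples_h1_2018 = 3_219, 121_588
increase = (samples_h1_2018 - samples_2016) / samples_2016 * 100
print(f"{increase:.0f}% increase")  # prints: 3677% increase
```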

The Mirai botnet, for example, leveraged IoT devices such as webcams, DVRs, and routers to launch a distributed denial of service attack on internet service provider Dyn, taking down a significant portion of the internet on October 21, 2016, and preventing millions of people from accessing over 1,200 websites, including Twitter and Netflix, for nearly an entire day (Kolias et al. 2017). Home and building automation systems have been shown to have significant vulnerabilities in critical systems such as those for HVAC; fire detection, suppression, and alerts; security and access control, such as cameras/CCTVs and door locks; and lighting (Peacock and Johnstone 2014).

Network Devices

The problem of the personal device insider and the embedded device insider is about to get worse with the rollout of 5G, which will enable greater device-to-device connectivity, interconnectivity between physical and virtual devices, and greater, faster bandwidth. 5G’s enhanced encoding, air interface, channel frequencies, and antenna technologies mean that cellular devices will be able to do even more and IoT devices will be able to interconnect on a much grander scale. An individual’s smartphone will be connected to their home automation system, vehicle, gaming peers, and BYOD work environment (and everything in between).

Software (AI/Autonomy)

The last technological insider is associated with AI and autonomous systems. The scenario has been fictionally illustrated on the big screen and in novels for quite some time, but the threat is real.

Most existing machine learning (ML) classifiers are not particularly robust or immune to adversarial attacks (also known as adversarial learning or adversarial AI) (Kurakin et al. 2018). This was well known and of sufficient concern for Google Brain to organize a competition at the 2017 Conference on Neural Information Processing Systems to generate new adversarial attack samples and develop defenses to counter them.[2] Research into detecting and defending against adversarial AI attacks, however, remains limited, in both quantity and success (Athalye et al. 2018; Kurakin et al. 2018).
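
To see why robustness is so hard, consider the gradient-based style of attack this literature studies: perturb an input slightly in the direction that increases the model's loss. The NumPy sketch below applies the idea to a toy logistic classifier; the model, weights, and perturbation budget are illustrative assumptions, not drawn from any cited system.

```python
# Toy gradient-sign adversarial attack on a logistic classifier (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1   # "trained" weights of a toy model
x = 0.2 * w                      # an input the model scores toward class 1

def score(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(class = 1)

# Gradient of the logistic loss w.r.t. the input, for true label y = 1,
# is (p - 1) * w; step each feature in the sign of that gradient.
grad_x = (score(x) - 1.0) * w
epsilon = 0.5                            # per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad_x)    # small, bounded nudge per feature

print(f"clean: {score(x):.3f}  adversarial: {score(x_adv):.3f}")
# The adversarial score drops sharply toward the opposite class, even
# though no single feature moved by more than epsilon.
```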

In addition to adversarial input attacks, data poisoning (Biggio et al. 2012) and model stealing attacks (Tramèr et al. 2017) against ML systems must be addressed. Organizations that use AI, even with human-in-the-loop designs, must carefully consider the trust models associated with AI and autonomous systems and vigilantly monitor for nefarious concept drift (systems slowly moving outside of reasonable parameters).
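
One minimal form of that vigilance is distribution monitoring: compare a deployed model's recent outputs against a trusted baseline and alarm when a sliding window drifts beyond a tolerance band. The Python sketch below is a bare-bones illustration; production drift detectors are more statistical, and every threshold here is an arbitrary assumption.

```python
# Bare-bones concept-drift monitor: alarm when the sliding-window mean of
# a model's output statistic leaves a tolerance band around a baseline.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # sliding window of live outputs

    def observe(self, value: float) -> bool:
        """Record one output; return True when the drift alarm fires."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False                    # still warming up
        live_mean = sum(self.recent) / len(self.recent)
        return abs(live_mean - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline=0.50, tolerance=0.10)
# Healthy scores, then a poisoned model slowly skewing its outputs:
for score in [0.52, 0.49] * 40 + [0.78] * 100:
    if monitor.observe(score):
        print("drift alarm: model outside reasonable parameters")
        break
```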

The risk of AI or autonomous systems becoming a witting insider may be many years off, but the very near reality is that they can be manipulated nefariously, much like the nonmalicious/nonvolitional (i.e., unwitting) human insider. The quest for reliable adversarial AI detectors and defense systems must accelerate, because the widespread deployment of AI, ML-based, and autonomous systems is already underway.

Addressing the Challenge through Systems Engineering

The complexity of the insider threat problem—three classes of human insiders as well as the technological insiders discussed here—necessitates a true systems-engineered solution.

When systems are designed from a systems engineering perspective, their constituent components and interconnections are intentionally integrated so that the whole performs a collective function optimally. Security may be viewed as both a subsystem and a system design characteristic. However, all too often it is treated only as a design characteristic at the subsystem level rather than being considered in the design across the entire system of systems.

The relatively new field of security engineering has attempted to remedy this, but still falls short. Security engineering typically takes a multilevel approach that considers software security, information security, physical security, and technical surveillance separately rather than in an integrated system design. Integrated security engineering from a systems engineering perspective should consider information/data flows, subsystem interconnectivity, human-to-system and system-to-system access controls, and privilege design and protection.

Information and data flow security has largely been researched within, not between, systems. Examples include system call monitoring (Hofmeyr et al. 1998) and anomaly detection (Bhatkar et al. 2006), as well as the more recent concept of data flow assertions (Yip et al. 2009) for application layer security, wherein an explicitly identified data flow plan is checked for compliance at runtime. There has been little attention to the need to integrate security and systems engineering throughout the system development process (Mouratidis et al. 2003).
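
To convey the flavor of data flow assertions, the toy Python sketch below tags data with an explicit policy and enforces that policy at the boundary where the data exits, rather than only at its point of creation. This is an analogy, not the cited system: Yip and colleagues' approach operates inside the language runtime, and every name here is hypothetical.

```python
# Toy analogue of a data flow assertion: the data carries its policy, and
# the policy is checked at every egress point, wherever the data travels.
class TaggedStr(str):
    """A string annotated with the set of sinks allowed to receive it."""
    def __new__(cls, value, allowed_sinks):
        obj = super().__new__(cls, value)
        obj.allowed_sinks = frozenset(allowed_sinks)
        return obj

def send_to(sink: str, data) -> None:
    # Enforcement lives at the boundary, so a bug elsewhere in the
    # application cannot silently route the data to a forbidden sink.
    if isinstance(data, TaggedStr) and sink not in data.allowed_sinks:
        raise PermissionError(f"data flow assertion violated at sink {sink!r}")
    print(f"sent to {sink}")

secret = TaggedStr("db-credential", allowed_sinks={"auth_db"})
send_to("auth_db", secret)         # permitted flow
try:
    send_to("log_server", secret)  # forbidden flow
except PermissionError as err:
    print(err)
```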

Most integrated system development security has been geared toward identifying and mitigating security risks associated with OT-IT interconnectivity or between legacy and modern systems. But the problem goes much deeper. As attackers gain a foothold in an inconsequential system and then move to more consequential targets through pivoting and lateral movement, escalating privileges along the way, they are able to obtain access disturbingly easily wherever the target may be within a network.

Consider the design of an advanced manufacturing system, for example. In a well-engineered design, manufacturing subsystems are interconnected through secure versions of machine-to-machine transport protocols (e.g., MTConnect and MQTT) or secure 5G wireless links, protected with software, trusted hardware components, and intrusion detection systems. In addition:

  • Interdevice trust and dataflow are secured through trusted protocols, access management, and security analytics (one such connection is sketched after this list).
  • Predictive analytics based on the application of machine learning and artificial intelligence are routinely calibrated and verified, not blindly trusted.
  • Digital twin and digital thread technologies are employed for cyberphysical vulnerability detection.
  • Proactive mitigations, response, and resiliency are performance requirements that drive functional requirements such as isolation, containerization, moving target defense, advanced sensing, anomaly detection, self-healing, forensic data retention, rapid information sharing, and full system visualization.
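
As one concrete slice of such a design, the sketch below opens a mutually authenticated, TLS-encrypted machine-to-machine link of the kind a secured MQTT deployment might use, via the open source paho-mqtt client. The broker address, topic, certificate paths, and reading are placeholders assumed for illustration.

```python
# Sketch of a mutually authenticated MQTT link between manufacturing
# devices, using the paho-mqtt client. All names and paths are placeholders.
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="press-line-07")
client.tls_set(
    ca_certs="/etc/factory/ca.pem",      # trust anchor for the broker
    certfile="/etc/factory/device.crt",  # this device's identity (mutual TLS)
    keyfile="/etc/factory/device.key",
)
client.connect("broker.factory.example", 8883)  # TLS port, not plaintext 1883
client.publish("cell7/telemetry/temperature", payload="412.6", qos=1)
client.disconnect()
```

Mutual TLS means the broker also verifies the device's certificate, so a rogue device cannot simply join the cell and publish trusted-looking telemetry.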

Conclusion

A system is only as strong as its weakest link, and enterprises are connecting more and more weak links to their network through personal, embedded, network, and autonomous devices. Progress in addressing the insider threat has been made since the establishment of the NITTF in 2011, but unfortunately, the focus has been on the malicious human insider. The scope must be broadened to include the unwitting insider and the technological insider as well.

The wave of 5G interconnected IoT devices, interconnectivity between home and work through the notoriously undersecured smartphone in an increasingly BYOD enterprise, and the development and spread of AI-enabled software are all making it increasingly difficult to defend digital networks. These trends, along with the growing interconnectedness of critical infrastructures, are cause for great concern.

Acknowledgments

Special thanks to Steve Lipner for his constructive comments that materially improved the perspectives and contributions of this article. Additional gratitude to Cameron Fletcher and Jenni Simonsen for their careful editing to improve the readability of the ideas and thoughts conveyed herein.

References

Adee S. 2007. Contracts awarded for DARPA’s Trust in Integrated Circuits Program. IEEE Spectrum: Technology, Engineering, and Science News, Dec 6.

Afshar V. 2017. Cisco: Enterprises are leading the Internet of Things innovation. HuffPost, Aug 28.

Anderson JP. 1972. Computer Security Technology Planning Study, Vol I. Report ESD-TR-73-51. AFSC Electronic Systems Division, Hanscom AFB, Bedford MA.

Athalye A, Carlini N, Wagner D. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. Proceedings, 35th International Conf on Machine Learning, Jul 10–15, Stockholm.

Belk RW, Hix TD. 2018. Insider Threat Program: Maturity Framework. McLean VA: National Insider Threat Task Force.

Bhatkar S, Chaturvedi A, Sekar R. 2006. Dataflow anomaly detection. 2006 IEEE Symposium on Security and Privacy, May 21–24, Berkeley/Oakland.

Biggio B, Nelson B, Laskov P. 2012. Poisoning attacks against support vector machines. ArXiv:1206.6389.

Boyens J, Paulsen C, Moorthy R, Bartol N. 2015. Supply Chain Risk Management Practices for Federal Information Systems and Organizations. NIST Special Publication 800-161. Gaithersburg: National Institute of Standards and Technology.

Carlini N, Wagner D. 2017. Adversarial examples are not easily detected: Bypassing ten detection methods. Proceedings, 10th ACM Workshop on Artificial Intelligence and Security, Nov 3, Dallas.

Costa DL, Collins ML, Perl SJ, Albrethsen MJ, Silowash GJ, Spooner DL. 2014. An ontology for insider threat indicators: Development and applications. Online at https://resources.sei.cmu.edu/asset_files/ConferencePaper/2014_021_001_426817.pdf.

Dalvi N, Domingos P, Sanghai S, Verma D. 2004. Adversarial classification. Proceedings, 10th ACM SIGKDD International Conf on Knowledge Discovery and Data Mining, Aug 22–25, Seattle.

DNI [Director of National Intelligence]. 2012. National Insider Threat Policy. Online at https://www.dni.gov/files/NCSC/documents/nittf/National_Insider_Threat_Policy.pdf.

DOE [US Department of Energy]. 2019. DOE announces $70 million for Cybersecurity Institute for Energy Efficient Manufacturing. Energy.Gov, Mar 26.

Gartner. 2016. Gartner says worldwide IoT security spending to reach $348 million in 2016. Apr 25. Stamford CT.

Greitzer FL, Strozer JR, Cohen S, Moore AP, Mundie D, Cowley J. 2014. Analysis of unintentional insider threats deriving from social engineering exploits. 2014 IEEE Security and Privacy Workshops, May 17–18, San Jose.

Guo KH, Yuan Y, Archer NP, Connelly CE. 2011. Understanding nonmalicious security violations in the workplace: A composite behavior model. Journal of Management Information Systems 28(2):203–236.

Hofmeyr SA, Forrest S, Somayaji A. 1998. Intrusion detection using sequences of system calls. Journal of Computer Security 6(3):151–180.

Kaspersky Lab. 2018. New IoT-malware grew three-fold in H1 2018. Press release, Sep 18.

Kindervag J, with Balaouras S, Coit L. 2010. No more chewy centers: Introducing the zero trust model of information security. Cambridge MA: Forrester Research.

Kolias C, Kambourakis G, Stavrou A, Voas J. 2017. DDoS in the IoT: Mirai and other botnets. Computer 50(7):80–84.

Kurakin A, Goodfellow I, Bengio S, Dong Y, Liao F, Liang M, Pang T, Zhu J, Hu X, Xie C, and 13 others. 2018. Adversarial attacks and defences competition. The NIPS ’17 Competition: Building Intelligent Systems, eds Escalera S, Weimer M. Cham, Switzerland: Springer International Publishing.

Lampson B. 1973. A note on the confinement problem. Communications of the ACM 16(10):613–615.

Lowd D, Meek C. 2005. Adversarial learning. Proceedings, 11th ACM SIGKDD International Conf on Knowledge Discovery in Data Mining, Aug 21–24, Chicago.

Lun YZ, D’Innocenzo A, Malavolta I, Di Benedetto MD. 2016. Cyber-physical systems security: A systematic mapping study. ArXiv:1605.09641.

Maasberg M, Warren J, Beebe NL. 2015. The dark side of the insider: Detecting the insider threat through examination of dark triad personality traits. 48th Hawaii International Conf on System Sciences, Jan 5–8, Kauai.

Moore AP, McIntire D, Mundie D, Zubrow D. 2013. Justification of a pattern for detecting intellectual property theft by departing insiders. Technical note CMU/SEI-2013-TN-013. Pittsburgh: Carnegie Mellon University Software Engineering Institute.

Mouratidis H, Giorgini P, Manson G. 2003. Integrating security and systems engineering: Towards the modelling of secure information systems. In: Advanced Information Systems Engineering (CAiSE 2003), Lecture Notes in Computer Science 2681. Berlin: Springer.

Murray G, Johnstone MN, Valli C. 2017. The convergence of IT and OT in critical infrastructure. Proceedings, Australian Information Security Management Conf, Dec 5–6, Perth.

Mylonas A, Kastania A, Gritzalis D. 2013. Delegate the smartphone user? Security awareness in smartphone platforms. Computers and Security 34:47–66.

NITTF [National Insider Threat Task Force]. 2017. Insider Threat Guide: A Compendium of Best Practices to Accompany the National Insider Threat Minimum Standards. McLean VA.

Obama B. 2011. Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information (Executive Order 13587). Federal Register 76(198). Online at https://www.dni.gov/files/NCSC/documents/nittf/EO_13587.pdf.

Oxford Economics. 2018. Maximizing Mobile Value: Is BYOD Holding You Back? Oxford.

Parizo C. 2018. What is MDM? Does your small business need it? Samsung Business Insights, Oct 23.

Parker F, Ophoff J, Belle JV, Karia R. 2015. Security awareness and adoption of security controls by smartphone users. 2nd International Conf on Information Security and Cyber Forensics, Nov 15–17, Cape Town.

Peacock M, Johnstone MN. 2014. An analysis of security issues in building automation systems. Proceedings, 12th Australian Information Security Management Conf, Dec 1–3, Perth.

Pfleeger SL, Predd JB, Hunker J, Bulford C. 2010. Insiders behaving badly: Addressing bad actors and their actions. IEEE Transactions on Information Forensics and Security 5(1):169–179.

Rawnsley A. 2011. Can DARPA fix the cybersecurity “problem from hell”? Wired, Aug 5.

Schell RR. 1979. Computer security: The Achilles’ heel of the electronic Air Force? Naval Postgraduate School, Monterey CA.

Schneier B. 2014. The Internet of Things is wildly insecure—and often unpatchable. Wired, Jan 6.

Sebescen N, Vitak J. 2017. Securing the human: Employee security vulnerability risk in organizational settings. Journal of the Association for Information Science and Technology 68(9):2237–2247.

Thames L, Schaefer D, eds. 2017. Cybersecurity for Industry 4.0: Analysis for Design and Manufacturing. Cham: Springer.

Tramèr F, Kurakin A, Papernot N, Goodfellow I, Boneh D, McDaniel P. 2017. Ensemble adversarial training: Attacks and defenses. ArXiv:1705.07204.

Verizon. 2019. Insider Threat Report. Online at https://enterprise.verizon.com/resources/reports/insider-threat-report/.

Willison R, Warkentin M. 2013. Beyond deterrence: An expanded view of employee computer abuse. MIS Quarterly 37(1):1–20.

Yip A, Wang X, Zeldovich N, Kaashoek MF. 2009. Improving application security with data flow assertions. Proceedings, ACM SIGOPS 22nd Symposium on Operating Systems Principles, Oct 11–14, Big Sky MT.

Yuan X, He P, Zhu Q, Li X. 2019. Adversarial examples: Attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems.

 


[1]  https://www.iarpa.gov/index.php/research-programs/tic

 

[2]  https://kaggle.com/c/nips-2017-non-targeted-adversarial-attack

 

About the Authors: Nicole Beebe is professor and chair, Department of Information Systems & Cyber Security, Melvin Lachman Distinguished Professor, and director of the Cyber Center for Security & Analytics at the University of Texas at San Antonio. Frederick Chang (NAE) is professor and chair, Department of Computer Science, Bobby B. Lyle Centennial Distinguished Chair in Cyber Security, and executive director of the Darwin Deason Institute for Cyber Security at Southern Methodist University.