
Some Steps toward Autonomy in Aeronautics

Thursday, June 25, 2020

Authors: John-Paul B. Clarke and Claire J. Tomlin

In aeronautics, the word “autonomy” engenders visions of a future in which aircraft are able and allowed to operate in civil airspace independent of human control or supervision—without pilots or ground-based operators/supervisors, interacting with air traffic controllers (which may themselves be machines) and other pilots just as if a human pilot were on board and in command (NRC 2014).

Background

Broadly speaking, an autonomous machine system has a degree of self-government and self-directed behavior, enabled by artificial intelligence (AI) capabilities that allow it to respond to situations that were not preprogrammed or anticipated in the design (i.e., decision-based responses) (Masiello 2013). All aircraft have systems with varying levels of automation—unless they fail or malfunction, these systems can and do operate without continuous human oversight. Increasingly autonomous systems could allow both crewed and uncrewed aircraft to operate for extended periods without the need for humans to monitor, supervise, or directly intervene in the operation of those systems in real or near-real time (NRC 2014).

Reducing and eventually eliminating the need for continuous human cognizance and control would enable reductions in crew complements and the associated direct operating cost of passenger and cargo aircraft, and enable both crewed and uncrewed aircraft to take on new roles not previously possible, practical, or cost-effective.

Overview of Autonomy in Aircraft Systems

The rich history of autonomy in aviation is punctuated by several key advances, typically developed and tested in military aircraft and then adopted in civilian aircraft (figure 1). The first steps in autonomy date back more than 100 years, to wing levelers in early biplanes that used hydraulically operated ailerons, elevators, and rudders.

Figure 1 

While civilian and military aviation have much in common historically with respect to autonomy, the two have diverged somewhat in recent decades with the increased use of remotely piloted drones as platforms for surveillance and/or attack. (It is important to note, however, that no automated system would ever “make the decision” to kill—there must be a human in the decision loop.)

Today’s fast computation and communication, and advances in AI and machine learning, augur a new era of autonomy. However, understanding how to develop and deploy these tools for safety-critical systems is still in its infancy. Effective partnerships of AI and machine learning methods with pilots, other aircraft operators, and air traffic controllers are crucial for their safe deployment in aircraft. Just as the era of modern control was heavily influenced by aeronautics in the last century, aeronautic autonomy in this century could turn out to be the impetus for the design of safe learning and productive human-AI partnerships.

Research Needs

If aircraft are to be operated unattended, increasingly autonomous systems must be able to perform critical functions currently done by humans, such as “detect and avoid,” performance monitoring, subsystem anomaly and failure detection, and contingency decision making. This requires an understanding of how humans perform their roles in the present system and how these roles may be transferred to an increasingly autonomous system, particularly in high-risk situations, as well as a system architecture that supports intermittent human cognizance and control.

Further, because aircraft operate in uncertain environments and increasingly autonomous systems may have subsystems and components with whole or partially stochastic behavior, mathematical models are needed to describe

  • adaptive/nondeterministic processes as applied to humans and machines;
  • performance criteria, such as stability, robustness, and resilience, for the analysis and synthesis of adaptive/nondeterministic behaviors; and
  • methods beyond input-output testing for characterizing the behavior of increasingly autonomous systems.

Safety in Learning-Enabled Perception

One of the challenges in autonomous aviation is automated perception-based control—interpreting sensed information and providing an appropriate, safe control action. Current safety analysis tools enable autonomous systems to “reason” about safety given full information about the environment (Ames et al. 2017; Kousik et al. 2018; Mitchell et al. 2005). However, these tools do not scale well to scenarios in which the environment is being sensed in real time, such as during autonomous navigation tasks.

Moreover, perception itself is a challenge: the state of the art in object detection uses learning-based components, such as neural networks, yet such systems can produce solutions that are brittle to faults and cyberattack. Tools that analyze and quantify the correctness of such learning-based components and that provide a safe control action are needed. In the last two years, the need for such tools has been recognized with DARPA’s Assured Autonomy program[1] and NASA’s University Leadership Initiative in Assured Autonomy for Aviation Transformation (Balin 2016).

Neural Networks for Aircraft Perception

Consider a neural network in a future aircraft flight management system. Connected to a set of cameras on the aircraft, it takes sequences of images as its input, and its output is information critical for planning and control, such as the location and type of other vehicles in the vicinity of the aircraft. Correct identification and localization are paramount for safety, as is prediction of the behavior of the other vehicles.

Recent progress has been made in assessing the correctness and robustness of neural network decision making through input-output verification and adversarial analysis, testing input data for typicality against a learned distribution of the training data, and designing control policies that provide safe control actions despite uncertainty in the vehicle’s environment.
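
Of these approaches, the typicality check is perhaps the simplest to sketch. The fragment below is a toy illustration, not a specific method from the cited literature: it scores an input by the distance to its k-th nearest training example and flags inputs whose score exceeds a threshold set from the training data itself. The features, threshold rule, and parameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
train = rng.normal(0.0, 1.0, (500, 16))   # stand-in feature vectors

def knn_score(x, data, k=5):
    """Distance from x to its k-th nearest neighbor in the data."""
    d = np.linalg.norm(data - x, axis=1)
    return np.sort(d)[k]   # index k, so a training point's 0-distance self-match is skipped

scores = np.array([knn_score(x, train) for x in train])
threshold = np.quantile(scores, 0.99)     # accept 99% of training inputs

typical = rng.normal(0.0, 1.0, 16)        # resembles the training data
atypical = rng.normal(5.0, 1.0, 16)       # far from the training distribution
print(knn_score(typical, train) <= threshold)    # expected: True
print(knn_score(atypical, train) <= threshold)   # expected: False
```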

Input-Output Verification

Input-output verification of neural networks has for the most part been applied to classification problems, such as image classification. It starts with the definition of an allowable input set (e.g., all small perturbations of a given image) and then seeks to determine whether there exists an allowable input to the neural network for which the output is incorrect—that is, whether the neural network makes a mistake in classification.

Figure 2 

This process can be seen in figure 2, in which an image of the handwritten digit “2” from the MNIST database of handwritten digits[2] is input to a network that attempts to correctly classify it as the digit “2.” The input-output verification problem asks whether there exists an image within a distance ε in pixel space of this image (region B in figure 2) that is classified incorrectly. This problem is extremely challenging because of the high-dimensional input space as well as the large number of nodes and connections in the neural network (Katz et al. 2017).

There have been exciting recent developments in approximate solutions, though—ones that generate fast overapproximations of the set of possible neural network outputs and ask if such outputs would be misclassified (Bunel et al. 2018; Rubies-Royo et al. 2020; Singh et al. 2018; Wang et al. 2018; Zhang et al. 2018). Figure 2 illustrates the idea: if a quickly computed overapproximation of the neural network’s response to B (denoted f(B) in figure 2) lies entirely within the set of outputs that are classified correctly as “2,” then all images in the region B are classified correctly as “2.” Yet even with these advances, input-output verification is limited to small images.
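
To make the overapproximation idea concrete, the sketch below propagates interval bounds layer by layer through a small fully connected ReLU network. This is interval bound propagation, a simple relaxation in the spirit of the fast methods cited above (though far cruder than those tools); the network weights and sizes are random stand-ins.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certified(weights, biases, image, eps, label):
    """True if every image within L-infinity distance eps of `image`
    provably gets the given label (sound, but conservative)."""
    lo = np.clip(image - eps, 0.0, 1.0)
    hi = np.clip(image + eps, 0.0, 1.0)
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_affine(W, b, lo, hi)
        if i < len(weights) - 1:                  # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    rivals = np.delete(hi, label)                 # best case of every rival class
    return lo[label] > rivals.max()               # worst-case logit still wins?

# Toy usage with random weights; a real check would load trained parameters,
# so False here simply reflects an untrained, uncertifiable network.
rng = np.random.default_rng(0)
sizes = [784, 64, 10]
weights = [rng.normal(0.0, 0.1, (m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
image = rng.uniform(0.0, 1.0, 784)
print(certified(weights, biases, image, eps=0.01, label=2))
```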

New ways of thinking about the verification problem are needed—perhaps techniques that use image structure from cameras on the aircraft to reduce the system dimension.

The Need for Redundancy

Finally, despite the advances in learning-based perception, assured autonomy should continue to rely on the use of parallel and redundant sensor suites. For example, we have proposed a novel, real-time safety analysis method based on Hamilton-Jacobi reachability that provides strong safety guarantees despite environment uncertainty (Bajcsy et al. 2019). This safety method uses real-time measurements from a second sensor, which complements the vision sensor, to compute and update a safe operating envelope.

While promising, scaling such methods for use in the real-time environment of aviation is still a challenge, and techniques for fast approximation and safe contingency analysis will be crucial.
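
The flavor of such a method can be conveyed with a deliberately simplified stand-in, not the method of Bajcsy et al.: a 1-D vertical-descent model whose reachability analysis has a closed form, with an independent second altitude sensor fused conservatively into the envelope check. All names and numbers are illustrative.

```python
# Toy stand-in: a vehicle descending toward the ground with altitude h [m],
# vertical speed v [m/s] (negative while descending), and maximum upward
# acceleration A_MAX. For this 1-D double integrator the reachability
# result has a closed form: the ground becomes unavoidable once
# h <= v**2 / (2 * A_MAX) while descending.
A_MAX = 3.0   # assumed climb authority, m/s^2

def in_safe_envelope(h, v, margin=1.0):
    """True if full braking can still arrest the descent above h = 0."""
    if v >= 0.0:
        return h > margin
    return h - v**2 / (2.0 * A_MAX) > margin

def filtered_command(h_camera, h_radar, v, nominal_accel):
    """Fuse vision with an independent radar altimeter (take the more
    pessimistic reading) and override the nominal command with full
    climb authority at the envelope boundary."""
    h = min(h_camera, h_radar)        # conservative fusion of two sensors
    return nominal_accel if in_safe_envelope(h, v) else A_MAX

# The radar reading contradicts the camera, so the filter overrides.
print(filtered_command(h_camera=12.0, h_radar=4.0, v=-6.0, nominal_accel=-1.0))
```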

Human-Machine Teaming in Aeronautic Autonomy

Interactions between humans and machines are increasingly moving away from a “human use of machines” paradigm to the establishment of “relationships” between humans and autonomous agents. Optimal human-machine teaming (HMT), in which the roles of and relationships between humans and machines are optimized based on their individual and collective capabilities, is critical to achieving safe and efficient (economic and environmental) operations (Clarke 2019).

Enhanced Pilot Support

HMT is a critical enabler of single-pilot operations (SPO; also known as reduced-crew operations, RCO), an active area of study by NASA (Bilimoria et al. 2014). RCO is increasingly considered a viable response to the rising costs associated with commercial air transport.

Recent advances in communications, navigation, surveillance/air traffic management, and avionics technologies have enabled higher levels of automation, which might be leveraged to mitigate the elevated workload of a single pilot. For example, pilots could be provided with higher levels of decision support for the execution of complex tasks, or with a highly autonomous agent that acts as a partner, akin to a copilot, rather than merely executing preprogrammed functions.

Recent RCO research has focused on ground-based support, loss of the redundancy of a copilot, pilot health monitoring, aircraft systems monitoring, takeover and handback of control, recovery from an airside loss of control, pilot boredom, and certification (Schmid and Stanton 2019). Ultimately, in-flight mitigation strategies will have to be reconsidered and specifically tailored for RCO/SPO with respect to advanced automation tools and their reliability (Schmid and Stanton 2020). For example, a cognitive pilot-aircraft interface concept has been proposed with adaptive knowledge-based system functionalities to assist single pilots in the accomplishment of mission- and safety-critical tasks in commercial transport aircraft (Liu et al. 2016).

The Loyal Wingman Program

One very notable effort in military HMT is the loyal wingman program: a manned-unmanned teaming (MUM-T) concept in which an unmanned aerial vehicle (UAV) flies in formation with a piloted vehicle but may be separately tasked to break and rejoin formation as the mission dictates, under the tactical command of the piloted lead (Humphreys et al. 2015). This is the latest evolution of MUM-T efforts in the US Air Force (James and Welsh 2014; USAF Chief Scientist 2010; Winnefeld and Kendall 2011, 2013).

A significant portion of Air Force Research Laboratory research on autonomy for the loyal wingman has focused on solving optimal control problems in near-real time (Humphreys et al. 2015), using techniques such as direct orthogonal collocation (Humphreys et al. 2017). Internationally, similar loyal wingman efforts are underway in Australia (Pittaway 2019), China (Mizokami 2019), and France (Fiorenza 2019).
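
To illustrate the transcription step in such methods, the sketch below uses trapezoidal (rather than orthogonal) collocation and a toy double-integrator rejoin maneuver in place of real aircraft dynamics, converting a small optimal control problem into a nonlinear program that an off-the-shelf solver can handle.

```python
import numpy as np
from scipy.optimize import minimize

# Steer a double integrator from (x, v) = (0, 0) to (10, 0) in T seconds
# with minimum control effort, transcribed into a nonlinear program.
N, T = 20, 5.0
h = T / N

def unpack(z):
    return z[:N+1], z[N+1:2*(N+1)], z[2*(N+1):]   # states x, v; control u

def effort(z):
    _, _, u = unpack(z)
    return h * np.sum(u**2)

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])   # x' = v (trapezoid)
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])   # v' = u (trapezoid)
    bc = [x[0], v[0], x[-1] - 10.0, v[-1]]             # boundary conditions
    return np.concatenate([dx, dv, bc])

sol = minimize(effort, np.zeros(3 * (N + 1)), method="SLSQP",
               constraints={"type": "eq", "fun": defects})
x_opt, _, u_opt = unpack(sol.x)
print(sol.success, round(x_opt[-1], 3))   # expect: True 10.0
```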

Importance of Human Understanding of Autonomous Systems

The US Air Force is working to develop systems where humans have sufficient trust in intelligent machines that the latter are allowed to perform more of the tasks required to operate an aircraft (Overholt and Kearns 2014). Recent research results indicate that

  • the better humans understand the decision-making process of a machine, the more they trust it (Knight 2017; Lyons et al. 2016; Wang et al. 2015);
  • training can improve human-machine trust by helping human operators understand the limits of the capabilities of an autonomous machine (Lyons et al. 2016);
  • HMT is most effective when tasking is broken down hierarchically (Daniels 2017); and
  • emphasis should be on determining which input modalities are optimal for specific types of tasks or application environments, rather than spending resources to implement multiple modalities across all tasks (Zacharias 2019).

HMT and Urban Air Mobility

HMT is also critical in the burgeoning domain of urban air mobility (UAM). The economics of air taxi operations do not support the employment of human operators who are trained—and thus compensated—at the same rates as commercial pilots. Thus, there will be a greater role for autonomy than is currently the case in commercial or general aviation.

Furthermore, because rapid decision making and action are critical in flight vehicles that operate in close proximity to structures and people as well as in an environment with rapidly changing winds, passenger and cargo aircraft will have to be designed for both autonomous operations (i.e., without continuous human participation or supervision) and autonomous decision making (i.e., with the ability to determine what to do next in an unscripted situation without needing to consult a human). The same will be true for remotely operated aircraft with time delays between the operator and the aircraft (due to distance between them) and periodic loss of communication (due to line-of-sight blockage by buildings).

Analysis and Verification of Nondeterministic Systems

Aircraft operate in uncertain environments where physical disturbances, such as wind gusts, are probabilistic in nature. Further, distributed sensor systems—which supply the data that are transformed into the information on which pilots rely for decision making and control—have inherent noise with stochastic properties, such as uncertain biases, random drifts over time, and sensitivity to varying environmental conditions.
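
A few lines of simulation convey the kind of stochastic sensor behavior such models must capture; here, a hypothetical rate gyro with an unknown fixed bias, a slow random-walk drift, and white measurement noise (all parameters invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 1000, 0.01
t = np.arange(n) * dt
true_rate = np.sin(t)                         # toy angular rate, rad/s
bias = rng.normal(0.0, 0.02)                  # uncertain fixed bias
drift = np.cumsum(rng.normal(0.0, 0.001, n))  # slow random-walk drift
noise = rng.normal(0.0, 0.05, n)              # white measurement noise
measured = true_rate + bias + drift + noise
print(f"mean error after {n*dt:.0f} s: {np.mean(measured - true_rate):+.4f} rad/s")
```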

Safety Challenges of Closed-Loop Adaptive Systems

As increasingly autonomous systems—which take advantage of evolving conditions and past experience to adapt their behavior (i.e., they are capable of learning)—take over more functions traditionally performed by humans for the sake of improved efficiencies, there will be a need to analyze, verify, validate, and certify their behavior a priori, and to incorporate autonomous monitoring and other safeguards to ensure continued appropriate operational behavior. Thus, there is tension between the benefits of incorporating software with adaptive/nondeterministic properties in increasingly autonomous systems and the requirement to test such software for safe and assured operation.

Research is needed to develop new methods and tools to address the inherent uncertainties in airspace system operations and thereby enable more complex adaptive/nondeterministic systems that can improve their performance over time and provide greater assurance of safety.

Guaranteeing the safety of closed-loop adaptive systems represents a formidable challenge. One need only look at personal experience to realize that learning does not necessarily proceed without hurdles (learning how to ride a bicycle, use skis, or windsurf typically entails accidents on the way to success).

From a fundamental dynamical systems theory viewpoint, it is necessary to determine whether systems will (i) go “out of control” or (ii) function as desired. While an inability to rule out (i) is unacceptable from a verification perspective, answering (ii) and providing bounds, deterministic or probabilistic, on the behaviors of adaptive systems can often be sufficient to verify the safety of models of a closed-loop adaptive system.
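
One common route to such probabilistic bounds is Monte Carlo simulation combined with a concentration inequality. The sketch below, a toy example with assumed dynamics and an invented adaptation law, uses the Hoeffding inequality to estimate a 99 percent confidence upper bound on the probability that a simple adaptive loop ever leaves a prescribed envelope.

```python
import numpy as np

rng = np.random.default_rng(2)

def leaves_envelope(steps=200):
    """Simulate one run of a toy adaptive loop; True if |x| ever exceeds 2."""
    x, k = 1.0, 0.5
    for _ in range(steps):
        x = (1.0 - k) * x + rng.normal(0.0, 0.1)   # closed-loop update
        k = min(max(k + 0.05 * x * x, 0.0), 1.5)   # crude adaptation law
        if abs(x) > 2.0:
            return True
    return False

n = 1000
p_hat = np.mean([leaves_envelope() for _ in range(n)])
half_width = np.sqrt(np.log(1.0 / 0.01) / (2.0 * n))   # Hoeffding, 99% level
print(f"P(envelope violation) <= {p_hat + half_width:.3f} (99% confidence)")
```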

Run-Time Assurance

In the absence of generic knowledge about the safety of learning algorithms (there are a multitude of specific cases of adaptive systems whose behaviors are very well understood and safe; Boussios 1998), a partial remedy is to rely on run-time assurance (RTA) techniques (Sha 2001).

Run-time assurance is the practice of deferring the detection of dangerous behaviors until they happen and triggering corrective actions when they do. It has been used in multiple applications (e.g., emergency braking for elevators) and over time the definition of requirements for its correctness and power has improved (e.g., Adams 2019; Dershowitz 1998; Schouwenaars et al. 2005).

Recent operational implementations of RTA include the Automatic Ground Collision Avoidance System (Auto-GCAS), which automatically takes over pilot control in case of pending collision with the ground (Hobbs 2020). Less well known is the Boeing 777 autopilot, whose behavior was considered sufficiently uncertain to be “bounded” by a better-known derivative of the Boeing 747 autopilot, ready to take over in case of erratic behavior (Hobbs 2020). In the future, RTA mechanisms similar to Auto-GCAS are likely to be developed to prevent collisions between spacecraft (Hobbs and Feron 2020).

One curious and attractive characteristic of RTA mechanisms is that they do not need the same kind of extensive certification as, say, the intelligent primary system in charge of nominal operation. Indeed, the premise of run-time assurance is to constantly maintain an “option out” of behaviors recommended by the primary intelligence. Verifying the validity of an “available option out” online is usually not complex.
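
A minimal sketch of such an online check, assuming a 1-D vehicle with a hard position limit and maximum braking as the “option out” (purely illustrative, not any fielded system): the wrapper accepts the primary command only if the escape maneuver would still succeed after one step under that command.

```python
DT, U_BRAKE, X_LIMIT = 0.1, -4.0, 100.0   # step [s], braking accel, hard stop

def step(x, v, u):
    """One Euler step of the 1-D vehicle; speed never goes negative."""
    return x + v * DT, max(v + u * DT, 0.0)

def option_out_intact(x, v, lookahead=300):
    """Roll the braking maneuver forward: does it stop before X_LIMIT?"""
    for _ in range(lookahead):
        if x >= X_LIMIT:
            return False
        if v == 0.0:
            return True
        x, v = step(x, v, U_BRAKE)
    return x < X_LIMIT

def rta_filter(x, v, u_primary):
    """Accept the primary command only if the escape option survives it."""
    x1, v1 = step(x, v, u_primary)
    return u_primary if option_out_intact(x1, v1) else U_BRAKE

print(rta_filter(x=90.0, v=10.0, u_primary=2.0))   # expect -4.0: takeover
```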

The price of reduced verification requirements for run-time assurance is the need to mitigate false alarms (i.e., the RTA mechanism takes over regular operations when not needed). This problem may be mitigated by smarter RTA algorithms (Gurriet et al. 2018) or tolerated (to enable learning) if there are no significant short-term impacts on system performance.

Looking Ahead: Fully Autonomous Aerial Vehicles

There is immense interest in the use of fully autonomous aerial vehicles for commercial package delivery, surveillance, emergency supply delivery, videography, and search and rescue (Prevot et al. 2016), and of highly autonomous piloted and unpiloted air vehicles as “air taxis” for passengers in urban and regional environments. Projects such as Amazon Prime Air, Google Project Wing, Uber Elevate, and Airbus’s Vahana have been motivated in large part by technologies for automated control (Johnson and Kannan 2005), landing (Sharp et al. 2001), collision avoidance (Teo et al. 2005), and multiple vehicle cooperative operations (How et al. 2009) that were developed and tested over the past two decades in academic labs.

At the outset, these vehicles will fly well-defined routes in very predictable environments and will have automated all pilot functions, including detect and avoid, navigation in GPS-denied environments, autonomous contingency management, and obstacle detection and clear landing zone identification. Ultimately, learning-enabled perception, human-machine teaming, and verification of nondeterministic systems will be critical to make crewed and uncrewed operation in urban environments sufficiently safe for passenger and cargo operations and to enable full integration in the National Airspace System.

Acknowledgments

The authors acknowledge Eric Allison, Andrea Bajcsy, Somil Bansal, Alex Dorgan, Eric Feron, Keoki Jackson, Forrest Laine, Liling Ren, and Vicenç Rubies-Royo for providing information and perspectives in the writing of this article.

References

Adams E. 2019. Cirrus’ private jet can now land itself, no pilot needed. Wired, Oct 31.

Ames AD, Xu X, Grizzle JW, Tabuada P. 2017. Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control 62(8):3861–76.

Bajcsy A, Bansal S, Bronstein E, Tolani V, Tomlin CJ. 2019. An efficient reachability-based framework for provably safe autonomous navigation in unknown environments. arXiv 1905.00532.

Balin M. 2016. ARMD Strategic Thrust 6: Assured Autonomy for Aviation Transformation. NASA Aeronautics Vision and Roadmap Presentation, May 24. Online at https://www.nasa.gov/sites/default/files/atoms/files/armd-sip-thrust-6-508.pdf.

Bilimoria KD, Johnson W, Schutte P. 2014. Conceptual framework for single pilot operations. Proceedings, Internatl Conf on Human-Computer Interaction in Aerospace, Jul 30–Aug 1, Santa Clara.

Boussios C. 1998. An approach to nonlinear controller design via approximate dynamic programming. PhD thesis, Massachusetts Institute of Technology.

Bunel RR, Turkaslan I, Torr PHS, Kohli P, Kumar MP. 2018. A unified view of piecewise linear neural network verification. Conf on Neural Information Processing Systems, Dec 3–8, Montreal.

Clarke JP. 2019. Towards optimal human-machine teaming. Plenary talk, AIAA Intelligent Systems Workshop, Jul 29–30, Cincinnati. Online at http://hdl.handle.net/1853/62917.

Daniels M. 2017. Artificial intelligence: What questions should DoD be asking? Center for Global Security Research Lecture Series, Nov 20. Livermore: Lawrence Livermore National Laboratory.

Dershowitz AL. 1998. The effect of options on pilot decision making in the presence of risk. PhD thesis, Massachusetts Institute of Technology.

Fiorenza N. 2019. DGA commissions man machine teaming studies. Jane’s Defence Weekly, Nov 22.

Gurriet T, Mote M, Ames AD, Feron E. 2018. An online approach to active set invariance. Proceedings, IEEE Conf on Decision and Control, Dec 17–19, Miami Beach.

Hobbs K. 2020. Elicitation and formal specification of run-time assurance requirements for aerospace collision avoidance systems. PhD thesis, Georgia Institute of Technology.

Hobbs K, Feron E. 2020. A taxonomy for aerospace collision avoidance with implications for automation in space traffic management. AIAA Scitech Forum, Jan 6–10, Orlando.

How JP, Fraser C, Kulling KC, Bertuccelli LF, Toupet O, Brunet L, Bachrach A, Roy N. 2009. Increasing autonomy of UAVs: Decentralized CSAT mission management algorithm. IEEE Robotics & Automation Magazine 16(2):43–51.

Humphreys CJ, Cobb RG, Jacques DR, Reeger JA. 2015. Optimal mission paths for the uninhabited loyal wingman. 16th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conf, Jun 22–26, Dallas.

Humphreys CJ, Cobb RG, Jacques DR, Reeger JA. 2017. Dynamic re-plan of the loyal wingman optimal control problem. AIAA Guidance, Navigation, and Control Conf, Jan 9–13, Grapevine TX.

James DL, Welsh MA III. 2014. United States Air Force RPA Vector: Vision and Enabling Concepts 2013-2038. Washington: United States Air Force.

Johnson E, Kannan S. 2005. Adaptive trajectory control for autonomous helicopters. Journal of Guidance, Control, and Dynamics 28(3):524–38.

Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ. 2017. Reluplex: An efficient SMT solver for verifying deep neural networks. arXiv 1702.01135.

Knight W. 2017. The dark secret at the heart of AI. MIT Technology Review, Apr 11.

Kousik S, Vaskov S, Bu F, Roberson MJ, Vasudevan R. 2018. Bridging the gap between safety and real-time performance in receding-horizon trajectory design for mobile robots. arXiv 1809.06746.

Liu J, Gardi A, Ramasamy S, Lim Y. 2016. Cognitive pilot-aircraft interface for single-pilot operations. Knowledge-Based Systems 112:37–53.

Lyons JB, Koltai KS, Ho NT, Johnson WB, Smith DE, Shively RJ. 2016. Engineering trust in complex automated systems. Ergonomics in Design 24(1):13–17.

Masiello TJ. 2013. Air Force Research Laboratory Autonomy Science and Technology Strategy (88ABW-2013-5023). Washington: US Air Force.

Mitchell I, Bayen A, Tomlin C. 2005. A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games. IEEE Transactions on Automatic Control 50(7):947–57.

Mizokami K. 2019. China’s loyal wingman drone flies alongside manned fighters. Popular Mechanics, Aug 31.

NRC [National Research Council]. 2014. Autonomy Research for Civil Aviation: Toward a New Era of Flight. Washington: National Academies Press.

Overholt J, Kearns K. 2014. Transcript, Air Force Research Laboratory Autonomy Science & Technology Strategy Presentation, Apr 8. Online at https://defenseinnovationmarketplace.dtic.mil/wp-content/uploads/2018/02/AFRL_AutonomyPart2.pdf.

Pittaway N. 2019. Boeing unveils loyal wingman drone. Defense News, Feb 27.

Prevot T, Rios J, Kopardekar P, Robinson JE III, Johnson M, Jung J. 2016. UAS traffic management (UTM) concept of operations to safely enable low altitude flight operations. AIAA Aviation Technology, Integration, and Operations Conf, Jun 13–17, Washington.

Rubies-Royo V, Calandra R, Stipanovic DM, Tomlin C. 2020. Fast neural network verification via shadow prices. arXiv 1902.07247.

Schmid D, Stanton NA. 2019. Progressing toward airliners’ reduced-crew operations: A systematic literature review. International Journal of Aerospace Psychology 30(1-2).

Schmid D, Stanton NA. 2020. Considering single-piloted airliners for different flight durations: An issue of fatigue management. Advances in Intelligent Systems and Computing 964:683–94.

Schouwenaars T, Valenti M, Feron E, How J. 2005. Implementation and flight test results of MILP-based UAV guidance. IEEE Aerospace Conf, Mar 5–12, Big Sky MT.

Sha L. 2001. Using simplicity to control complexity. IEEE Software 18(4):20–28.

Sharp C, Shakernia O, Sastry SS. 2001. A vision system for landing an unmanned aerial vehicle. IEEE Internatl Conf on Robotics and Automation, May 21–26, Seoul.

Singh G, Gehr T, Mirman M, Püschel M, Vechev M. 2018. Fast and effective robustness certification. Conf on Neural Information Processing Systems, Dec 3–8, Montreal.

Teo R, Jang JS, Tomlin CJ. 2005. Automated multiple UAV flight: The Stanford Dragonfly UAV program. Proceedings, IEEE Conf on Decision and Control 4:4268–73.

USAF Chief Scientist. 2010. Report on Technology Horizons: A Vision for Air Force Science & Technology During 2010-2030. Technical Report AF/ST-TR-10-01-PR. Washington: US Air Force.

Wang N, Pynadath DV, Hill SG. 2015. Building trust in a human-robot team with automatically generated explanations. Interservice/Industry Training, Simulation and Education Conf, Nov 30–Dec 4, Orlando. 

Wang S, Pei K, Whitehouse J, Yang J, Jana S. 2018. Efficient formal safety analysis of neural networks. Conf on Neural Information Processing Systems, Dec 3–8, Montreal.

Winnefeld JA Jr, Kendall F. 2011. Unmanned Systems Integrated Roadmap FY2011-2036. Washington: US Department of Defense.

Winnefeld JA Jr, Kendall F. 2013. Unmanned Systems Integrated Roadmap FY2013-2038. Washington: US Department of Defense.

Zacharias GL. 2019. Autonomous Horizons: The Way Forward. Maxwell AFB AL: Air University Press.

Zhang H, Weng TW, Chen PY, Hsieh J, Daniel L. 2018. Efficient neural network robustness certification with general activation functions. Conf on Neural Information Processing Systems, Dec 3–8, Montreal.

Zhu S, Chen B, Chen Z, Yang P. 2019. Asymptotically optimal one- and two-sample testing with kernels. arXiv 1908.10037.
 

[1]  https://www.darpa.mil/program/assured-autonomy

[2]  http://yann.lecun.com/exdb/mnist/

About the Authors: J-P. Clarke is Dean’s Professor in the Daniel Guggenheim School of Aerospace Engineering and the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. Claire Tomlin (NAE) holds the Charles A. Desoer Chair in Engineering as a professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley.