In This Issue
The Bridge: 50th Anniversary Issue
January 7, 2021 Volume 50 Issue S
This special issue celebrates the 50th year of publication of the NAE’s flagship quarterly with 50 essays looking forward to the next 50 years of innovation in engineering. How will engineering contribute in areas as diverse as space travel, fashion, lasers, solar energy, peace, vaccine development, and equity? The diverse authors and topics give readers much to think about! We are posting selected articles each week to give readers time to savor the array of thoughtful and thought-provoking essays in this very special issue. Check the website every Monday!

Understanding Uncertainty, Context, and Human Cognition: Necessary Conditions for Safe Autonomy

Wednesday, December 23, 2020

Authors: Srikanth Saripalli and James E. Hubbard Jr.

Autonomous vehicles are still “baffled” by unpredictable human actions such as wrong-way driving, emergency vehicles, and human-guided traffic diversions. Yet this very unpredictability gives people an edge in unknown or dangerous situations. Remote human supervision, backed by the ability of a remote driver to intervene for safety, is therefore necessary for such vehicles to be deployed safely and effectively in real-world situations. Developing human supervision for autonomous vehicles will provide the safeguards needed to deploy these vehicles quickly and effectively.

Self-Driving Shuttles at TAMU

At Texas A&M University (TAMU) we are developing human supervision for autonomous vehicles in a state-of-the-art teleoperation center. In the city of Bryan, TAMU is deploying and testing autonomous shuttles that have no safety driver behind the wheel (but always a safety navigator in the front passenger seat), and we have outfitted these self-driving shuttles with a teleoperation system.

Our proof-of-concept project includes the integration of teleoperation hardware and software in connected self-driving shuttles. We are specifically interested in (i) quantifying when the human teleoperator takes over the vehicles, (ii) developing higher-level actions for the teleoperator to interact with the vehicles, and (iii) quantifying the behavior of such high-level actions.

Risks of Partial Autonomy

As researchers and manufacturers move rapidly toward achieving full autonomy, human operators and passengers become increasingly disconnected. As an autonomous vehicle’s situational awareness increases, that of the human operator or passenger decreases, making them less able to take over manual control efficiently when an anomaly occurs.

This trade-off has been called the automation conundrum: because full system autonomy is quite difficult to achieve, most systems will exist at some level of semiautonomy for the foreseeable future (Endsley 2017). The automation conundrum may be a fundamental barrier to full autonomy in safety-critical systems such as driving.


While recent system autonomy efforts are beginning to leverage artificial intelligence and learning algorithms so that platforms can better adapt to unanticipated and changing situations, well-designed human-autonomy interfaces are clearly still needed.

There has been much research demonstrating that humans fare poorly with increased automation: it results in automation complacency, overreliance on automation, loss of situational awareness and spatial orientation, and skill loss. These contribute to human errors, accidents, and loss of trust in the automation. Modern autonomous systems are, and will remain, dependent on the development of successful approaches to human-autonomy teaming.

For these reasons, companies have started working on human supervision of autonomous vehicles. Such supervision ranges from simple remote operation (when the vehicle doesn’t know what to do, a remote driver takes over and drives) to remote supervision (the vehicle gets high-level commands such as “stop,” “slow down,” or “overtake” from the remote driver). While this human involvement solves some problems, it creates new ones: What happens if the remote driver makes a wrong decision? Who is responsible? How should one deal with delays associated with sending information to the remote driver and commands from the remote driver back to the vehicle?
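One way to make the delay problem concrete: a supervisory interface can tag each high-level command with its send time and have the vehicle reject any command that arrives after a safety deadline, falling back to on-board behavior. The sketch below is purely illustrative; the command set and the 0.5-second latency budget are assumptions for the example, not a deployed design.

```python
from enum import Enum

class Command(Enum):
    """High-level supervisory commands (hypothetical set for illustration)."""
    STOP = "stop"
    SLOW_DOWN = "slow_down"
    OVERTAKE = "overtake"

MAX_COMMAND_AGE_S = 0.5  # assumed latency budget; a staler command is unsafe

def apply_command(cmd: Command, sent_at: float, now: float) -> str:
    """Return the action the vehicle takes, rejecting commands that arrive
    too late to be trusted (one way to handle round-trip delay)."""
    if now - sent_at > MAX_COMMAND_AGE_S:
        return "reject_stale"  # fall back to the vehicle's on-board behavior
    return cmd.value

print(apply_command(Command.STOP, sent_at=0.0, now=0.1))      # stop
print(apply_command(Command.OVERTAKE, sent_at=0.0, now=2.0))  # reject_stale
```

Rejecting stale commands does not answer the responsibility question, but it at least makes the vehicle’s behavior under delay explicit and auditable.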

Cognitive Engagement in Autonomy

Human supervision of autonomous vehicles is not a new concept, but several areas require development. Humans are very good at understanding context, a capacity that is lacking in current autonomous vehicles, and it remains an open question whether context can be inferred by a remote driver.

Augmented/virtual reality, along with sound and haptic sensations, will play a key role in helping the remote driver understand context. Similarly, understanding human emotions is important: the emotional state of passengers determines how they react and behave in an autonomous vehicle.

When humans act as passive monitors of autonomous driving, it is inherently difficult for them to fully understand what is going on because of their lower level of cognitive engagement. There is a clear need to understand the features that influence the human cognitive processes involved in successful oversight, intervention, and interaction with automated systems.

Transitions may be ineffective, even dangerous, if the automation suddenly passes control to a human operator who is not cognitively ready to take over. This means that in addition to situational awareness of the environment, the vehicle requires situational awareness of the passengers. It demands a real-time assessment of human cognition, one that accounts for the difference between discrete cognitive tasks associated with human intervention, which sporadically require intense conscious attention, and continuous manual control, which generally requires lower attention over an extended period.

Quantum Probability to Model Human Cognition

The general mathematical structure of quantum probability, combined with artificial intelligence, provides an engineering approach that is applicable to any domain that needs to formalize uncertainty and probability, and that can also be formally applied to human cognition. Quantum probability theory provides a method to consistently convey contextuality between any combination of autonomous system components and operational environments. In addition, the mathematics of quantum probability may be relevant to the contextual phenomenon of trust.

In applying the well-structured machinery of quantum probability to human cognition we do not wish to imply that the human brain and psychological processes have a quantum nature. We simply suggest that engineers may take a quantum-like modeling approach to assessing human cognition: context can be modeled to a great extent, and quantum probability may better describe and explain the human cognitive state.
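A small, hypothetical illustration of what quantum-like modeling buys: represent a cognitive state as a unit vector and each question as an orthogonal projection. When two questions’ projectors do not commute, the probability of a pair of answers depends on the order in which the questions are asked, a context effect that classical probability cannot capture with a single joint distribution. All vectors and angles below are invented for the example.

```python
import math

def project(v, axis):
    """Orthogonal projection of 2-D vector v onto the unit vector `axis`."""
    dot = v[0] * axis[0] + v[1] * axis[1]
    return (dot * axis[0], dot * axis[1])

def norm_sq(v):
    """Squared length of v; in quantum probability, the event's probability."""
    return v[0] ** 2 + v[1] ** 2

# Cognitive state: a unit vector in a 2-D "belief" space (toy example).
psi = (0.0, 1.0)
# Question A's "yes" answer projects onto this axis:
axis_a = (1.0, 0.0)
# Question B's "yes" axis, rotated 45 degrees relative to A's:
theta = math.pi / 4
axis_b = (math.cos(theta), math.sin(theta))

# Probability of answering "yes" to A and then "yes" to B...
p_a_then_b = norm_sq(project(project(psi, axis_a), axis_b))
# ...versus the same two answers asked in the reverse order.
p_b_then_a = norm_sq(project(project(psi, axis_b), axis_a))

print(p_a_then_b)  # 0.0  -- asking A first makes "yes, yes" impossible
print(p_b_then_a)  # 0.25 -- reversing the question order changes the result
```

The order dependence arises purely from the geometry of the projections, which is why this formalism is attractive for modeling the context-sensitive judgments described above.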


Autonomous vehicles will save lives and change the way people work and live. But for these and other autonomous systems to truly and safely succeed, they need to “understand” the context of the situation in which they operate, recognize the emotions of the passengers, and safely work alongside other vehicles and humans. Without these, a fully autonomous system will never be safe and effective.


Endsley MR. 2017. From here to autonomy: Lessons learned from human–automation research. Human Factors 59(1):5–27.


About the Authors: Srikanth Saripalli is a professor of mechanical engineering and the Gulf Oil/Thomas A. Dietz Career Development Professor II, and James Hubbard Jr. (NAE) is the Oscar S. Wyatt Jr. ’45 Chair I Professor, both in the J. Mike Walker ’66 Department of Mechanical Engineering at Texas A&M University.