In This Issue
The Bridge: 50th Anniversary Issue
January 7, 2021 Volume 50 Issue S
This special issue celebrates the 50th year of publication of the NAE’s flagship quarterly with 50 essays looking forward to the next 50 years of innovation in engineering. How will engineering contribute in areas as diverse as space travel, fashion, lasers, solar energy, peace, vaccine development, and equity? The diverse authors and topics give readers much to think about! We are posting selected articles each week to give readers time to savor the array of thoughtful and thought-provoking essays in this very special issue. Check the website every Monday!

Artificial Intelligence: From Ancient Greeks to Self-Driving Cars and Beyond

Monday, March 1, 2021

Authors: Achuta Kadambi and Asad M. Madni

Artificial intelligence (AI) is more than 2000 years in the making, dating back to the ancient Greeks. To protect his island from pirates, it is said that the first king of Crete, Minos, received an unusual gift from Hephaestus, the Greek god of invention and blacksmithing: a bronze robot known as Talos. Like clockwork, Talos was conceptually programmed to circle Crete thrice daily, throwing stones at nearby ships (Mayor 2018).

AI refers to the execution, by machines, of tasks traditionally associated with humans or animals. The ancient robot Talos defended an island, an action ordinarily performed by humans. The self-driving cars of today seek to replace a human driver. These examples, both ancient and modern, fall under the realm of “weak AI”: machines pre-programmed to address tasks that would otherwise have been given to a human.

If AI has been here all along—from Talos to self-driving cars—where will the field go next? The untapped future of AI, where revolutionary progress awaits, lies in “strong AI,” where machines teach humans. When humans learn from such machines, they can gain unexpected insights that change practice.

AI as a Tool of Scientific Discovery

One future of strong AI lies in scientific discovery, where it can serve as a disruptive tool to unblock stagnant fields of science. Indeed, this is an area where AI must be brought to bear: where humans can apply only the same known techniques in their arsenal, the unexpected insights from AI might be the wiggle needed to get the wagon wheel out of the rut.

To see the impact of AI on scientific discovery, consider the field of physics. The past 30 years have seen little progress on fundamental questions such as how to explain wave function collapse (von Neumann 2018). Part of the challenge is that physical observations have become both much more expensive to collect (so-called big science) and harder for humans to interpret. From Newton to Einstein, there has been a remarkable jump in the complexity of the observations required to validate a theory.

But the modern physicist has something that neither Einstein nor Newton had: ever-increasing computational power. This motivates a new paradigm for physics, which we call artificial physics. The artificial physicist can operate in a way that is almost the opposite of a human’s. Where a human can test a small set of curated theories on a sparse set of data, a machine can test a huge number of combinatorial possibilities on massive datasets. It is a radical change in approach, and one that may yield radically different results.

[Figure 1]

Figure 1 illustrates conceptually a computer program that can rediscover Einstein’s famous equations. We have not yet observed a technology that can automatically intuit these equations—one of the challenges is that Einstein’s equations are a human-interpretable construct—but a solution might build on work in symbolic equation generation (Schmidt and Lipson 2009).
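To make the combinatorial flavor of such a search concrete, consider the minimal Python sketch below. It is a toy illustration only, not the symbolic-regression method of Schmidt and Lipson (2009): it brute-forces candidate power laws over synthetic gravitational-force data and keeps the one that fits best. The variable names, parameter ranges, and scoring rule are assumptions made purely for illustration.

    # Toy sketch: brute-force search over candidate power laws
    # F = k * m1^p1 * m2^p2 * r^p3, fit to synthetic data.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    G = 6.674e-11
    m1 = rng.uniform(1e3, 1e6, 500)        # synthetic "observations"
    m2 = rng.uniform(1e3, 1e6, 500)
    r = rng.uniform(1.0, 1e3, 500)
    F = G * m1 * m2 / r**2                 # ground-truth law generating the data

    best = None
    for p1, p2, p3 in itertools.product(range(-2, 3), repeat=3):
        feature = m1**p1 * m2**p2 * r**p3
        k = np.exp(np.mean(np.log(F) - np.log(feature)))      # best-fit constant in log space
        err = np.mean((np.log(k * feature) - np.log(F))**2)   # log-space fitting error
        if best is None or err < best[0]:
            best = (err, p1, p2, p3, k)

    err, p1, p2, p3, k = best
    print(f"best law: F = {k:.3e} * m1^{p1} * m2^{p2} * r^{p3}")
    # The search recovers exponents (1, 1, -2) with k close to G.

Even this brute-force toy tests 125 candidate laws against 500 observations in a fraction of a second, a scale of hypothesis checking no human physicist would attempt by hand.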

However, the road ahead to scientific discovery is not easy. Human engineers and computer scientists will have to create the artificial physicist. Interpretability will be a challenge. How is it possible to ensure that the output equation of an artificial physicist meaningfully maps to what humans can interpret?

To see why creating artificial physicists is difficult, let us consider a simplified example: building an AI engine to discover the laws of projectile physics. The goal of our AI engine is to observe numerous videos of projectile motion and eventually elucidate the textbook laws of projectile physics. Unfortunately, this problem is very difficult: the AI engine does not know what the scene parameters are (e.g., launch velocity, gravitational acceleration). It needs to learn those parameters as well as the governing equation (e.g., a parabola).

In our initial tests, we ran into a situation that deep learning practitioners will recognize: the neural network returns a symbolic expression that accurately predicts projectile motion, but it is built on artificial parameters (i.e., latent variables). These latent variables are not indicative of actual scene parameters, like velocity or gravity.
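A minimal sketch makes this latent-variable issue concrete. The snippet below is a toy stand-in for the neural network described above (the model form, parameter values, and noise level are assumptions for illustration): it fits simulated projectile data with a re-parameterized quadratic. The fit predicts the motion accurately, yet its fitted parameters are entangled combinations of launch velocity, gravity, and initial height rather than the scene parameters themselves.

    # Toy sketch: fit projectile data with a re-parameterized model whose
    # "latent" parameters do not individually correspond to scene parameters.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    g_true, v0_true, h0_true = 9.81, 20.0, 1.5            # hidden scene parameters
    t = np.linspace(0.0, 3.5, 200)
    h = h0_true + v0_true * t - 0.5 * g_true * t**2       # textbook projectile law
    h_noisy = h + rng.normal(0.0, 0.05, t.shape)          # simulated measurements

    def vertex_model(t, a, b, c):
        # h(t) = a*(t - b)^2 + c, with a, b, c playing the role of latent variables
        return a * (t - b)**2 + c

    (a, b, c), _ = curve_fit(vertex_model, t, h_noisy, p0=(-1.0, 2.0, 10.0))

    print("max prediction error:", np.max(np.abs(vertex_model(t, a, b, c) - h)))
    # The fit is excellent, but a = -g/2, b = v0/g, and c = h0 + v0^2/(2g):
    # no single latent variable equals velocity or gravity on its own.
    print("latent parameters:", a, b, c)
    print("gravity and velocity implied by the latents:", -2 * a, -2 * a * b)

The fitted model is a perfectly good predictor, but recovering physically meaningful quantities from its latent variables requires an extra interpretation step, which is exactly the gap an artificial physicist would need to close.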

Frontiers to Be Explored

The future of AI lies in grappling with these nuanced challenges. There are multiple frontiers that could be explored.

  • Interpretability. If a machine is to teach humans new insights, both partners must speak the same language. Imagine a hybrid team of two physicists: one is an artificial intelligence, the other a human. For such a team to be productive, the machine's findings must be expressed in terms the human physicist can understand.
  • Novel algorithms and architectures to implement AI. Today, neural networks (“deep learning”) are the dominant approach to implementing weak AI. However, such methods are preprogrammed rather than self-thinking.
  • Unblocking traditional fields—not just physics but chemistry, medicine, and engineering. The word choice of “unblocking” is deliberate. It is one thing to use AI as a tool to augment human performance in a field—much as computers help an author searching for a word definition. It is entirely different to have the AI drive the research field in unexpected and meaningful directions.

    An example of unblocking in action can be found in the optical sciences. Conventional wisdom in optical design long held that Fourier-coded apertures were optimal (Nugent 1987). With the advent of AI, optical scientists have been using AI algorithms to create unexpected aperture masks that depart from, and outperform, Fourier masks; a toy sketch of this kind of mask search follows this list.
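What a learned departure from classical designs can look like is suggested by the minimal sketch below. It is a toy example, not the published methods the paragraph refers to: a crude random search stands in for a learned optimizer, and each candidate binary mask is scored by the minimum magnitude of its Fourier spectrum, one common figure of merit in the coded-aperture literature because spectral nulls make deconvolution ill-posed. The mask size, search budget, and scoring rule are assumptions for illustration.

    # Toy sketch: search over binary 1D aperture masks, preferring masks with
    # a "broadband" spectrum (no near-zero Fourier components).
    import numpy as np

    rng = np.random.default_rng(1)
    N = 16                                   # number of aperture elements (assumed)

    def score(mask):
        # Worst-case spectral magnitude; larger is better for deconvolution.
        return np.abs(np.fft.fft(mask)).min()

    best_mask, best_score = None, -np.inf
    for _ in range(20000):                   # crude random search in place of a learned optimizer
        mask = rng.integers(0, 2, N).astype(float)
        if mask.sum() == 0:
            continue                         # skip the fully closed aperture
        s = score(mask)
        if s > best_score:
            best_mask, best_score = mask, s

    print("best mask:", best_mask.astype(int))
    print("min |FFT|:", round(float(best_score), 3))

The masks such a search favors tend to look irregular rather than like hand-derived patterns, which is the spirit of the unexpected, machine-designed apertures described above.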

For thousands of years humans have been teaching AI to do our chores. It might be time that we let AI teach us how to innovate in new and unexpected ways.

References

Mayor A. 2018. Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. Princeton University Press.

Nugent KA. 1987. Coded aperture imaging: A Fourier space analysis. Applied Optics 26(3):563–69.

Schmidt M, Lipson H. 2009. Distilling free-form natural laws from experimental data. Science 324(5923):81–85.

von Neumann J. 2018. Mathematical Foundations of Quantum Mechanics (new ed). Princeton University Press.

About the Authors: Achuta Kadambi is an assistant professor of electrical and computer engineering and leader of the Visual Machines Group at UCLA. Asad Madni (NAE) is a distinguished adjunct professor and distinguished scientist of electrical and computer engineering at UCLA and a faculty fellow of the Institute of Transportation Studies.