The Bridge: 50th Anniversary Issue
January 7, 2021 Volume 50 Issue S
This special issue celebrates the 50th year of publication of the NAE’s flagship quarterly with 50 essays looking forward to the next 50 years of innovation in engineering. How will engineering contribute in areas as diverse as space travel, fashion, lasers, solar energy, peace, vaccine development, and equity? The diverse authors and topics give readers much to think about!

The Future of Artificial Intelligence

Monday, March 1, 2021

Author: Rodney A. Brooks

In the proposal for the 1956 Dartmouth summer workshop on artificial intelligence (AI)—the first recorded use of the words artificial intelligence—the authors made clear in the second sentence that they believed machines could simulate any aspect of human intelligence. That remains the working assumption of most researchers in AI and engineers building deployed systems, but the full generality implied is still decades or even centuries away.

Current AI Capacity

Seeing AI systems work well at one narrow aspect of human intelligence often misleads people into thinking that all aspects of human intelligence are equally well matched by AI systems. But that is not the case. In particular, the aspects of intelligence that let humans, and indeed most animals, maintain an independent existence and pursue their own agendas while also tending to their shelter, safety, and energy needs have largely been ignored by mainstream AI research for the last 65 years.

The summit of achievement in making independent artificial beings may be found in the high-end models of the Roomba line of robot vacuum cleaners, which not only return to a base to recharge but also empty their waste bins into a larger static container there and have self-cleaning brushes. This is probably the limit of “free will” for AI systems for the foreseeable future as it is not an area of active research.

This means that despite worries expressed in the press and in recent books and essays, we are not going to see HAL-like systems making independent decisions or doing things people do not want them to do. Nor will deployed AI systems be called on to reason about the moral action to take in some circumstance. However, engineers should, as is always expected, be ethical in their design decisions.

Patterns of AI Approaches and Applications

Since 1956 AI has been characterized by having, at any particular time, tens of diverse approaches to problems and aspects of intelligence with no real consensus on how everyone should proceed. There is no “standard model” to follow or to argue about.

Rather, there has been a constant pattern of a hot new idea emerging from the pack because of unexpected success in applications to some set or another of problems. High expectations then develop for the new idea while, at the same time, useful practical systems are built on it. Then progress slows. Before long, another idea emerges from the pack, and the pattern repeats. Often, old ideas have come back for a second or even third time. These past patterns are not a predictor of future patterns, but if they continue it will not be a great surprise.

An incomplete, but roughly chronologically ordered, list of such hot ideas might include tree search, backward chaining, first-order logic, constraint-based systems, frames, the primal sketch, rule-based systems, expert systems, case-based reasoning, behavior-based systems, Q-learning applied to reinforcement learning, qualitative reasoning, genetic algorithms, regularization theory, support vector machines, and graphical models. The pattern is well established.
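To make one entry in that list concrete, here is a minimal, illustrative sketch of tabular Q-learning, one of the "hot ideas" named above. The environment (a five-state corridor with a reward at the right end) and all parameter values are invented for illustration; they are not from the essay.

```python
import random

# Illustrative only: an agent on a 5-state corridor learns, by the
# Q-learning update rule, to walk right toward a terminal reward.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic transition; reward 1 only on reaching the last state."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(200):                   # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON else \
            max(ACTIONS, key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Q-learning update: move Q toward reward plus discounted best next value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)   # the learned policy should prefer "right" in the early states
```

The value-table-plus-update-rule structure is the whole algorithm; the later neural-network wave the essay describes replaced the table with a learned function approximator.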

Neural Networks and Very Large Datasets

We are on the third wave of neural networks, one of the topics at the 1956 workshop that continued to be investigated until the late 1960s. Their return in the ’80s and ’90s resulted in many applications; for example, small neural network systems have been reading the bulk of handwritten zip codes on US mail in high-speed sorting machines since then.

Recently much larger neural networks with the ability to represent much more complex separating manifolds have found many more applications. The success of these deeper networks has come from increases in computer power, very large datasets for training, and improvements in training algorithms. The far-field speech recognition systems that people interact with in their homes or on their smartphones are the most visible beneficiaries. In these and other applications, engineered front-end feature processing systems have been replaced by the first few layers of today’s “deeper” neural networks.
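The replacement of engineered front ends by learned early layers can be sketched with a toy network. The task (XOR), the layer sizes, and the training settings below are invented for illustration; the point is only that the first layer's "features" are learned end to end rather than hand designed.

```python
import numpy as np

# Illustrative only: a tiny two-layer network whose first layer learns its
# own feature representation, standing in for a hand-engineered front end.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # layer 1: learned "features"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # layer 2: output unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):                            # plain gradient descent
    h = np.tanh(X @ W1 + b1)                     # learned feature layer
    p = sigmoid(h @ W2 + b2)                     # output probability
    # Backpropagate cross-entropy loss through both layers.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

In today's deep networks the same principle holds at scale: the first few layers, trained on very large datasets, discover the representations that engineers once built by hand.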


Proponents are exploring the hypothesis that larger training sets will enable engineering to be further replaced by learning. Others think that a hybrid, with representations of innate knowledge, will be needed in order to squeeze more capability out of networks. Many argue that what is most probably needed is an explicit representation of knowledge that matches the way humans explain their reasoning, especially if natural language understanding is to improve beyond current capabilities. Speech-based assistants are good at identifying words, but not yet very good at understanding complex sentences.

Since 2012 deep learning has led to the construction of a huge physical and software infrastructure, supported by enormous cloud computing data centers. These employ graphics processing units, which were originally developed for video gaming and have since evolved to suit the computations needed to train deep networks.

Very large datasets have been built, some scraped from information on the web and some created by large groups of paid workers around the world. If researchers working on new techniques find ways to exploit these assets, they may be able to bring those techniques to large practical impact quickly. Or the next hot idea may turn out to be some other technique that does not rely on this infrastructure at all.

The Next Decades of AI

Over the next few decades we will see more and more places where AI systems provide support to humans in carrying out tasks that are important to them, whether in their work, their communications, or their entertainment or play. AI systems will be widely deployed as subsystems that make bigger projects more reliable, more user friendly, and more efficient.

Although AI has come to the attention of the general public in only the last decade, it has been around for 65 years. It is not as far along as many fear, and it is probably not as far along as society needs it to be. There are plenty of challenges and opportunities ahead.

About the Author: Rodney Brooks (NAE) is the Panasonic Professor of Robotics (emeritus) at MIT.