Charles Stark Draper Prize for Engineering

2022 Draper Prize Acceptance Remarks

Dr. David A. Patterson
2022 Charles Stark Draper Prize Winner
Acceptance Remarks

President Anderson, members of the Draper family, esteemed coawardees, and distinguished guests:

Steve Furber, John Hennessy, Sophie Wilson, and I are very grateful to the National Academy of Engineering for this prestigious award and to the panel of our colleagues who selected us. We were thrilled to receive it.

To understand our contribution, we’ll first explain some jargon. When software talks to hardware, it uses a vocabulary called an instruction set. Examples of the words, or instructions, in this vocabulary also appear on the keys of a calculator: add, subtract, multiply, and so on. Programs consist of millions of these simple instructions executed billions of times.

When Hennessy and I were assistant professors in 1980, conventional wisdom was that many information technology problems were due to computer vocabularies being too low level. These instruction sets supposedly placed a large burden on programmers, leading to software bugs and failed software projects. The prevailing philosophy was to raise the level of abstraction to create vocabularies of complex instructions to reduce the gap between people and machines.

In the 1970s, microprocessors were found only in home appliances. Hennessy and I believed in Moore’s law, the remarkable prediction that the number of transistors per chip would double every year or two. We were convinced that microprocessors would become the foundation of all computing. The question was, what was the best vocabulary or instruction set for these rapidly improving microprocessors? Should it differ from that of the large computers that powered the industry of the 1970s? We each received funding from the Defense Advanced Research Projects Agency to tackle this critical problem.

To understand this vital question about the interface between software and hardware, we need more context. Programming in low-level machine language was so tedious that computer pioneers soon invented high-level programming languages that were easier to use, along with programs, called compilers, that translated high-level languages into machine language. The 1993 Draper Prize went to John Backus for developing the first high-level programming language (FORTRAN) and its compiler.

While high-level languages were popular for many tasks, conventional wisdom was that compilers were too inefficient to use for some software, such as operating systems. The success of the UNIX operating system from Bell Labs in the late 1970s, written in a high-level language, changed people’s minds.

Regarding instruction sets for microprocessors, the programmers’ burden of writing in low-level machine language was no longer an issue. Instead, the question was, can compilers produce efficient programs for a given vocabulary?

We called the large computers with vocabularies that followed conventional wisdom “complex instruction set computers,” abbreviated CISC and pronounced “sisk.” We proposed that microprocessors should instead use vocabularies with simple instructions, which we called “reduced instruction set computers,” abbreviated RISC and pronounced “risk.” Think of CISC as having many polysyllabic words while RISC has monosyllabic words. Hennessy and I thought it would be easier to build microprocessors that understood RISC vocabularies than CISC vocabularies and easier for compilers to generate RISC programs than CISC programs.
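
To make the contrast concrete, here is a small, hypothetical sketch in C; the comments suggest how a CISC machine and a RISC machine might each translate the same statement, with instruction names invented purely for illustration rather than taken from any real instruction set:

    #include <stdio.h>

    int main(void) {
        int b = 2, c = 3, a;

        /* A hypothetical CISC machine might translate the statement below
         * into one "polysyllabic" instruction that reads b and c from
         * memory, adds them, and writes the result back to memory:
         *
         *     ADDMEM a, b, c          (invented name, for illustration)
         *
         * A hypothetical RISC machine would instead use several
         * "monosyllabic" instructions, each doing one simple thing:
         *
         *     LOAD  r1, b             (invented names, for illustration)
         *     LOAD  r2, c
         *     ADD   r3, r1, r2
         *     STORE a, r3
         */
        a = b + c;

        printf("a = %d\n", a);
        return 0;
    }

In this sketch the CISC version packs memory access and arithmetic into one elaborate step, while the RISC version uses several simple steps that the hardware can execute quickly.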

The question then was: which was faster, RISC or CISC? A program might need to read fewer instructions using CISC, since the instructions were more sophisticated, but each of those instructions might take longer to read and execute than a RISC instruction. The analogy is that a page filled with polysyllabic words might take longer to read than a page of mostly monosyllabic words.

This seemingly clear engineering question was emotionally charged in the computer design community, a field we call “computer architecture.” The CISC advocates believed that RISC was a step in the wrong direction and would add complexity to software. The RISC advocates thought that was an out-of-date argument and that compilers could hide those details so that programmers couldn’t tell the difference. While Berkeley and Stanford are rival institutions in the Bay Area, Hennessy and I both realized that we should join forces to argue for RISC.

An international conference held in Silicon Valley in 1982 organized the first of several debates on RISC versus CISC. Heated conversations spilled into the hallways afterwards and lasted long into the evening, and several more debates followed over the next year or two.

To answer whether RISC or CISC was better, we needed to discover two ratios: the average number of instructions a program needs to read (CISC should need fewer than RISC) and the average time to read one instruction (RISC instructions should be faster than CISC instructions). We ultimately discovered that RISC reads a few more instructions (maybe 30% more) but reads them so much faster (maybe 5 times faster) that the net result is that RISC is 3–4 times faster than CISC. Moreover, RISC microprocessors needed less hardware and power than CISC microprocessors. This efficiency advantage would prove to be an unbeatable edge as computers moved off desktops and into people’s hands, and from devices plugged into the wall to battery-powered ones.
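
A rough back-of-the-envelope check, using only the approximate figures above (about 30% more instructions, each read about 5 times faster), goes like this:

    RISC time ≈ 1.3 × (1/5) × CISC time ≈ 0.26 × CISC time,
    so the speedup is roughly 1 / 0.26 ≈ 3.8,

which is consistent with the 3–4 times figure.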

Max Planck said that scientific truth does not triumph by convincing opponents and making them see the light, but that science advances one funeral at a time. Fortunately, in computer architecture, there is a companion industry that tests our ideas in the marketplace, so we don’t have to wait for funerals to change the field.

For example, in 1983 across the pond in Cambridge, England, Steve Furber and Sophie Wilson decided to build a new microprocessor for the Acorn personal computer, which was popularized by a BBC television series teaching computer literacy. Furber and Wilson were inspired by the work at Berkeley and Stanford to create an instruction set for that microprocessor that they dubbed the Acorn RISC Machine, abbreviated ARM.

As they tell the story, they had two advantages: (i) no money and (ii) no engineers. The very low resources available to the project made simplicity the highest imperative among the competing design criteria, which matched perfectly with the RISC philosophy of simple instructions. When the ARM1 debuted in 1985 as the first commercial RISC processor, it was faster than any microprocessor on the market.

Apple visited Acorn in 1990, showing interest in using ARM for its new Newton handheld computer, an early forerunner of the iPhone. Only a RISC processor could meet the performance, power, and cost constraints of the Newton. The Acorn company needed to reduce costs, so it agreed when Apple insisted that ARM be spun out as a separate jointly owned company. ARM Ltd. was thus founded as a joint venture with Apple in 1990, rebranding the ARM acronym as the Advanced RISC Machine instead of the Acorn RISC Machine.

Although the Newton wasn’t successful, the same characteristics that made ARM attractive to Apple for the Newton also made it popular in chips for cellphones. At that time, Nokia was the leading supplier of cellphones, so the selection of ARM for the Nokia GSM phone was a major boost.

The Nokia experience helped ARM understand the needs of the chips at the heart of cellphones (known as “system on a chip” since so much is packed into a single phone microchip), which put ARM in position to ride the explosion in popularity of smartphones, tablets, and other embedded computers for the next 20 years. Today, 99% of all processors shipped are based on RISC; more than 200 billion chips have been shipped with ARM processors. That means that, so far, more than 25 RISC chips have been delivered for every person on the planet. We think the simplicity of RISC meant it was more efficient in its use of silicon and power, which made it the winner.
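
The per-person figure is simple arithmetic, assuming a world population of roughly 8 billion: 200 billion ARM chips ÷ 8 billion people = 25 chips per person, so a total above 200 billion works out to more than 25 per person.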

Wrapping up our story, computer architecture is a team sport, so the Draper Prize is recognition for our former students and wonderful colleagues and collaborators at Acorn, ARM, Berkeley, and Stanford. We also thank our families and especially our spouses—my wife Linda, John’s wife Andrea, Steve’s wife Valerie, and Sophie’s partner John—for half a century of support over our long careers. We certainly couldn’t have helped make these contributions without them standing by our sides.

Let me close by thanking you once again for this high honor.