
An Interview with . . . Don Norman (NAE), Cognitive Engineer and Author

Tuesday, September 15, 2020

Author: Don Norman


RON LATANISION (RML): We’re delighted that you’re available to speak with us today, Don. You are a trained electrical engineer who went on to become the founding director of the Department of Cognitive Science and chair of the Department of Psychology at the University of California, San Diego (UCSD). That’s a pretty unusual career for an engineer.


CAMERON FLETCHER (CHF): And I understand you got your PhD in mathematical psychology. Can you briefly describe what that is?

DR. NORMAN: First let me start by telling you my career history. While I appear to have made radical changes in what I’m doing in the field, in my mind it hasn’t been radical at all; it has been a series of slow, incremental changes. In many ways I’m still doing what I was trained to do as an engineer.

I graduated from MIT as an electrical engineer in 1957. I suspect I was just a middling student. Had we had a yearbook where we voted people most likely to achieve, I probably would have been voted most unlikely to succeed.

But I had gotten interested in computers. In those times MIT had analog computers and I did my thesis using them. We didn’t even really understand the difference between analog and digital computers.

I took a job, I think it was at Raytheon, I’m not sure anymore, in Philadelphia, and they said, ‘We’d like you to get a master’s degree. Here’s the application to the University of Pennsylvania and we’ll cover it.’ I filled out the application and a couple months later, to my great surprise, I got a letter from Penn saying they wanted to offer me a position as an assistant instructor at a salary of, I think, $7000 a year.

CHF: That was quite reasonable then.

DR. NORMAN: Yes. I did not have any plans then to go to graduate school, but I said to myself, ‘Gee, why don’t I turn down Raytheon and just go to Penn?’ That’s what I did, and I continued my work in engineering. My specialty was circuit design.

But I really wanted to get into computers. I took an early course in programming that one professor was teaching, but that’s all they had. They said, ‘Stick around, in a year or two we’ll probably have a computer degree and you can be the first student.’ I didn’t want to stick around, though, and I especially didn’t want to go into a PhD program in electrical engineering.


But the Department of Psychology, which I knew nothing about, got a new chair and hired a new professor. The new chair, Bob Bush, had his PhD in physics, and the new professor, Duncan Luce, had his PhD in mathematics (and his undergraduate degree in aeronautics at MIT). Bush gave a talk and I thought, ‘That sounds like what I’m interested in.’ So I talked to Bush and he said, ‘You don’t know anything at all about psychology, wonderful.’ He assigned me to work with Luce.

They were developing a field called mathematical psychology. Duncan decided I didn’t know enough math, so he sent me to the math department to take a course in algebra. I had 6 years in engineering math, but that wasn’t enough.

I became really interested in sensory psychology, because studying hearing and seeing and so on was really close to what I had been doing—it was really an engineering problem in many ways.

There was something called the neural quantum theory of hearing, which basically said that the perception of loudness was not smooth and continuous: it was made up of many small, discrete steps—it was quantal.

Think of loudness being represented by the rate at which the nerves attached to the hair cells in the inner ear fire: increase the sound intensity by a tiny amount and, ping, the nerves fire with an extra impulse. This idea had some credibility, having been studied by a number of famous people in the Harvard psychology department, such as Georg von Békésy, who eventually got a Nobel Prize for his discoveries about hearing; George Miller, one of the founders of the study of cognition in psychology; and Ulric Neisser, who coined the term cognitive psychology.

The ear is incredibly sensitive—it can detect Brownian motion. As I worked on my thesis, I was having really weird results. I realized that I could make sense of them if there was a decision threshold for the people listening to decide whether the sound they heard was louder or not: if the sound was above the threshold, they said, ‘yes, it’s louder,’ otherwise they said ‘no.’ My breakthrough came when I realized ‘What if it was probabilistic decision making, where the threshold varied during the experiment, sometimes set at a 1-quantum change and other times at a 2-quantum change?’ This produced a set of weird, nonmonotonic curves of loudness as a function of intensity—which matched my data. And so that was my PhD thesis. The first person who decided to publish it was a mathematical psychologist named Dick Atkinson.[1]

I was a graduate student for only about 2 years in psychology at Penn. I almost flunked out, because most of it was memorization of people’s experiments, which I thought was meaningless. At MIT I had learned you didn’t have to memorize stuff, you had to understand the principles and then you derived the answer.

But for the areas in psychology where derivation was possible—like psychoacoustics, trying to understand how the ear works and so on—it was engineering. So there I shined. I actually taught some of the advanced classes in psychology while flunking out of the beginning classes.

RML: Selective application of your intellect, is that what it is?

DR. NORMAN: Yes, in fact that has carried through to today. When I was accepting graduate students, if I had applicants with straight As I didn’t want them. I wanted somebody who had done more experimentation and taken a few courses they had done badly in. I took a graduate student who had flunked out of Berkeley and I had trouble convincing UCSD to let me accept him. He turned out to be one of my best graduate students. He was flunking because he was bored.

Anyway, when I was finishing my thesis Duncan said, ‘Okay, where would you like to go now?’ We discussed various possibilities and concluded I should go to either MIT or Harvard. He sent me to MIT and to Harvard to interview them. I visited MIT and there was some really neat stuff going on in areas I didn’t know about; at Harvard it was completely different. I decided to go to Harvard because I’d learn more.

At that point George Miller and Jerry Bruner, two of the founding fathers of modern psychology, headed something called the Center for Cognitive Studies. I didn’t even know what the word cognitive meant. But on my first day I met a few people and immediately started a big argument with them. We were talking about human memory, and I said, ‘Of course there’s temporary memory and longer-term memory, it’s obvious.’ Well, it wasn’t obvious, that was in fact groundbreaking news in the psychology community.

I learned a lot and became a fan of William James (who taught at Harvard; he died in 1910). The early American psychologists like James were doing wonderful work, but it got killed by the behaviorists who didn’t want to study the brain or the mind, because ‘if you can’t see it you can’t study it, we just study what can be measured.’

At Harvard I got interested in and started experimenting in memory. I worked with Nancy Waugh—the woman I had this big argument with. (On the East Coast arguments are considered the way you work: When we argue we’re developing ideas. When I got my job in California and continued in that fashion, people took me aside and told me to stop.)

So Nancy and I argued, and we published about seven or eight papers together, including one that we called “Primary Memory,” after William James (we should have called it short-term memory). I published that paper in my first few months as a postdoc, in the Psychological Review, the best journal in psychology. We did a whole bunch of experiments—in fact George Miller bought me a computer to run the laboratory, so our lab had one of the very first digital computers.

When I was at MIT I had to take a couple of courses in the so-called humanities, and one of them used George Miller’s textbooks. It was a course on how people heard and remembered things, especially in noise. One of the things he pointed out was that if I’m listening to random words in noise it’s very hard for me to understand them, but if they’re the same words in a sentence it’s much easier to understand them. That’s a powerful finding. George is famous for his paper “The Magical Number Seven, Plus or Minus Two,” about limitations of human memory and how it shows up in many different situations.

Also when I was at MIT I took a course by this new guy. My roommate insisted I should take this course because the guy was supposed to be pretty good. So I took the course by this new professor, his very first course and he used his thesis as the textbook. His name was Noam Chomsky.

There were two parts to the course. One was the philosophy of language and how it should be studied. The other was his formal language for describing different languages. It was a finite state grammar. At the end of the course he told the students that when he taught the course at Penn the students loved the philosophy part but were lost and didn’t understand the finite state grammar; when he taught us we didn’t understand the philosophical part, but thought the finite state grammar was easy.

Anyway, at Harvard they had a lunch every week with some of the top philosophers and people from the Center for Cognitive Studies. Noam Chomsky came every week. Basically everybody who was anybody moving up came through Harvard, so it was a really great education.

After a year George got me an appointment in the Psychology Department. When I was introduced to the faculty, BF Skinner, a behaviorist and the most famous psychologist in America, stood up and denounced me and my field. Welcome to Harvard.

I also started working with a guy in the MIT Psychology Department, Wayne Wickelgren. We discovered signal detection theory, which was introduced into psychology by Dave Green and John Swets (they had worked at BBN together). JCR Licklider was there; he worked for ARPA and set up a lot of the time-shared computers and the beginnings of the internet, and he was a psychologist. In fact, he had studied the neural quantum theory as well.

In my work in signal detection theory I was thinking you have to get this entire receiver operating characteristic curve, and it takes a lot of work, and it’s not always easy to get the whole thing. Then one day I realized I could do a lot with just one data point. That’s when you ask somebody to remember something and later you show them items and ask ‘Which one did I show you before?’ When they select one, they can have a “hit” (they say yes to the correct object) or a “false alarm” (the object they pick isn’t one I showed them). The hit and false alarm rate is what’s important in detection theory. You’re listening to a weak signal, but it’s very noisy and sometimes the noise sounds like a signal. So you have to have a criterion, to say ‘if it’s above that criterion I’ll say it’s a signal, and if it’s below I’ll say it isn’t.’ If you make the criterion very high you won’t have any false alarms—but you’re also going to miss a lot. If you make it very low, you won’t miss any, but you may have lots of false alarms.
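
The tradeoff Norman describes can be made concrete with a small simulation. The sketch below is purely illustrative (it is not code from Norman’s work, and the Gaussian noise model, signal strength, and trial count are arbitrary assumptions), but it shows how a single decision criterion trades hits against false alarms: raise the criterion and false alarms drop while misses climb; lower it and the reverse happens.

    import random

    def run_detection_trials(criterion, n_trials=10_000, signal_strength=1.0):
        """Simulate a yes/no detection task with a fixed decision criterion.
        Half the trials contain only noise; half contain a weak signal added
        to the noise. The observer says "yes" whenever the internal response
        exceeds the criterion. Returns (hit_rate, false_alarm_rate)."""
        hits = misses = false_alarms = correct_rejections = 0
        for _ in range(n_trials):
            signal_present = random.random() < 0.5
            response = random.gauss(0.0, 1.0) + (signal_strength if signal_present else 0.0)
            said_yes = response > criterion
            if signal_present:
                hits += said_yes
                misses += not said_yes
            else:
                false_alarms += said_yes
                correct_rejections += not said_yes
        return hits / (hits + misses), false_alarms / (false_alarms + correct_rejections)

    # A strict (high) criterion yields few false alarms but many misses;
    # a lax (low) criterion misses little but produces many false alarms.
    for c in (1.5, 0.5, -0.5):
        hit, fa = run_detection_trials(c)
        print(f"criterion {c:+.1f}:  hit rate {hit:.2f}   false-alarm rate {fa:.2f}")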

RML: This sounds like something that might be relevant to today’s interest in COVID-19 sensors and detectors and testing. Is there any connection?

DR. NORMAN: Yes, absolutely. Detection theory is widely used. In fact, it explains why some of the tests being used today give false positive results, even though their main failing is that they give false negatives. To avoid false negatives, they adjust the threshold for decision making to a low value, which decreases the false negatives, but at the expense of an increase in false positives.

Back to my discovery. Suppose I have two different experimental conditions. I get a higher hit rate in one than in the other, but also a higher false alarm rate. How do I know whether one experimental condition is better than the other? I realized I could do a geometric, graphical diagram. With only a single data point (hit/false alarm rate) I could divide the plot of hits versus false alarms into three regions: one where any point would be superior, one where it would be inferior, and a third region that is undecided. I published that paper in Psych Review.[2]

Then I published a few other papers with some friends where we said, just take the area under that curve—it’s a very simple curve, it starts at 0,0, goes through the one point you have, and then goes to 1,1—and it’s a good measure of performance.
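
Since the curve Norman describes is just two straight segments, the area under it reduces to a one-line formula. A minimal worked example (the hit and false-alarm rates below are made-up numbers, used only to show the calculation):

    def auc_single_point(hit_rate, false_alarm_rate):
        """Area under a one-point ROC curve: straight lines from (0, 0)
        through (false_alarm_rate, hit_rate) to (1, 1). The triangle plus
        trapezoid simplify to (1 + hit_rate - false_alarm_rate) / 2."""
        triangle = false_alarm_rate * hit_rate / 2                 # 0 <= x <= false-alarm rate
        trapezoid = (1 - false_alarm_rate) * (hit_rate + 1) / 2    # false-alarm rate <= x <= 1
        return triangle + trapezoid

    # Two hypothetical conditions: the second has both a higher hit rate and a
    # higher false-alarm rate, yet its area (overall sensitivity) is larger.
    print(auc_single_point(0.70, 0.20))   # 0.75
    print(auc_single_point(0.85, 0.30))   # 0.775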

That was around 1963 or ’64. About a year ago I discovered that lots of medical researchers use the area under the operating characteristic that I coinvented. I looked at the papers, and they don’t cite our work. But they didn’t cite anyone. So I started looking for the earliest papers and yes, there was my paper. When something is commonly used, people just take it for granted and nobody remembers where it came from.

RML: I can see, given what you’ve described, that your experience in electrical engineering and then in psychology and cognition all comes together. I now understand your NAE citation: you were elected “for development of design principles based on human cognition that enhance the interaction between people and technology.”

In preparation for our conversation today, I watched the video “Norman Doors.” It’s about doors that people have trouble opening, and it’s a wonderful description of the interaction between technology and human beings.

You mentioned having read Henry Petroski’s interview.[3] Have you met him?

DR. NORMAN: Yes, I was in his house and admired his books. But he was annoyed because he was trying to show me the bookshelves, not the books.

RML: You and Henry have the same kind of spirit, I can see that. Have you known Henry for a long time?

DR. NORMAN: I discovered his book To Engineer Is Human, and I think I’ve now read every book of his. I also read his articles in American Scientist. So I started corresponding with him, and one day I was going to be at Duke so I wrote and asked him if I could come visit.

By the way, I didn’t do that video on Norman doors—somebody just called me up one day—but it’s really good. It’s what I say we should teach. First of all, it’s an interesting topic, and people immediately say, ‘I’ve had that kind of trouble with doors.’ Also I give some fundamental principles about why they have trouble. It allows people to see that there’s a science behind this.

RML: Let me broaden that out a little. I think everything we do as engineers should have a social purpose. Doors may not work as well as we would prefer, but they do have a purpose.

What is your thinking about autonomous vehicles—automobiles, planes, people are talking about all sorts of autonomous vehicles?

DR. NORMAN: Actually I’ve written a bunch of papers on that topic and I’ve worked with Nissan on autonomous vehicles, and I’m on the advisory board of a Toyota research group, and we have a grant from Ford Motor Company to look at autonomous vehicles. But before I get to that, I want to close my Harvard experience.

I left Harvard and was offered a job at UCSD with Dave Green, who did a lot of work in signal detection theory. We started a lab and I got really interested in human attention, studying how people were able to listen to one voice out of many, or if they fail to hear something else, what’s happening, and so on. From there I got interested in the errors that people make, human error—not speech errors but errors of action.

I partnered with two other newly hired faculty: Peter Lindsay and David Rumelhart, so we called ourselves “the LNR Lab.” And of course we bought computers to control our experiments. At Harvard we had a Digital Equipment Corporation PDP-4. At UCSD, we started with a PDP-9, then a PDP-15, and finally a VAX.

I asked my students and everybody I knew if they made an error—say, flipping the wrong switch, or going to work on a holiday—to write down what error they made and how they detected it. I collected these errors and sorted them and came up with a descriptive categorization of them, along with a theoretical framework. And again I published a paper in Psych Review.[4]

I started the paper by saying, ‘One day one of my colleagues told me he must be getting old, he was making errors. He said, he went to the liquor cabinet to pour himself a drink, took out a bottle of scotch and a glass, poured some scotch into the glass—and then put the glass back into the cabinet and walked off holding the bottle. I said, “I don’t think you’re getting old, I do that myself.”’ That’s how I started the article, and I sent it off to the journal editor, Bill Estes, another mathematical psychologist. About a day later the paper was rejected, and the rejection letter was “Come on, Don.”

CHF: That’s all he wrote?

DR. NORMAN: That’s all he wrote. I took out that anecdote, sent it back, and it got accepted with zero revisions.

I have another story like that, too. I wrote a paper on human attention; the theory was that when you attended to the words one person was saying, you could not attend to what anyone else was saying. Lots of studies showed you had no memory for what others said (what we called the unattended channel). I showed that this was wrong: there was a short-term memory for the unattended words, but you could show this only if you tested immediately after they had been spoken. I demonstrated that there was a short-term memory for unattended material.

In my paper, I said that although all the existing theories of attention (including my own) had difficulties with this result, I showed how each one could be modified to accommodate the results.

I sent it off to the Quarterly Journal of Experimental Psychology, the best British journal at the time, and it got rejected. They said the conclusion was weak. So I rewrote it and said the result demonstrates that everybody’s theory is completely wrong except mine, because here’s how my theory accommodates the results. And they said, ‘Good. Thank you.’ I thought that was one of the stupidest things. I was trying to be fair to everybody else and they rejected the paper.

Anyway, I was studying error and attention and all these things because they’re closely related, when the Three Mile Island accident happened. I was called in by the Nuclear Regulatory Commission to look at why the operators made such stupid errors. The committee was wonderful, with a number of human factors people.

We said the operators were really intelligent and did a very good job, they made the best decisions they could have made given the information. But if you wanted to cause errors you could not have designed a better control panel than they had at Three Mile Island. It was a design problem. And that made me realize that my background in engineering and psychology was perfect for trying to understand how design works.

RML: What’s an example of the design errors that were so apparent in Three Mile Island?

DR. NORMAN: There were roughly 4000 controls and switches in a nuclear control panel. Engineers would simply figure out what needed to be controlled and what needed to be displayed, take their straight edges, and lay out all the switches in nice long rows and all the displays in nice long rows, in vertical columns and horizontal rows.

Plants are usually built in pairs, so there are two reactors and two control panels. One of the worst design errors was a plant with two control rooms, one for each reactor. But to simplify the wiring, they made the two control rooms mirror images of each other. So an operator trained on one control room would make errors in the other control room, even though the plants were otherwise identical.

When you have a row of identical looking switches and meters, how do you know which is which? Flipping the wrong switch is easy. Operators knew they could get confused. In one power plant with five or six big switches in a row that looked the same but were critically important, we saw that the operators replaced the switch handles with different brands of beer-tap handles so they could see which was which.

RML: Did the design of the control panel materially affect the response at Three Mile Island when the events occurred?

DR. NORMAN: Absolutely.

RML: Have there been changes in the design of the control panels for nuclear plants, or for example in the cockpit of an airliner, which has a similar distribution of switches?

DR. NORMAN: We don’t really know. The answer is probably yes, but there hasn’t been a nuclear power plant put into operation in the United States since that time.

It’s not the same in aviation. Aviation is actually the next step toward autonomous automobiles. In the 1940s, during the Second World War, there was a huge amount of human factors research. One of the common errors was landing an airplane with the landing gear up. The problem was that there were identical looking switches. If you wanted the flaps down, you pushed the button down. If you wanted the landing gear down, you pushed the switch next to it down. Quite often –

RML: They pushed the wrong thing.

DR. NORMAN: So they changed the switches so you could feel the difference: The flaps switch was a flat plane that felt like the flaps, and the landing gear switch had a wheel at the end of it.

Also in organizing the patterns of the switches they looked at what information a pilot needs to fly the plane and how a person would look at the gauges to figure out the state of the plane. In commercial aviation, the gauges and displays are organized to make it easy for the pilots to scan them quickly and efficiently. Today, commercial aviation is incredibly safe. The field of human factors (now called human-systems integration; NAE Section 8) has numerous, well-documented design principles, none of which went into the nuclear power systems. Every new field rejects the principles learned from other fields, saying ‘oh no, we are different.’ Well, these are principles about human beings, and so they apply to any field where people are involved.

CHF: Since you mention other fields, what kinds of differences have you seen in the adaptation of certain technological fields and designs as opposed to others? For example, what fields are more advanced in accommodating human factor design?

DR. NORMAN: I’m going to start with the most elementary, which is personal workspaces. Let’s start with the kitchen. In the kitchen you put stuff in places (1) that you can remember and (2) where you’re going to need it. You don’t put all the knives in one drawer, you put them depending on what you’re going to use them for. There might be a knife stand for the big knives for doing major cutting and chopping, and you’ll also have kitchen knives, other types of knives, and knives used for silverware. Same with pots and pans—you have a place for them, but with particular ones that you use frequently, you might leave them out. So the space is designed to fit your work style.

And everybody’s kitchen is different. If you go to somebody else’s kitchen to cook for them, you’ll probably have a difficult time: you can’t find anything because their layout doesn’t match your pattern of work.

I look at a lot of craftspeople’s workspaces and at the kind of tools they use and where they place them. There’s a nice anthropological study of how a blacksmith organizes the tools. At the end of the day when the blacksmith cleans up to go home, they carefully arrange a bunch of tools on the floor next to the hearth, because that’s where they want them. When they’re heating up something they want to just reach over and get the hammer, for example.

We follow that same philosophy as we look at other designs. The place that has made the most advances is aviation. They didn’t believe in human factors science at first, but the pilots would complain about where things were, so the industry developed a really good philosophy of designing, especially in commercial aviation.

The other field is computers. When the first computers came out, like the first one I programmed, the Remington Rand UNIVAC, it wasn’t designed to be used, it wasn’t useful at all, it was really crazy. But home computers, which were understandable and usable, became critically important.

RML: While we’re on the subject of computers and electronics, I think that if people had looked differently at the evolution of the internet, maybe we would find ourselves in a different position today. Is there anything you would have done differently in rolling out the internet so that it serves the purpose that was intended—mainly as an information platform? Today it is used and abused in so many ways.

DR. NORMAN: That’s a good question, and I will answer it. Let me finish on automation and then I’ll get to the internet.


When the Macintosh came out, I said, ‘Finally a computer that works the way we think.’ I brought some of the Macintosh people in to talk about how they had done it, and I discovered some of them had been my students. That got me interested in what was going on in the computer industry, which is eventually why I retired and went off to Apple. Today, computer science understands the importance of designing for people; the specialization is called human-computer interaction (NAE Section 5). I think separating this one area into two sections weakens their impact: we ought to have a new NAE section on human-systems integration. I happen to be a member of both sections 5 and 8, but that is rare. All of us ought to be together.

Back to automation. We’ve known for years that if you’re doing a task where nothing happens for hours and hours, you can’t pay attention. I was studying human error, working with NASA Ames in Silicon Valley; they’re the world experts on aviation safety. I was applying my understanding to aviation, and that’s where I learned a lot about accidents and about how one should design for people. I developed a lot of the ideas there and eventually coedited a book, User Centered System Design,[5] about that and what was going on in the early home computers. That was the first use of the term user centered and also brought out the importance of designing for the system.

In a paper about 30 years ago I said the most dangerous part of automation is when it’s almost automated, because if it’s still manual you have to pay a lot of attention, and if it’s completely automated you don’t have to pay any attention.[6]

But when it’s almost automated, it’s really dangerous, because when something works perfectly for hours and hours, people simply cannot stay alert. They lose what we call situational awareness. Then when something unexpected happens and the automation cannot cope, it can take a long time for people to regain situational awareness and take over properly.

In aviation when pilots are flying along and suddenly the plane starts diving, the first thing they do is say, ‘Oh shit,’ and then they say, ‘What’s going on?’ But if the plane is at an altitude of 30,000–40,000 feet, they have several minutes to figure it out. Commercial aviation pilots are well trained, so most of the time they save the plane.

So now we have automation in the automobile. The problem is that people are beginning to trust the automation: overtrust. Tesla drivers provide a good example. There have been a number of deaths in Tesla autos because the better the automation, the less people will pay attention.

In an automobile drivers are not well trained. Moreover, when the automation fails, the response must be made in a fraction of a second. At 60 miles an hour, in 1 second you’ve gone 90 feet. Data show that it takes 10–20 seconds for people to figure out what’s going on and make the right response. That means they’ve gone 1000–2000 feet. Too late.
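
The numbers here are round figures, but the arithmetic behind them is easy to check; a quick illustrative sketch using the speed and reaction times mentioned above:

    MPH_TO_FT_PER_S = 5280 / 3600          # 1 mile = 5,280 ft; 1 hour = 3,600 s

    speed_ft_per_s = 60 * MPH_TO_FT_PER_S  # 88 ft/s, roughly 90 feet every second
    for reaction_time_s in (10, 20):
        distance_ft = speed_ft_per_s * reaction_time_s
        print(f"{reaction_time_s} s to regain awareness: about {distance_ft:,.0f} ft traveled")
    # Prints 880 ft and 1,760 ft, on the order of the 1000-2000 feet cited above.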

I’m actually a big fan of automation, because I think driving is dangerous. Instead of 40,000 deaths a year in the United States and 1 million in the world, automation might reduce this by 90 percent. That’s still 4000 deaths a year in the United States, but it is a dramatic improvement. I’m not a fan of partial automation, though. And I don’t like SAE’s five levels of automation, because it misses all the subtleties.

You asked me about other problems with technology. One of them is the inability to predict exactly how any technology that is adopted by hundreds of millions of people will be used. The internet is a good example of a wonderful invention that has evolved into a powerful vehicle that nobody in their wildest dreams anticipated.

I lived through the early days of the internet (when it was called the ARPAnet). I’m friends with a lot of the people who did the early design. A major problem today is the lack of security, in part because the underlying infrastructure doesn’t readily support security. Why? Because it was designed for a bunch of people trying to connect their computers so they could share the computer power.

Once, in the early days, a student at UCSD broke into somebody else’s computer and did some damage. There was a big fuss. People kept asking, ‘What should we do?’ Nobody said, ‘Let’s redesign the system.’ I talked to the student, and I said, ‘We don’t do that.’ And he stopped. In those days everybody trusted everybody.

In hindsight, of course we should have done things differently. But at the time it was a bunch of collaborative people, nobody realized it was going to take over the world. So there was no security built in, none whatsoever, and in many ways that was deliberate. Trying to add it afterward is almost impossible.

Besides, it was restricted, you could only use it if you had a DARPA contract, if you worked for the government. I remember some company sent out a big advertisement across the internet advertising their product, and wow, they were banned, they were told ‘Get off, you’re not allowed to use this.’ We could not imagine that this network could be used for advertising. When the system was designed, we could not imagine sending speech over the connections. Video? No way. Deliberate sabotage? Criminals? Malware? Fake news? And if you can’t imagine something, you can’t design a system to protect against it. Want another example of the inability to predict? Think of automobiles. They were going to reduce pollution in cities, which were covered with horseshit. Nobody predicted that automobile exhaust would be a far worse form of pollution. Who could have predicted there would be billions of automobiles?

RML: My thought is that if we leave it to scientists and technologists to develop systems, without involving let’s say social scientists, people with an understanding of human factors and so on, are we not missing an opportunity to think more deeply about how technology interacts with people? When we’re developing engineering systems, and our goal is always to serve society, to make sure that the system serves a social purpose, should we not bring social science into that conversation?

DR. NORMAN: Yes, absolutely. I’ve been making that point to the deans of engineering schools. Why do we do engineering? It’s usually for society and people. So we need to bring these courses into engineering. Engineers should understand these factors, absolutely. But it’s difficult to change engineering culture, because the thinking is ‘we don’t have room for more courses.’ I’m saying it shouldn’t be a different course, it should be taught in existing courses.

Northwestern University’s School of Engineering now requires all engineering students to take a design class as freshmen. For two thirds of the year they do a design exercise for people—they go off to the local hospital system and design things to help patients. This is wonderful. I give great credit to the dean, Julio Ottino (NAE).

Nonetheless it is wrong to say ‘we should always think of the societal consequences.’ Yes, we should, but we will almost invariably fail. We couldn’t have predicted the problems with the internet or the problems of pollution. Herb Simon, a Nobel laureate who was a friend of mine, had this wonderful statement: ‘It’s easy to predict the future, people do it all the time. The hard part is getting it right.’

I know I didn’t predict what would happen with the internet, that it would change everybody’s life. Nobody did, except maybe a few science fiction writers. They’re often the best predictors.

CHF: So much of innovation is caught up with ­unpredicted, unintended consequences.

DR. NORMAN: That’s right, because the technology changes human behavior so that suddenly people are behaving in ways we never would have expected.

CHF: Now that would be a fascinating topic for you to explore, Don—the impact of technology on human behavior in the long term. Not necessarily right now, though, because we’ve run out of time.

RML: Don, thank you very much.

CHF: Yes, this was so interesting.

DR. NORMAN: Great, thank you very much.


This interview took place June 9, 2020. It has been edited for length and clarity.

 


[1]  Atkinson RC. 1964. Studies in Mathematical Psychology. Stanford University Press.

[2]  Norman DA. 1964. A comparison of data obtained with different false-alarm rates. Psychological Review 71(3):243–46. And the area paper is Pollack I, Norman DA, Galanter E. 1964. An efficient non-parametric analysis of recognition memory experiments. Psychonomic Science 1:327–28.

[3]  The Bridge 45(1):49–55 (spring 2015).

[4]  Norman DA. 1981. Categorization of action slips. Psychological Review 88(1):1–15.

[5]  Norman DA, Draper SW, eds. 1986. User Centered System Design: New Perspectives on Human-Computer Interaction. Hillsdale NJ: Lawrence Erlbaum.

[6]  Norman DA. 1990. The “problem” of automation: Inappropriate feedback and interaction, not “over-automation.” Philosophical Transactions of the Royal Society B 327(1241):585–93.

About the Author: Don Norman (NAE), Cognitive Engineer and Author