In This Issue
The Bridge: 50th Anniversary Issue
December 20, 2020 Volume 50 Issue S
This special issue celebrates the 50th year of publication of the NAE’s flagship quarterly with 50 essays looking forward to the next 50 years of innovation in engineering. How will engineering contribute in areas as diverse as space travel, fashion, lasers, solar energy, peace, vaccine development, and equity? The diverse authors and topics give readers much to think about!

A New Categorical Imperative

Thursday, December 24, 2020

Author: Daniel Metlay

In 1973 the German philosopher Hans Jonas posed the central ethical test for modern technological society. He observed that previously the “good and evil about which action had to care lay close to the act, either in the praxis itself or in its immediate reach,” whereas a new categorical imperative now required that the “future wholeness of Man [be included] among the objects of our will” (Jonas 1973, p. 38).

This essay briefly explores one implication of Jonas’ imperative: Can the technologies that the next 50 years of engineering advancement are likely to spark remain subject to democratic control?[1] Many of those technologies will have a broad reach, increase social complexity, and deliver uncertain consequences. So establishing and sustaining organizations and processes for democratic control may prove to be a significant challenge.

Assessment of Consequences

It requires little imagination to grasp the magnitude of what is in store. Just consider two technologies whose primary capacity, transporting people and goods, is identical: an automobile and an autonomous automobile.

Relatively simple “rules of the road” govern the operation of a traditional vehicle and address a relatively narrow range of consequences that might arise, such as safety and environmental damage. In contrast, the operation of autonomous vehicles is controlled—for better or worse—by the choices embedded in (whose?) programmers’ codes. Rather than restricted consequences, large-scale introduction of autonomous vehicles is likely to have far-reaching ones, including but hardly limited to employment, privacy, security, and liability. If there are issues associated with evaluating the safety and environmental impacts of the automobile, how much more difficult it will be to weigh the wider set of effects linked to autonomous automobiles.

Democratic control of future technologies requires both epistemic insight and institutional constancy. For the first requirement, compiling a complete catalogue of consequences for any given future technology is likely to be a sizable undertaking. The well-understood availability and accessibility heuristics bias which effects come into immediate focus; analysts concentrate on the technology’s primary capacity, often ignoring “downstream” impacts.

Furthermore, a technology’s advocates naturally emphasize its benefits while diminishing its harms. For opponents, the emphasis is reversed. In general, the effects of technology are in a deep sense socially constructed. Distributions of power, gender, and status, among other things, influence what outcomes are realized and which groups and individuals are affected by them. Thus, in a highly interdependent and tightly coupled world, the more subtle effects of a technology as they reverberate over time and space may be close to incomprehensible.

Challenges of Regulation

If identifying the wide-ranging consequences of future technologies is problematic, sustaining institutions that are committed to constancy—the second requirement for democratic control—is likely to prove at least equally challenging. To illustrate, I concentrate here on the dominant mode of democratic control, formal regulation.

To regulate is to control by rule, a process that clearly necessitates having a causal appreciation of how options affect outcomes. At the most basic level, regulators must possess a thorough knowledge of the technique that undergirds the technology (e.g., the software of the 737 MAX). Moreover, they must recognize, to use another example, how pesticide limits influence the full set of consequences, including income distribution, environmental justice, and the viability of natural habitats.

Historically, the chief complaint about regulators’ behavior has been their vulnerability to capture by the very interests they oversee. Those concerns are likely to be heightened if only because the balancing of opaque, ambiguous, and incommensurate outcomes resists transparent explanation. Thus, as the demands on regulators mount and expectations of them multiply, their constancy will increasingly be threatened.

Balancing Control, Promise, and Complexity

It may be that democratic control of technology has always been questionable. Fifty years ago, the arguments by Charles Lindblom (1965) about the “intelligence of democracy” and the National Academy of Sciences’ faith in a cybernetic model for “technology assessment” (NAS 1969) seemed plausible and reassuring. Both certainly appear less so now and probably will appear even less convincing in the future. So what is to be done?

At the risk of being labeled a modern-day Cassandra or Luddite, I see no silver bullet on the horizon. Policymakers and the attentive public will have to acknowledge that democratic control of future technologies is by no means assured.[2] Resources will have to be secured to augment the analytic capacities of advocates, opponents, nongovernmental organizations, regulators, legislators, and judges. Otherwise surprises—both pleasant and unpleasant—will inexorably surface.

Moreover, safeguards will have to be strengthened to protect the competence and responsiveness of the democratic institutions charged with controlling how large-scale, complex, and disruptive future technologies are deployed. It is probably unrealistic to expect these institutions to avoid errors entirely. But if they have proactively and aggressively built up a reservoir of trust over the years, their mistakes will tend to be viewed as human and not malevolent.

Scientists, engineers, and others often speak about the “promise of technology.” And rightly so. But for all its validity, that claim typically discounts the unanticipated, the disturbing, and the dislocating “side effects” of technological innovation. In this writer’s view, society has mostly maintained control over how technologies have been executed, albeit sometimes only tenuously. But given the properties of many emerging and future technologies—broad reach, increased social complexity, and uncertain consequences—it remains a very open question whether the same conclusion will be drawn 50 years from now.


Jonas H. 1973. Technology and responsibility: Reflections on the new tasks of ethics. Social Research 40(1):31–54.

Lindblom C. 1965. The Intelligence of Democracy. New York: Free Press.

NAS [National Academy of Sciences]. 1969. Technology: Processes of Assessment and Choice. Prepared for the Committee on Science and Astronautics, US House of Representatives. Washington: National Academy Press.

[1]  By democratic control I mean the entire panoply of referenda, laws, regulations, and court decisions as well as nongovernmental actions such as social movements.

[2]  The climate change debate certainly reinforces this point.

About the Author: Daniel Metlay is a senior fellow at the B. John Garrick Institute for the Risk Sciences, University of California, Los Angeles, and senior visiting scholar at the International Institute for Science and Technology Policy, George Washington University.