Engineering Challenges
September 1, 1999, Volume 29, Issue 3

Five Worthy Ways to Spend Large Amounts of Money for Research on Environment and Resources


Author: Jesse H. Ausubel

 



The first decade of my career I carried briefcases for William A. Nierenberg (NAE), Robert M. White (NAE), and other leaders in formulating such major research programs as the World Climate Program and the International Geosphere-Biosphere Program. Working for the National Academies of Sciences and Engineering, I learned how grand programs are born, what they can do, and what they cost. Spurred by an invitation from the San Diego Science & Technology Council and hoping to rally my colleagues, I here tell my top five "worthy ways" to spend large amounts of money for research on environment and resources.(1) My top five span the oceans, land, human health, energy, and transport. All demand teams of engineers and scientists. Let’s

  1. count all the fish in the sea;
  2. verify that the extension of humans into the landscape has begun a "great reversal" and anticipate its extent and implications during the next century;
  3. assess national exposure of humans to bad things in the environment;
  4. build 5-gigawatt zero-emission power plants the size of a locomotive; and
  5. get magnetically levitated trains (maglevs) shooting through evacuated tubes.

These worthy ways cohere in the vision of a large, prosperous economy that treads lightly and emits little or nothing.

Marine Census
For a week in December 1998 I sailed above the Arctic Circle in the Norwegian Sea, precisely counting herring in the dark. Over the decades of the Cold War, Norwegians honed their submarine acoustics, listening for Soviet vessels motoring out of Murmansk. This technology, integrated with others, makes possible the first-ever reliable worldwide Census of Marine Life. I prefer to say "Census of the Fishes," conjuring beautiful images for Everyman. But humanity needs to understand the diversity, distribution, and abundance of squids, jellies, and turtles, too, and so, deferring to accurate colleagues, I call this first worthy way the Census of Marine Life. But let me make the case primarily for fishes.

Many of the world’s leading ichthyologists gathered at Scripps Institution of Oceanography in La Jolla, Calif., in March 1997 to consider what is known and knowable about the diversity of marine fishes (Nierenberg, forthcoming). The meeting attendees reported how many species are known in the world’s oceans and debated how many might remain undiscovered. Known marine fish species total about 15,000. The meeting concluded that about 5,000 yet remain undiscovered. I find the prospect of discovering 5,000 fishes a siren call, a call to voyages of discovery in little-explored regions of the Indian Ocean, along the deeper reaches of reefs, and in the midwaters and great depths of the open oceans. The adventures of discovery of Cook, Darwin, and the explorers of Linnaeus’s century are open to our generation, too.

The urgency to cope with changes in abundance of fish amplifies the adventure of discovery. In August 1998 at the Woods Hole Oceanographic Institution, the concept of the census was advanced at a workshop on the history of fished populations, some 100-200 of the 15,000-20,000 species. The assembled experts estimated that the current fish biomass in intensively exploited fisheries is about one-tenth of that before exploitation (Steele and Schumacher, forthcoming). That is, the fish in seas where commercial fishers do their best to make a living now weigh only 10 percent of the fish they sought in those seas a few decades or a hundred years ago.

Diverse observations support this estimate. For example, the diaries of early European settlers describe marvelous fish sizes and abundance off New England in the 1600s. From Scotland to Japan, commercial records document enormous catches with simple equipment during many centuries. Even now, when fishers discover and begin fishing new places, they record easy and abundant catches, for example, of orange roughy on Pacific sea mounts. Also, scientific surveys of fish stocks indicate fewer and fewer spawning fish (mothers), compared to recruits (their offspring). The ratio of spawners to recruits has fallen to 20 percent and even 5 percent of the level when surveys began. A great marine mystery is what has happened to the energy in the ecosystem formerly embodied in the commercial fish.

These two dramatic numbers, the 5,000 undiscovered fishes and the lost 90 percent of stocks, suggest the value of a much better and continuing description of life in the oceans. So, I propose a worldwide census. The census would describe and explain the diversity, distribution, and abundance of marine life, especially the upper trophic levels (higher levels of the food chain). Preoccupied by possible climatic change and the reservoirs of carbon that influence it, we have tended to assess life in the oceans in gigatons of carbon, neglecting whether the gigatons are in plankton, anchovies, or swordfish. I care what forms the carbon takes.

Three questions encapsulate the purpose of the census. What did live in the oceans? What does live in the oceans? What will live in the oceans? These three questions mean the program would have three components. The first, probably not large or expensive, would reconstruct the history of marine animal populations since human predation became important, say, the past 500 years.

The second and expensive part of the program would answer "What does live in the oceans?" and would involve observations lasting a few years. We would observe the many parts of the oceans where we have so far barely glimpsed the biology, for example, the open oceans and midwaters, and would strengthen efforts by national fisheries agencies that struggle with meager funds, personnel, and equipment to examine areas near shore where many species of commercial import concentrate.

As a maximalist, I hope to see integration and synchronization of technologies, platforms, and approaches. Acoustics are paramount because every fish is a submarine, and acousticians can now interpret tiny noises 100 kilometers away. Optics also can detect much. For example, scanning airborne lasers, known as lidars, now range far, fast, and perhaps as deep as 50 meters. Lidar surveys can also be inexpensive if the planes that carry them are drones. And, least expensive of all, smart and hungry animals are themselves motivated samplers of their environments, and we could know what they sample if we tag them. The benefits of the technologies soar if integrated. For example, acoustics, optics, and molecular and chemical methods can be combined to identify species reliably from afar.

Answering the third question, "What will live in the oceans?," requires the integration and formalization that we call models. So, the census would also have a component to advance marine ecosystem and other models to use the new data to explain and predict changes in populations and relations among them. A major outcome of the census would be an online three-dimensional geographical information system that would enable researchers or resource managers anywhere to click on a volume of water and bring up data on living marine resources reported in that area. Additionally, the observational system put in place for scientific purposes could serve as the prototype for a continuing diagnostic system for observing living marine resources. A proper worldwide census might cost a total of $1 billion over 10 years. Costly, complicated observational programs prudently begin with pilot projects, to test both techniques and political will.
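As a sketch of the kind of query such a system might answer, the toy Python below indexes hypothetical observation records by latitude, longitude, and depth and returns those falling inside a requested volume of water; all record fields and values are illustrative, not part of any planned census design.

```python
# Illustrative sketch only: a toy "click on a volume of water" query for a
# census-style database. Record fields and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    species: str
    lat: float      # degrees north
    lon: float      # degrees east
    depth_m: float  # meters below the surface
    count: int

def in_volume(obs, lat_range, lon_range, depth_range):
    """True if the observation lies inside the requested box of ocean."""
    return (lat_range[0] <= obs.lat <= lat_range[1]
            and lon_range[0] <= obs.lon <= lon_range[1]
            and depth_range[0] <= obs.depth_m <= depth_range[1])

# Hypothetical records standing in for acoustic, optical, and tagging data.
records = [
    Observation("herring", 68.0, 5.0, 150.0, 12000),
    Observation("orange roughy", -43.0, 172.0, 900.0, 300),
]

# "Click" on a volume: Norwegian Sea, 0-300 m depth.
hits = [r for r in records
        if in_volume(r, (60.0, 75.0), (-5.0, 15.0), (0.0, 300.0))]
for r in hits:
    print(r.species, r.count)
```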

Not only technology and stressed fisheries but also an international treaty to protect biodiversity make the time ripe for this worthy way. The biodiversity convention now has many signatories but uncharted national obligations and resources. Acousticians, marine engineers, marine ecologists, taxonomists, statisticians, and others should join their talents to make the Census of Marine Life happen. In fact, some of us, supported by the Alfred P. Sloan Foundation, are trying.(2)

The Great Reversal
Humanity’s primitive hunting of the oceans has damaged marine habitats and populations. Fortunately, on the land where humanity stands, engineering and science have infused farming and logging, initiating the "great reversal." The great reversal refers to the contraction of the human presence in nature, after millennia of extension, as measured in area (square kilometers or hectares). Simple area is the best single measure of human disturbance of the environment.

People transform land by building, logging, and farming (Waggoner et al., 1996). The spread of the built environment includes not only roads, shopping centers, and dwellings, but also lawns, town gardens, and parks. In the United States the covered land per capita ranges from about 2,000 square meters (m2) in states where travel is fast, like Nebraska, to less than a third as much in slower, more urban New York. The 30 million Californians, who epitomize sprawl, in fact average 628 m2 of developed land each, about the same as New Yorkers.

The transport system and the number of people in an area basically determine the amount of covered land. Greater wealth enables people to buy higher speed, and when transit quickens, cities spread. Average wealth and numbers will grow, so cities will take more land.

What are the areas of land that may be built upon? The U.S. population is growing fast, with about another 100 million people expected over the next 75 years, when the world is likely to have about 10 billion. At the California rate of 600 m2 each, the total U.S. increase would consume 6 million hectares, or about 15 percent of California. Globally, if everyone builds at the present California rate, 4 billion more people would cover about 240 million hectares, or six to seven Californias. By enduring crowding, urbanites spare land for nature. By enduring more crowding, they could spare more. Still, cities will take more land. Can changes in logging and farming offset the urban sprawl?
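The arithmetic behind these estimates is simple multiplication of people by per-capita developed area; a minimal Python sketch, assuming California covers roughly 40 million hectares, reproduces the figures in the text.

```python
# Rough check of the land-consumption arithmetic in the text.
M2_PER_HECTARE = 10_000
CALIFORNIA_HA = 40e6            # ~40 million hectares (approximate)
per_capita_m2 = 600             # developed land per person, California rate

us_growth = 100e6               # ~100 million more Americans
world_growth = 4e9              # ~4 billion more people worldwide

us_ha = us_growth * per_capita_m2 / M2_PER_HECTARE
world_ha = world_growth * per_capita_m2 / M2_PER_HECTARE

print(f"U.S. increase: {us_ha/1e6:.0f} million ha "
      f"({us_ha/CALIFORNIA_HA:.0%} of California)")       # ~6 Mha, ~15%
print(f"World increase: {world_ha/1e6:.0f} million ha "
      f"({world_ha/CALIFORNIA_HA:.1f} Californias)")       # ~240 Mha, ~6
```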

Declining Wood Use
Forests are cut to clear land for farms and settlements and to obtain fuel, lumber, and pulp (Wernick et al., 1997). In America, from the time of European settlement until 1900 we chopped fervidly and made Paul Bunyan a hero. Since 1900, however, America’s forested area has remained level, and since 1950 the volume of wood on American timberland has grown 30 percent. In the same interval, European forests have increased about 25 percent in volume. The intensity of U.S. wood use, defined as the wood product consumed per dollar of gross domestic product, has declined about 2.5 percent annually since 1900. In 1998 an average American consumed half as much timber as a counterpart in 1900.

In the United States, the likely continuing fall in intensity of use of forest products should more than counter the effects of growing population and affluence, leading to an average annual decline of perhaps 0.5 percent in the amount of timber harvested for products. A conservative 1.0 percent annual improvement in forest growth would compound the benefits of steady or falling demand and could shrink the area affected by logging by 1.5 percent annually. Compounded, the 1.5 percent would shrink the extent of logging by half in 50 years. If one-half of this amount occurs by not cutting areas that are now cut, the area spared is 50 million hectares, one-third more than the area of California. Changing technologies, tastes, and economics create similar timberland patterns in numerous countries. Since 1990 forests have increased in 44 of 46 temperate countries, excepting the Czech Republic and Azerbaijan.
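The halving follows from ordinary compounding of the stated rates, as the short check below shows.

```python
# Compounding the stated rates: 0.5%/yr less harvest plus 1.0%/yr better
# forest growth ~= 1.5%/yr less area logged; over 50 years, roughly a halving.
annual_shrink = 0.015
years = 50
remaining = (1 - annual_shrink) ** years
print(f"Logged area after {years} years: {remaining:.0%} of today's")  # ~47%
```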

The rising productivity of well-managed forests should comfortably allow 20 percent or less of today’s forest area of about 3 billion hectares to supply world commercial wood demand in the middle of the twenty-first century (Sedjo and Botkin, 1997). Unmanaged forests now yield yearly an average of 1-2 cubic meters (m3) of commercially valuable species per hectare. The potential yield in secondary temperate forests ranges between 5 and 10 m3. Many commercial plantation forests now reliably produce more than 20 m3 per year, and experimental plots have yielded over 60 m3.

In poor regions of tropical countries such as Brazil, Indonesia, and Congo, the dominant force stressing forests remains the struggle to subsist. During the last two decades, the removal of tropical forests has been estimated at 1 percent per year. Until overcome by better livelihoods, cheap land, cheaper fuels, superior alternatives to wood in the marketplace, or taboos, the one-time conversion of forests to money, cropland, or fuel will continue. Nevertheless, global expansion of forests and rising incomes are encouraging. Indeed, in Latin America alone, about 165 million hectares (or four Californias) once used for crops and pasture have reverted to secondary forest.

This brings us to farms. For centuries farmers expanded cropland faster than the population grew, and thus cropland per person rose. Fifty years ago farmers stopped plowing up more nature per capita, initiating the great reversal (Figure 1). Meanwhile, growth in calories in the world’s food supply has continued to outpace population, especially in poor countries. Per hectare, farmers have lifted world grain yields about 2 percent annually since 1960.

Frontiers for agricultural improvement remain wide open, as average practice moves steadily toward the present yield ceiling and the ceiling itself keeps rising. On the same area, the average world farmer consistently grows about 20 percent of the corn of the top Iowa farmer, and the average Iowa farmer advances in tandem about 30 years behind the yield of his or her most productive neighbor. While an average Iowa corn farmer now grows 8 tons per hectare, top producers grow more than 20 tons, and the world average for all crops is about 2 tons. On 1 hectare the most productive farmers now make the calories for a year for 80 people; their grandparents struggled to make the calories for 3.

High and rising yields are today the fruit of precision agriculture. Technology and information help the grower use precise amounts of inputs -- fertilizer, pesticides, seed, water -- exactly where and when they are needed. Precision agriculture includes grid soil sampling, field mapping, variable rate application, and yield monitoring tied to global positioning. Precision agriculture is frugal with inputs, like other forms of lean production that now lead world manufacturing.

If, during the next 60-70 years, the world farmer reaches the average yield of today’s U.S. corn grower (less than 40 percent of today’s ceiling), 10 billion people, eating on average as people now do, will need only half of today’s cropland. The land spared exceeds the area of Amazonia. This sparing will happen if farmers maintain the yearly 2 percent worldwide growth of grains achieved since 1960. In other words, if innovation and diffusion continue as usual, feeding people will not stress habitat for nature. Even if the rate of global improvement falls to half, an area the size of India will revert from agriculture to woodland or other uses. A meaty U.S. diet of 6,000 primary calories per day doubles the difficulty or halves the land spared.
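The claim that 10 billion people could eat from half of today’s cropland is another compounding exercise; the sketch below runs the stated rates over an illustrative 65-year horizon, holding diets constant as the text assumes.

```python
# Yield compounding vs. population growth, per the rates in the text.
yield_growth = 0.02             # 2% per year, the post-1960 world grain trend
years = 65                      # roughly "the next 60-70 years"
population_ratio = 10e9 / 6e9   # 10 billion eaters vs. ~6 billion today

yield_ratio = (1 + yield_growth) ** years          # ~3.6x today's yields
cropland_needed = population_ratio / yield_ratio   # fraction of today's area
print(f"Yields rise {yield_ratio:.1f}-fold; "
      f"cropland needed: {cropland_needed:.0%} of today's")   # ~46%
```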

In summary, if an additional 4 billion people pave and otherwise develop land at the present rate of Californians, cities will consume about 240 million hectares. This area appears likely to be offset by land spared from logging in the United States and other countries that are now reducing their cutting of forests. The land likely to be spared from crops (over the time it takes to reach 10 billion people) suggests a net worldwide return to nature of lands equal to India, or more than six Californias.

On land as in the oceans, anecdotes, affection for nature, and the plight of poor farmers and loggers will impel nations to spend and prohibit. The goal of my second worthy way, verifying and forecasting the probable extent of the great reversal, is first to guide and then to strengthen these actions so they will produce the hoped-for conservation and restoration unalloyed by the disillusionment of failure. The distribution of lands spared will greatly affect the chances re-created for flora and fauna.

The research for the great reversal includes observations as well as experiments and analyses. In many parts of the world, routine aerial surveys of land use, confirmed by ground measurements, remain far from complete or usefully periodic. Geographers, foresters, agronomists, ecologists, agricultural and civil engineers, and technologists need to agree on definitions, protocols, and priorities for building the world land information system. The potential of intensively managed forests exemplifies the need for experiment and analysis.

Frameworks for studying the great reversal exist in the large international Global Change Program of environmental research and in joint efforts of the World Bank and World Wildlife Fund for forest conservation. These programs hunger for a feasible, attractive technical vision. Excluding costs for satellites, which I believe have already contributed what answers they can to this question, I estimate that for about $100 million we could verify the great reversal and forecast its probable extent. The information would chart a sound and grand new strategy for conserving the landscape and the other animals with which we share it.

Human Exposure Assessment
My first two ways to spend have been worthy because they would deepen our understanding of sea and land and create the context for protecting other life while we feed ourselves. My third worthy way to spend concerns what we humans absorb from the environment. Recall our high fears and outlays for ionizing radiation, pesticides, and asbestos.

Like other animals, we take in water, food, air, and dust. Given our genes, we are what we eat in the broadest sense, yet little research chronicles actual human exposures. Exposure estimates often trace back to very indirect measures, such as chimney emissions, and our habits and habitats seem overlooked. One wonders why so much exposure measurement and regulation have concentrated on traffic intersections when we spend only one hour each day outside or traveling, and are usually home sleeping (Wiley et al., 1991). Moreover, exposures to even a single chemical may occur from contact with several media (air, water), via several pathways (hand-to-mouth transfers, food), and through several routes (inhalation, oral, dermal).

In 1994, to gather information about the magnitude, extent, and causes of human exposures to specific pollutants and measure the total "dose" of selected pollutants that Americans receive, the Environmental Protection Agency (EPA) launched a National Human Exposure Assessment Survey (NHEXAS) (Journal of Exposure Analysis and Environmental Epidemiology, 1995). Its ultimate goal is documenting the status and trends of national exposure to risky chemicals both to improve risk assessments and to evaluate whether risk management helps.

For pilot studies, EPA chose metals, volatile organic compounds, pesticides, and polynuclear aromatics, because of their toxicity, prevalence in the environment, and relative risk to humans--at least as perceived by EPA and perhaps the general public. I never forget Bruce Ames’s work showing that 99.99 percent of the pesticides we ingest are natural (Ames et al., 1990). In any case, EPA’s chosen classes of compounds and the expected combination of chemicals, exposure media, and routes of exposure would demonstrate and challenge currently available analytical techniques.

Phase I of this effort, including demonstration and scoping projects, may already be the most ambitious study of total human exposure to multiple chemicals on a community and regional scale. It has focused on exposure of people to environmental pollutants during their daily lives. Several hundred survey participants wore "personal exposure monitors" to sample their microenvironments, and NHEXAS researchers measured levels of chemicals to which participants were exposed in their air, foods, water and other beverages, and in the soil and dust around their homes. They also measured chemicals or their metabolites in blood and urine provided by participants. Finally, participants completed time-activity questionnaires and food diaries to help identify sources of exposure to chemicals and to characterize major activity patterns and conditions of the home environment. Sample collection began in 1995 and continued into early 1998. Publications and databases are expected soon.

The main purpose of the pilot study is to find the best way to conduct the full national human exposure assessment survey. Implementing representative monitoring projects to estimate the magnitude, duration, frequency, and the spatial and temporal distribution of human exposures for the United States will be a large task, involving chemists, biologists, statisticians, and survey researchers. I hope clever engineers can lighten, integrate, and automate the measurement processes, and speed the reporting efforts.

I learned of NHEXAS while serving for three years on the executive committee of EPA’s Science Advisory Board. NHEXAS was an unpolished diamond in a lackluster research portfolio. Neither EPA’s leadership nor the Congress appreciated the survey, so it has proceeded slowly and barely. I estimate the cost to perform NHEXAS right might be $200 million over six to seven years. I believe the United States should make a strong commitment to it, though not exactly as underway. It needs a less "toxic" bias. A national scientific conference to adjust and advance the concept might be timely.

The eventual outcomes of NHEXAS should include a comprehensive total human exposure database and models that accurately estimate and predict human exposures to environmental chemicals for both single and multiple pathways. The models would link environmental and biological data with information on human activity to estimate total human exposures to various chemicals and combinations, and thus contribute to better risk assessments. We can establish proper baselines of normal ranges of exposure and identify groups likely to be more exposed.

We know surprisingly little about our exposures. For decades researchers have measured and tracked pollutants one at a time, often faddishly. This third worthy way can reduce the uncertainty about exposure and indeed make exposure a science. Understanding aggregate exposures, we may find surprisingly powerful levers to reduce ambient bads or increase goods.

Zero-Emission Power Plants
One way to finesse the question of exposure, whether for humans or green nature, is with industries that generate zero emissions. A growing group of engineers has been promoting the concept of industrial ecology, in which waste tends toward zero, either because materials that would become waste never enter the system, or because one manufacturer’s wastes become food for another in a nutritious industrial food chain, or because the wastes are harmless. I, for one, certainly want zero emissions of poisonous elements such as lead and cadmium.

For green nature exposed outdoors, however, the major emission is carbon, and reducing this emission to zero is the purpose of my fourth worthy way. Today, industries annually emit about 6 gigatons of carbon to the atmosphere, or a ton for each of the planet’s 6 billion people. The mounting worry is that these and more gigatons likely to be emitted will make a punishing climate for nature exposed outdoors.

Most of the carbon comes, of course, from fuel to energize our economies, and an increasing portion of the energy is in the form of electricity. Since Thomas Edison, the primary energy converted to electricity has grown in two sequential, long S curves until it is now about 40 percent of all energy humanity uses. Although electricity consumption has lately leveled at the top of its second S curve, I believe it will maintain an average 2-3 percent annual growth through the twenty-first century. In the information era, consumers will surely convert even more of their primary energy to electricity. And, after all, 2 billion people still have no electricity. In 2100, after a hundred years of 2-3 percent growth per year, the average per capita electricity consumption of the world’s 10 billion people or so would be raised only to today’s average U.S. per capita consumption.
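A rough check of that century-long claim, using assumed 1999 per-capita figures of about 2,200 kWh for the world and 12,500 kWh for the United States (my approximations, not figures from the text), shows that growth in the stated range lands the 2100 world average in the neighborhood of today’s U.S. level.

```python
# Rough check of the century-long electricity growth claim. The per-capita
# figures below are approximate 1999 values, assumed for illustration only.
growth = 0.025                 # mid-range of the 2-3% per year in the text
years = 100
world_percap_kwh = 2_200       # assumed world average, ~1999
us_percap_kwh = 12_500         # assumed U.S. average, ~1999
pop_now, pop_2100 = 6e9, 10e9

total_ratio = (1 + growth) ** years          # total consumption multiplier
percap_2100 = world_percap_kwh * total_ratio * pop_now / pop_2100
print(f"World per-capita in 2100: ~{percap_2100:,.0f} kWh "
      f"(vs. ~{us_percap_kwh:,} kWh in the U.S. today)")
```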

To eliminate carbon emissions, I must first ask what fuel is used to generate electricity. As shares of primary energy sources have evolved from wood and hay to coal, to oil, and then to natural gas, with more hydrogen per carbon atom, the energy system has gradually and desirably been decarbonized (Ausubel, 1991). Nuclear fuel, probably, or possibly some other noncarbon alternative, will eventually close the hydrocarbon fuel era. In the interim, however, can we find technology consistent with the evolution of the energy system to economically and conveniently dispose of the carbon produced from making kilowatts? My fourth worthy way -- zero-emission power plants (ZEPPs) -- offers just that: a practical means to dispose of the carbon from generating electricity consistent with the future context.

Natural Gas First
The first step on the road to ZEPPs is to focus on natural gas, simply because it will be the dominant fuel, providing perhaps 70 percent of primary energy around the year 2030 (Ausubel et al., 1988). Although natural gas is far leaner in carbon than other fossil fuels, in 2030 it can be expected to contribute about 75 percent of total CO2 emissions.

One criterion for ZEPPs is that they must work on a big scale. In 2060, the expected peak use of 30 trillion cubic meters of natural gas would produce two to three times today’s carbon emissions. Even in 2020, we could already need to dispose of carbon from natural gas alone equal to half of today’s emissions from all fuels.
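The conversion from gas volume to carbon is straightforward if one assumes natural gas carries roughly half a kilogram of carbon per cubic meter (an approximate figure of my own, not from the text).

```python
# Converting the projected gas use into carbon, assuming roughly 0.5 kg of
# carbon per cubic meter of natural gas (mostly methane); the factor is an
# illustrative assumption.
gas_m3 = 30e12                 # projected peak annual use, ~2060
carbon_per_m3_kg = 0.5
todays_emissions_gt = 6.0      # ~6 Gt C per year from all fuels today

carbon_gt = gas_m3 * carbon_per_m3_kg / 1e12
print(f"Carbon from gas at peak: ~{carbon_gt:.0f} Gt C per year, "
      f"about {carbon_gt / todays_emissions_gt:.1f}x today's total")   # ~2.5x
```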

Big total use means big individual ZEPPs because the size of generating plants grows even faster than use. In concert with the overall electric system, the maximum size of power plants has grown in S-shaped pulses. One pulse, centered in the 1920s, expanded power plants from a few tens of megawatts (MW) to more than 300. After a stagnant period, another pulse centered near 1970 quadrupled the maximum plant size to about 1,400 MW. Now the world nears the end of another period of stagnation in maximum size. The increased electricity use of the next 50 years should lift the peak plant size to 5,000 MW, or 5 gigawatts (GW), and engineers must prepare for it. For reference, the New York metropolitan area now draws more than 12 GW on a peak summer day.

Plants grow because large scale means lower cost, assuming the technology can cope. Crucially for controlling emissions, one big plant emits no more than many small plants, but emissions from a single source are easier to collect. We cannot solve the carbon question if we need to collect emissions from millions of microturbines.

So far, I’ve specified this worthy way as a search for big ZEPPs fueled by natural gas. But bigger ZEPPs mean transmitting immense power from larger and larger generators through a large steel shaft spinning at a speed such as 3,000 revolutions per minute (RPM). One way around the limits of mechanical power transmission may be shrinking the machinery. Begin with a very-high-pressure CO2 gas turbine in which the fuel burns with oxygen. Pressures ranging from 40 to 1,000 atmospheres would enable the CO2 to recirculate as a liquid, and the liquid combustion products could then be bled out.

Fortunately for transmitting power, the very high pressures shrink the machinery in a revolutionary way and permit very fast RPMs for the turbine. The generator could then also turn very fast, operating at high frequency, with appropriate power electronics to slow the output to 50 or 60 cycles. The attraction of higher RPMs has been recognized for some time, and high-RPM generators were included in the latest version of the gas turbine for the high-temperature reactor designed by General Atomics Corporation.

Materials issues lurk and solutions are expensive to test. Problems of stress corrosion and cracking will arise. The envisioned temperature of 1,500°C has challenged aviation engineers for some time. Fortunately, engineers have recently reported a tough, thermally conductive ceramic strong up to 1,600°C in air (Ishikawa et al., 1998).

Although combustion within CO2 does not appear to be a general problem, some difficulties may arise at the high temperatures and pressures. Also, no one has yet made burners for pressures as high as those we consider. Power electronics to slow the cycles of the alternating current raise big questions, and so far, the costs of such power electronics exceed the benefits. The largest systems for conversion between alternating and direct current are now 1.5 GW and can handle 50-60 cycles. What we envision is beyond the state of the art, but power electronics is still young, meaning expensive and unreliable, and we are thinking of the year 2020 and beyond, when this worthy way could make it mature, cheap, and reliable. Engineers are already considering post-silicon power electronics with diamond plasma switches.

Providing the requisite oxygen for a ZEPP, say, 1,000 tons per hour for a 5-GW plant, also exceeds present capacity (about 250 tons of oxygen per hour, produced by cryoseparation), but it could be done. Moreover, a cryogenic plant may introduce a further benefit. The power equipment suppliers tend to think of very large, slow-rotating machines for high unit power, which cause problems due to the mechanical resistance of materials. With a cryogenic plant nearby, we might turn to superconductors that work in the cold it provides.

With a ZEPP fueled by natural gas transmitting immense power at 60 cycles, the next step is to sequester the waste carbon. Because of the high pressure, the waste carbon is, of course, already easily handled liquid CO2. In principle, aquifers can store CO2 forever if their primary rocks are silicates, which with CO2 become stable carbonates and silica (SiO2). The process is the same as rocks weathering in air. The Dutch and Norwegians have done a lot of research on CO2 injection in aquifers, and the Norwegians have already started injecting.

Opportunities for storing CO2 will join access to customers and fuel as key factors in determining plant locations. Fortunately, access to fuel may become less restrictive. Most natural gas travels far through a few large pipelines, which makes these pipelines the logical sites for generators. The expanding demand will require a larger and wider network of pipelines, opening more sites for ZEPPs.

Another criterion is overall projected plant efficiency, now reaching about 50 percent. Colleagues at Tokyo Electric Power calculate that the efficiency of the envisioned ZEPP could be 70 percent.

In short, the fourth worthy way is a supercompact (1-2 m in diameter), superpowerful (potentially 10 GW, or double the expected maximum demand), superfast (30,000 RPM) turbine putting out electricity at 60 cycles, plus CO2 that can be sequestered. ZEPPs the size of a locomotive or eventually an automobile, attached to gas pipelines, might replace the fleet of carbon-emitting nonnuclear monsters now cluttering our landscape.

We propose introducing ZEPPs starting in 2020, leading to a fleet of 500 5-GW ZEPPs by 2050. This does not seem an impossible feat for a world that built today’s worldwide fleet of some 430 nuclear power plants in about 30 years. Combined with the oceans safely absorbing 2-3 gigatons of carbon yearly, ZEPPs, together with another generation of nuclear power plants in various configurations, can stop the CO2 increase in the atmosphere near 2050, well below the doubling of atmospheric CO2 about which people worry, without sacrificing energy consumption.
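To gauge the scale of that fleet, the sketch below estimates its output and the carbon it would keep out of the atmosphere, using an assumed capacity factor and fuel carbon intensity together with the 70 percent efficiency cited above; the assumptions are illustrative only.

```python
# Illustrative scale of the proposed fleet: how much carbon 500 5-GW gas-fired
# ZEPPs would capture rather than emit. The capacity factor and fuel carbon
# intensity are assumptions; the 70% efficiency is the figure cited in the text.
plants, gw_each = 500, 5
capacity_factor = 0.8                 # assumed
efficiency = 0.70                     # cited projected ZEPP efficiency
carbon_kg_per_gj_fuel = 15.0          # assumed, natural gas

elec_twh = plants * gw_each * capacity_factor * 8760 / 1000   # ~17,500 TWh/yr
fuel_gj = elec_twh * 3.6e6 / efficiency                       # 1 TWh = 3.6e6 GJ
carbon_gt = fuel_gj * carbon_kg_per_gj_fuel / 1e12
print(f"Fleet output: ~{elec_twh:,.0f} TWh/yr; "
      f"carbon kept out of the air: ~{carbon_gt:.1f} Gt C/yr")
```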

Research on ZEPPs could occupy legions of academic researchers and provide new purpose to the Department of Energy’s laboratories, working on development in conjunction with private-sector companies. The fourth worthy way to spend merits tens of billions in R&D because the ZEPPs will form a profitable industry worth much more to those who can capture the expertise to design, build, and operate them. Like all my worthy ways, ZEPPs need champions.

To summarize, we have searched for technologies that handle the separation and sequestration of amounts of carbon matching future fuel use. Like the 747 jumbo jets that carry about 80 percent of passenger kilometers, compact and ultrapowerful ZEPPs could be the workhorses of the energy system in the middle of the next century.

Maglevs
Cutting emissions and the footprints of farming, logging, and power, we naturally also wonder about transport. Transport now covers Earth with asphalt ribbons and roars through the air leaving contrails that could prove harmful. With cars shifting to fuel cells fed with hydrogen over the next few decades, the air transport system and its jet fuel can become emissive enemy number one. Fortunately, the time is right for innovation in mobility, my fifth worthy way.

Since 1880, U.S. per capita mobility, including walking, has increased 2.7 percent per year, and French mobility about the same. Europeans currently travel about 35 kilometers per day, and thus also per hour, because people travel about 1 hour per day. Of this, Europeans fly only about 20 seconds, or 3 kilometers, per day. A continuing rise in mobility of 2.7 percent per year means a doubling in 25 years, or an additional 35 kilometers per day, about 3 minutes on a plane. Three minutes per day equal about one round-trip per month per passenger. Americans already fly 70 seconds daily, so 3 minutes certainly seems plausible for the average European a generation from now. The jet set in business and society already flies an average of 30 minutes per day over the year. However, for the European air system, the projected rise in mobility would require a 14-fold increase in 25 years, or about 12 percent per year. The United States would need a 20-fold increase in 50 years. A single route that carries one million passengers per year per direction would require dozens of takeoffs and landings of jumbo jets daily. At a busy airport the jumbos would need to take off like flocks of birds. This is unlikely. We need a basic rethinking of planes and airport logistics.
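The chain from 2.7 percent growth to a few minutes aloft per day can be retraced in a few lines, assuming a jet cruise speed of roughly 700 kilometers per hour (my assumption).

```python
# The mobility arithmetic: 2.7%/yr compounding, and the extra distance
# expressed as daily flight time at an assumed ~700 km/h cruise speed.
growth, years = 0.027, 25
today_km_per_day = 35
cruise_kmh = 700                          # assumed jet cruise speed

multiplier = (1 + growth) ** years        # ~2: mobility doubles in 25 years
extra_km = today_km_per_day * (multiplier - 1)
extra_minutes = extra_km / cruise_kmh * 60
print(f"Mobility multiplier: {multiplier:.2f}; extra ~{extra_km:.0f} km/day "
      f"= ~{extra_minutes:.0f} minutes aloft per day")        # ~3 minutes
```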

The history of transport can be seen as a striving to bring extra speed in response to the progressively expanding level of income and the fixed amount of time we are willing to expose ourselves to travel (Ausubel et al., 1998). According to a rhythmic historical pattern (Figure 2), a new, fast transport mode should enter about 2000. The steam locomotive went commercial in 1824, the gasoline engine in 1886, and the jet in 1941. In 1991, the German Railway Central Office gave the magnetic levitation (maglev) system a certificate of operational readiness, and a Hamburg-Berlin line is now under construction. The essence of the maglev is that magnets lift the vehicle off the track, thus eliminating friction, and that activation of a linear sequence of magnets propels the vehicle.

Maglevs have many advantages: not only high mean speed, but acceleration, precision of control, and absence of noise and vibration. The vehicles can be fully passive, propelled by forces generated by electrical equipment in the guideway, and need no engine on board. Maglevs also provide the great opportunity for electricity to penetrate transport, the end-use sector from which it has been most successfully excluded.

The induction motors that propel maglevs can produce speeds in excess of 800 kilometers per hour, and in low-pressure tunnels speeds can reach thousands of kilometers per hour. In fact, electromagnetic linear motors allow constant acceleration. Constant acceleration maglevs (CAMs) could accelerate for the first half of the ride and brake for the second, thus offering a very smooth ride with high accelerations.

High speed does entrain problems: aerodynamic and acoustic as well as energetic. In tunnels, high speed requires large cross sections. The neat solution is partially evacuated tubes, which must be straight to accommodate high speeds. Low pressure means a partial vacuum comparable to an altitude of 15,000 meters. Reduced air pressure helps because above about 100 kilometers per hour the main energy expense to propel a vehicle is air resistance. Low pressure directly reduces resistance and opens the door to high speed with limited energy consumption. Tunnels also solve the problem of landscape disturbance.

For a subsurface network of such maglevs, the cost of tunneling will dominate. The Swiss are actually considering a 700-kilometer system. For normal high-speed tunnels, the cross-section ratio of tunnel to train is about 10 to 1 to handle the shock wave. With a vacuum, however, even CAMs could operate in small tunnels, fitting the size of the train. In either case, the high fixed cost of the infrastructure will require the system to run where traffic is intense or huge flows can be created, that is, on trunk lines. Because the vehicles will be quite small, they would run very often. In principle, they could fly almost head to tail, 10 seconds apart.

Initially, maglevs will likely serve groups of airports, a few hundred passengers at a time, every few minutes. They might become profitable at present air tariffs at 50,000 passengers per day. In essence, maglevs will be the choice for future metros at several scales: urban, possibly suburban, intercity, and continental. The vision is small vehicles, rushing from point to point. Think of the smart optimizing elevators in new skyscrapers. With maglevs, the issue is not the distance between stations, but the waiting time and mode changes, which must be minimized. Stations need to be numerous and trips personalized, with zero stops or perhaps one.

Technical Considerations
Technically, among several competing designs, the side-wall suspension system with null-flux centering seems especially attractive: simple, easy to access for repair, and compact (U.S. Department of Transportation, 1993). Critically, it allows vertical displacement and therefore switches with no moving parts. Vertical displacement can be precious for stations, where trains would pop up and line up without pushing other trains around. It also permits a single network, with trains crossing above or below. Alternatively, a hub-and-spoke system might work. This design favors straight tubes and one change.

The suspension system invites a comparison with air travel. Magnetic forces achieve low-cost hovering. Planes propel themselves by pushing air back; the kinetic energy given to that air is energy lost. Maglevs do not push air back but, in a sense, push against Earth, a large mass that can provide momentum at negligible energy cost. The use of magnetic forces for both suspension and propulsion appears to create great potential for low travel-energy cost, conceptually reduced by one to two orders of magnitude with respect to energy consumption by airplanes of similar performance.

Because maglevs carry neither engines nor fuel, the vehicle can be light and the payload fraction high. Airplanes at takeoff, cars, and trains all now weigh about 1 ton per passenger transported. A horse is not much lighter. Thus, the cost of transport has mainly been the cost of moving the vehicle itself. Maglevs might weigh 200 kilograms per passenger.

At intercity and continental scale, maglevs could provide supersonic speeds where supersonic planes cannot fly. For example, a maglev could fuse all of mountainous Switzerland into one functional city in ways that planes never could, with 10-minute travel times between major present city pairs.

Traveling in a CAM for 20 minutes, enjoying acceleration like the pull of a sports car, a woman in Miami could go to work in Boston and return to cook dinner for her children in the evening. Bostonians could symmetrically savor Florida, daily. With appropriate interfaces, the new trains could carry hundreds of thousands of people per day, saving cultural roots without impeding work and business in the most suitable places.
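The 20-minute figure follows from the kinematics of constant acceleration: accelerate over the first half of the distance, brake over the second, so the trip time is twice the square root of distance over acceleration. The sketch below assumes a Miami-Boston distance of about 2,000 kilometers and an acceleration of half a gravity, roughly a sports car's pull; both numbers are illustrative.

```python
# Kinematics of a constant-acceleration maglev (CAM) trip: accelerate for the
# first half of the distance, brake for the second. Distance and acceleration
# are illustrative assumptions (Miami-Boston ~2,000 km; ~0.5 g).
from math import sqrt

distance_m = 2_000e3
accel = 0.5 * 9.81                       # m/s^2

trip_s = 2 * sqrt(distance_m / accel)    # t = 2 * sqrt(d / a)
peak_kmh = accel * (trip_s / 2) * 3.6    # speed at the midpoint
print(f"Trip time: ~{trip_s/60:.0f} minutes; peak speed ~{peak_kmh:,.0f} km/h")
```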

Seismic activity could be a catch. In areas of high seismic activity, such as California, safe tubes (like highways) might not be a simple matter to design and operate.

Although other catches surely will appear, maglevs should displace the competition. Intrinsically, in the CAM format they have higher speed and lower energy costs and could accommodate much greater traffic density than air travel. They could open new passenger flows on a grand scale during the twenty-first century with zero emissions and minimal surface structures.

We need to prepare a transport system that can handle huge fluxes of traffic. Growth of 2.7 percent per year in passenger kilometers traveled means not only the doubling of mobility in 25 years but a 16-fold increase in a century, which is the rational time for conceiving a transport system. The infrastructures last for centuries. They take 50-100 years to build, in part because they also require complementary infrastructures. Moreover, the new systems take 100 years to penetrate fully at the level of the consumer. Railroads began in the 1820s and peaked with consumers in the 1920s.

It is time for my fifth worthy way, to conceive in detail maglevs for America and to develop the required skills, such as tunneling. Universities should be producing the needed engineers, operations researchers, and physicists, and government should partner with industry on the prototypes. Like ZEPPs, maglevs will bring huge revenues to those who can design, build, and operate them, anywhere in the world.

Closing Remarks
A worldwide census of marine life can reawaken the adventure of the age of discovery and teach us how to spare marine habitats. A study of the great reversal of human extension into the landscape can inspire us to lift yields and spare land for nature. NHEXAS can show what we absorb and how to spare exposures. ZEPPs can generate many gigawatts without harmful emissions, sparing the climate. And maglevs can multiply our mobility while sparing air and land. These worthy ways to spend on environment and resources cohere in the vision of a large prosperous human economy that treads lightly and emits little or nothing.

Research is a vision or dream in which we, like Leonardo da Vinci, simulate a machine first in our mind. Leonardo’s powers of visualization, or one might say of experiment, were so great that his machines work, even if the letting of contracts and construction is delayed 500 years. Building machines is often costly. Dreaming is cheap. Let us start now with these five worthy ways to spend that can make dreams of improving the human condition and environment so irresistibly beautiful and true that societies, especially America, hasten to let the contracts and build the machines that can spare planet Earth -- soon instead of after a delay of 500 years.

References

 

Ames, B. N., M. Profet, and L. S. Gold. 1990. Dietary pesticides (99.99 percent all natural). Proceedings of the National Academy of Sciences USA 87:7777-7781.
Ausubel, J. H. 1991. Energy and environment: The light path. Energy Systems and Policy 15:181-188.

Ausubel, J. H., A. Gruebler, and N. Nakicenovic. 1988. Carbon dioxide emissions in a methane economy. Climatic Change 12:245-263.

Ausubel, J. H., C. Marchetti, and P. S. Meyer. 1998. Toward green mobility: The evolution of transport. European Review 6(2):137-156.

Ishikawa, T., S. Kajii, K. Matsunaga, T. Hogami, Y. Kohtoku, and T. Nagasawa. 1998. A tough, thermally conductive silicon carbide composite with high strength up to 1600°C in air. Science 282(5392):1295-1297.

Journal of Exposure Analysis and Environmental Epidemiology 5(3). 1995. Special Issue on NHEXAS.

Nierenberg, W. A. Forthcoming. The diversity of fishes: The known and unknown. Oceanography.

Sedjo, R. A., and D. Botkin. 1997. Using forest plantations to spare natural forests. Environment 39(10):14-20, 30.

Steele, J. H., and M. Schumacher. Forthcoming. On the history of marine fisheries. Oceanography.

U.S. Department of Transportation. 1993. Foster-Miller team concept. Pp. 49-81 in Compendium of Executive Summaries from the Maglev System Concept Definition Final Reports. DOT/FRA/NMI-93/02. Springfield, Va.: National Technical Information Service.

Waggoner, P. E., J. H. Ausubel, and I. K. Wernick. 1996. Lightening the tread of population on the land: American examples. Population and Development Review 22(3):531-545.

Wernick, I. K., P. E. Waggoner, and J. H. Ausubel. 1997. Searching for leverage to conserve forests: The industrial ecology of wood products in the U.S. Journal of Industrial Ecology 1(3):125-145.

Wiley, J. A., J. P. Robinson, T. Piazza, K. Garrett, K. Cirksena, Y. T. Cheng, and G. Martin. 1991. Activity Patterns of California Residents. Berkeley, Calif.: California Survey Research Center, University of California, Berkeley.


Notes
1. Thanks to Edward Frieman and William Nierenberg for hosting my visit to the San Diego Science & Technology Council, La Jolla, Calif., 9 December 1998. Thanks also to Cesare Marchetti, Perrin Meyer, and Paul Waggoner for helping develop these worthy ways over many years.

2. The Consortium for Oceanographic Research and Education in Washington, D.C., has now established an international steering committee to develop a plan for the census.

 

About the Author: Jesse H. Ausubel is director of the Program for the Human Environment at the Rockefeller University in New York City. From 1983 to 1989 he served as director of programs for the National Academy of Engineering.