Fall Bridge on the Value Proposition in Innovative Engineering
September 22, 2023 | Volume 53, Issue 3

This issue explores the unique value proposition that engineers and engineering disciplines present in addressing the National Academies’ Grand Challenges. Covering topics ranging from the global sustainability challenge to the sequestration of carbon to transformations in our water management system, the articles in this issue show how engineers are vital to creating a world in which humanity can thrive.

A Word from the NAE Chair
Intelligent Systems Engineering?
Thursday, September 28, 2023
Author: Donald C. Winter

No matter how you get your news, it seems that artificial intelligence (AI) is ever present, whether it is a report on ChatGPT’s ability to pass the bar exam,[1] a flawed legal brief generated by ChatGPT citing nonexistent cases,[2] or Congress’s latest attempt to regulate the AI industry.[3] While the popular press tends to focus on aspects of AI that are relatively easy to convey to the general public, it fails to report on the technology behind AI that has the potential to precipitate societal changes on the scale of the Industrial Revolution.[4] Perhaps not surprisingly, the headlines tend to focus on matters such as forecasts of significant job losses resulting from future applications of AI,[5] not on the potential benefits. All of this comes at a time when AI is enabling remarkable new products such as autonomous vehicles and radically changing the capabilities of network platforms such as Facebook.[6] It has also motivated me to ruminate about how AI will change the future of engineering, particularly systems engineering, a discipline that I have practiced for the vast majority of my professional career.
The disciplined use of the systems engineering process is key to the successful implementation of what are termed “megaprojects”: complex efforts that involve thousands of engineers and billions of dollars to address major societal issues such as climate change. Some likely applications of AI are fairly evident and could affect the systems engineering process relatively soon. For example, the use of AI to assist in software coding[7] has the potential to facilitate the development of the system models and simulations that are key to the evaluation of alternate design concepts. Other aspects, associated with the selection of a suitable design concept and the development of that design, are much harder to assess given that AI, while evolving rapidly, is still in its infancy. For that reason, I will avoid making predictions on such aspects and simply pose questions in the hope of initiating a dialogue on the topic.

One of the unique aspects of the systems engineering process is the development of a suitable response to the objectives of what is often a very diverse group of stakeholders. When I taught the systems engineering process at the University of Michigan, I used a case study of the Woodrow Wilson Bridge replacement, which carries I-95 and the Capital Beltway over the Potomac River.[8] It took roughly a decade to develop an acceptable design concept because of the diversity of stakeholders and their differing objectives. Besides the commuters and long-haul truckers who would use the bridge, there were boaters who needed to pass under it, multiple jurisdictions providing funding, local residents concerned about construction and aesthetics, and a variety of environmental groups with differing agendas and priorities. Furthermore, the various considerations and priorities could not be expressed in common units (such as dollars), let alone optimized using a single merit function.
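The difficulty described above, objectives measured in incommensurable units that resist reduction to a single merit function, can be illustrated with a deliberately simplified sketch. Every concept name, score, and weight below is hypothetical and not drawn from the Wilson Bridge project; the point is only that two equally defensible weightings reverse the ranking.

```python
# Illustrative sketch (all values hypothetical): why a single weighted-sum
# merit function cannot settle a multi-stakeholder concept selection.
# Each design concept is scored on objectives in incommensurable units.
concepts = {
    "drawbridge": {"cost_musd": 2400, "clearance_ft": 70,  "community_score": 8},
    "high_fixed": {"cost_musd": 2900, "clearance_ft": 135, "community_score": 4},
}

def merit(scores, weights):
    """Weighted-sum merit function; a higher value is 'better'."""
    return sum(weights[k] * scores[k] for k in weights)

# Two defensible weightings -- one budget-minded, one favoring boaters'
# clearance -- imply different unit conversions between dollars, feet,
# and community impact, and they rank the concepts differently.
w_budget  = {"cost_musd": -1.0, "clearance_ft": 1.0,  "community_score": 10.0}
w_boaters = {"cost_musd": -1.0, "clearance_ft": 20.0, "community_score": 10.0}

for w in (w_budget, w_boaters):
    best = max(concepts, key=lambda c: merit(concepts[c], w))
    print(best)  # prints "drawbridge", then "high_fixed"
```

Because the "optimum" flips with an arbitrary choice of conversion weights, no amount of optimization resolves the disagreement; selecting a concept required negotiation among stakeholders rather than maximization of a formula.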
It took human negotiations, backed up by exhaustive analyses of alternatives, to develop an acceptable solution. Can the AI technology of the future address such challenges?

Safety, in applications such as autonomous vehicles, is another critical matter. If a system concept is to be identified by AI, can the AI address the inevitable safety tradeoffs in a manner acceptable to humans? For example, might AI accept loss of life during construction or operation of a new low- or zero-CO2-emission power plant, based on predictions of the potential long-term global impact of reduced CO2 in the atmosphere? Would such a tradeoff be deemed morally correct? How can an amoral technology make such decisions in a way that satisfies humans? These questions can become material when the consequences are adjudicated in a court of law. Who is held culpable or liable for the AI decision? Or is the AI output deemed to be merely a recommendation for consideration by human engineers and program managers?

One aspect of AI that I find particularly concerning is the limited insight the user receives regarding the rationale for its output. Such insight could, for example, inform assessments of the robustness of the system concept selection: to what extent will the preferred concept remain valid if changes are made to stakeholder objectives or statutory requirements? If the AI product is viewed as just one of several alternatives to be assessed and adjudicated by human systems engineers, the loss of insight may be mitigated. But if the AI-proffered solution is selected, any failure to understand the genesis of that solution can contribute to an intellectual separation between the human systems engineers and the systems engineering product. Unfortunately, such separation is not new.
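The power-plant question posed above can be made concrete with a deliberately crude utilitarian sketch. All numbers are hypothetical; the sketch shows only how an expected-value calculation would frame the tradeoff, not how it should be decided.

```python
# Illustrative sketch (all numbers hypothetical): how a purely utilitarian
# optimizer might frame the power-plant safety tradeoff -- and why humans
# may reject the framing itself, not just the arithmetic.
expected_construction_fatalities = 3.0   # assumed, over the project lifetime
statistical_lives_saved_per_year = 0.8   # assumed, from avoided emissions
plant_lifetime_years = 40

# Net statistical lives over the plant's lifetime.
net_lives = (statistical_lives_saved_per_year * plant_lifetime_years
             - expected_construction_fatalities)

# A utilitarian rule says "build" whenever net_lives > 0, silently
# treating identified near-term deaths and diffuse statistical lives
# as interchangeable quantities.
decision = "build" if net_lives > 0 else "do not build"
print(decision, round(net_lives, 1))  # prints "build 29.0"
```

The arithmetic is trivial; the contested step is the equivalence it assumes between identified near-term fatalities and diffuse statistical lives, which is precisely the moral judgment the questions above ask whether an amoral technology should be allowed to make.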
The power of many computational tools in use today is such that they can provide seemingly accurate answers without the user understanding how the answer was arrived at or what uncertainties were associated with it. If future systems engineers lack a basic understanding of the system concept under their responsibility, what might the consequences be? How can AI evolve to better inform the user of the “why” behind the AI product and facilitate the development of a team in which the human is aided and augmented by AI, not marginalized by it?

As I noted earlier, given the early stage of the development of AI technologies, it is unclear whether these questions will be addressed in the next few years through product evolution or whether they will need to be addressed by the way in which AI is incorporated into the systems engineering process. AI is so powerful that its incorporation into systems engineering activities can be regarded as inevitable. The fundamental question that remains is how to do so in a way that maximizes our ability to leverage AI and minimizes our regrets. That question is akin to a classic engineering problem.

[1] https://www.cnet.com/tech/chatgpt-can-pass-the-bar-exam-does-that-actually-matter/
[2] https://www.legaldive.com/news/chatgpt-fake-legal-cases-generative-ai-hallucinations/651557/
[3] https://www.npr.org/2023/05/15/1175776384/congress-wants-regulate-ai-artificial-intelligence-lot-of-catching-up-to-do
[4] Henry A. Kissinger et al. 2021. The Age of AI: And Our Human Future. New York: Little, Brown and Company.
[5] https://www.cnbc.com/2023/03/28/ai-automation-could-impact-300-million-jobs-heres-which-ones.html
[6] https://ai.meta.com/
[7] https://www.infoworld.com/article/3699140/review-codewhisperer-bard-and-copilot.html
[8] http://www.roadstothefuture.com/Woodrow_Wilson_Bridge.html

About the Author: Donald C. Winter, chair, NAE