Ethical Implications of Computational Modeling

At their core, science, engineering, and all forms of inquiry involve making sense of a complex world. Modeling is one way of reducing the complexity of the world by abstracting away details.

Computational modeling involves the use of computers to scale up mathematical models.1 Computational models play a critical role in a number of application areas. For example, agricultural engineers use them to control invasive species (Büyüktahtakin et al. 2014), and climatologists use them to predict changes in the Earth’s climate over time (Edwards 2010). In tennis, the “Hawk-Eye” instant replay system is used to resolve challenges to officials’ calls: unlike the instant replay used in most sports, it employs a computational model that tracks the ball from multiple camera angles and predicts whether the ball was in or out (Collins and Evans 2008).
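To make the idea concrete, consider the influenza example given in footnote 1. The following minimal sketch is hypothetical and not drawn from any of the studies cited here: it simulates a discrete-time SIR (susceptible–infected–recovered) outbreak and varies the transmission rate to explore different possible outcomes.

```python
# Hypothetical sketch of a computational model: a discrete-time SIR
# (susceptible-infected-recovered) outbreak simulation. All parameter
# values are illustrative, not drawn from any cited study.

def simulate_sir(population, beta, gamma, initial_infected=1, days=120):
    """Return daily (susceptible, infected, recovered) counts.

    beta is the transmission rate; gamma is the recovery rate.
    """
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Re-running the simulation with adjusted parameters is how a modeler
# studies different possible outcomes, e.g., the effect of reducing beta.
for beta in (0.2, 0.3, 0.4):
    peak = max(i for _, i, _ in simulate_sir(10_000, beta, gamma=0.1))
    print(f"beta={beta}: peak infected ~ {peak:.0f}")
```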

In each of these cases, computational modeling carries important ethical implications, as modelers have tremendous power to influence both perceptions and behavior, particularly when models are not transparent (Fleischmann and Wallace 2005).

Background: Stakeholders and Power Inequities

To understand the ethical implications of computational modeling, it is first important to understand the key stakeholders in the computational modeling process. Modelers build models to specifications set by clients and for use by end users, and the use of a model also indirectly affects others (those affected).

There are often power inequities, as clients and modelers may not be sensitive to the needs and values of end users and those affected. End users may not be able to influence or even understand how models work, and those affected may not even realize that a computational model is being used.

For example, in the case of Hawk-Eye, the computational model was built for the use of International Tennis Federation officials, whose decisions affect both the players and the tennis public. The public and often the players themselves are unaware that a computational model is being used, and ultimate authority is ceded to the reconstruction produced by that model, with no attention given to the degree of error in the model’s calculations—rather, the call is treated as infallible, unlike the human chair and line umpires (Collins and Evans 2008).

People thus place blind trust in a technology without a full understanding—or even awareness—of the technology or how it works (or, in some cases, does not work). This creates the possibility of technocracy, in which people are forced to blindly trust technology and those who create it, posing a significant threat to democracy and individual autonomy.

Role of Human Values in Computational Modeling

This paper synthesizes research findings from a three-year field study of the role of human values in computational modeling (Fleischmann and Wallace 2006, 2009, 2010; Fleischmann et al. 2010, 2011a,b, 2017). The study involved field research at three computational modeling laboratories—corporate, academic, and governmental—in 2006–2008. We visited each site three times: a short visit to conduct information sessions, a longer visit to conduct interviews, and a short visit to present our results and conduct confirmatory focus groups. Data collection was based on 76 completed surveys and interviews with 40 of the respondents. We analyzed the interview data using thematic analysis (Braun and Clarke 2006) based in part on the Schwartz (1994) Value Inventory of 56 basic human values. We received IRB approval before starting the research, and all participants were given the option to review and modify their interview transcripts.

To synthesize the study findings, we discuss the roles of value conflicts (Fleischmann and Wallace 2006, 2010; Fleischmann et al. 2011a), transparency (Fleischmann and Wallace 2005, 2009; Fleischmann et al. 2011c), and professional and organizational cultures, particularly professional codes of ethics (Fleischmann et al. 2010, 2011b, 2017). We then apply these findings to shed new light on the broader ethical challenges raised by computational models (Wallace and Fleischmann 2015).

Value Conflicts

What is the relationship between values and computational models? Pielke (2003) distinguishes between the modeling process and models as products. Building on the work of Winner (1986), Johnson (1997) argues that values can be embedded in technology. We found values that were embedded in the models themselves and values that arose during the modeling process, as well as examples of values’ influence on the success of both the modeling process and models as products (Fleischmann and Wallace 2006).

Values are not typically exclusive—that is, held by some but not by others. In travel safety, for example, there is a value conflict between security and convenience. It is not that some people value security while others do not, or that some people value convenience while others do not. Rather, the issue is how people make tradeoffs between these values, and the extent to which they value one over the other. We found that value conflicts played an important role both in the modeling process and in models as products (Fleischmann and Wallace 2010).

Conflicts in the Modeling Process

We found that the modeling process hinged largely on value conflicts such as honesty versus obedience, innovation versus reliability, and timeliness versus completeness (Fleischmann and Wallace 2010).

Honesty versus obedience involves the choice modelers had to make in terms of giving the most accurate answer versus the answer desired by the client. Modelers who follow the creed of “the customer is always right” run the risk of violating the ethical codes of their profession, while those who follow the creed of “honesty is the best policy” may risk losing (some) clients. This value conflict arose most frequently in the corporate laboratory (Fleischmann and Wallace 2010).

Innovation versus reliability involves the inherent conflict between modelers as scientists and modelers as technicians. The incentive structure for scientists revolves around doing things differently in a quest to learn new things, often at the cost of reliability, usability, or other markers of technical success. In contrast, the incentive structure for technicians involves making sure that things work, typically to enable the work of others. Computational modeling can be either bleeding edge or quite robust, but rarely both. This value conflict arose most frequently in the academic laboratory (Fleischmann and Wallace 2010).

Timeliness versus completeness, related in some ways to the two conflicts above, involves the competing pressures to finish a job on schedule and to be thorough in one’s work. Obviously, neither extreme is particularly productive, as something that is extremely timely but very incomplete is not useful and can instead be quite misleading and even dangerous; on the other hand, it is possible to tweak and refine a model to the point that it is never actually put into use. Most people can agree both that “haste makes waste” and that “perfect is the enemy of good.” However, exactly where one draws the line varies tremendously, based on contextual factors and individuals’ different valuations of these approaches. This value conflict arose most frequently in the government laboratory (Fleischmann and Wallace 2010).

Conflicts in Models as Products

Salient value conflicts connected to models as products include the goals of the product versus the goals of the organization, publication versus customer needs, and listening to clients versus listening to users (Fleischmann and Wallace 2010).

Conflicts between the goals of the product and the goals of the organization are most likely in an organization with strong top-down directives. For example, the goal of a product designer might be to maximize effectiveness subject to a budget constraint, whereas the organization seeks to produce the cheapest product subject to an acceptable level of effectiveness. In this way, the degree of autonomy (or the lack thereof) experienced by modelers can become embedded in models as products. This value conflict arose most frequently in the corporate laboratory (Fleischmann and Wallace 2010).

Publication versus customer needs is related to the value conflict of innovation versus reliability. Choosing an off-the-shelf model is more likely to result in a reliable product, but unlikely to yield a significantly innovative product that can allow the modelers to publish. Further, publication requires significant time and effort, and this time may be taken away from addressing customer needs. This value conflict arose most frequently in the academic laboratory (Fleischmann and Wallace 2010).

Listening to clients versus listening to users involves the inherent value conflicts among the various stakeholder groups involved in the modeling process. Listening to those affected could also be a consideration, but in the modeling contexts that we studied the emphasis rarely progressed beyond the client and/or user. One salient example of this client/user conflict was the confusion and frustration experienced by some of the computational modelers in the corporate laboratory. The modelers, tasked by their clients with building models to create efficiencies and reduce workforce needs, were vexed that users were unwilling to participate in design efforts to this end. This example demonstrates the importance of modelers’ efforts to consider the perspectives of all stakeholders in the modeling process—not just the clients but also users and those affected. While this conflict was common in the corporate laboratory, it arose most frequently in the government laboratory, which had a very hierarchical structure that resulted in significant power distances among stakeholders (Fleischmann and Wallace 2010).

Values That Reduce Conflicts

We analyzed the survey data and identified 11 values that were positively correlated with a reduction in value conflicts (Fleischmann et al. 2011a). We used three questions to ask about value conflicts between the modeler’s organization and the three stakeholder groups (clients, users, and those affected):

  1. What values are positively or negatively correlated with value conflicts with clients?
  2. What values are positively or negatively correlated with value conflicts with users?
  3. What values are positively or negatively correlated with value conflicts with those affected by models?

The value of equality was statistically significantly correlated with the responses to all three questions, indicating that modelers who valued equality experienced fewer value conflicts among these diverse stakeholder groups (Fleischmann et al. 2011a).

Other values identified as statistically significant in one or two of the pairings were a spiritual life, a world at peace, devout, forgiving, honoring of parents and elders, humble, politeness, responsible, self-discipline, and true friendship. We were not able to identify any values that were negatively correlated with a reduction in value conflicts (Fleischmann et al. 2011a).

Modelers who highly valued equality and other values were less likely to observe value conflicts between their organization and other stakeholder groups, although based on the data it is not clear whether these values actually helped them to reduce value conflicts or simply made them oblivious to them (Fleischmann et al. 2011a). Based on our interview data, we are inclined to conclude the former, at least in many cases, as obliviousness to value conflicts appeared to occur most frequently among the least egalitarian modelers (Fleischmann and Wallace 2010).
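The kind of analysis described above can be illustrated with a short sketch. The code below is a hypothetical reconstruction, not the published analysis: the data file, column names, use of Spearman’s rho, and significance threshold are all assumptions introduced for illustration.

```python
# Hypothetical sketch: correlate survey ratings of Schwartz values with
# reported value conflicts for each stakeholder group. The file name,
# column names, and use of Spearman's rho are assumptions, not the
# authors' actual analysis.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("modeler_surveys.csv")  # one row per respondent (hypothetical)
value_cols = ["equality", "a_spiritual_life", "politeness", "responsible"]
conflict_cols = ["conflict_clients", "conflict_users", "conflict_affected"]

for value in value_cols:
    for conflict in conflict_cols:
        rho, p = spearmanr(df[value], df[conflict])
        if p < 0.05:
            direction = "fewer" if rho < 0 else "more"
            print(f"{value} vs {conflict}: rho={rho:+.2f}, p={p:.3f} ({direction} conflicts)")
```

In this framing, a value that helps reduce conflicts would show a negative correlation with the frequency of reported conflicts.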

Transparency

Using the literature on value-sensitive design, Friedman and Kahn (2008) identify 12 human values with ethical import: human welfare, ownership and property, privacy, freedom from bias, universal usability, trust, autonomy, informed consent, accountability, identity, calmness, and environmental sustainability. Our study identified another important value, transparency (Fleischmann and Wallace 2009).

Transparency can be seen as a component of informed consent, as users must understand how a system works to actually achieve autonomy in cases where they are given a choice. However, transparency goes beyond cases in which a user is asked to decide something; the user may want to understand how something works even if they are powerless to influence what is done as a consequence. For example, investors may want to understand the workings of the financial markets even though their particular contribution is too small to have any noticeable effect on the market (Fleischmann and Wallace 2009).

Reasons to Make Models Transparent

We identified three reasons for making models transparent: political, economic, and legal. The political dimension largely revolves around reducing power inequities and avoiding technocratic situations where the end user is at the mercy of the modeler (to say nothing of those affected, who again may not even be aware that a model is being used). Transparency can have economic benefits, insofar as it may promote customer satisfaction, and in some cases it may reveal corruption in an organization that might otherwise go undetected. Modelers also reported that transparency can be a legal requirement, when there is a need to explain the basis for decisions (Fleischmann and Wallace 2009).

Ways to Make Models Transparent

Building on Willemain (1995), we identified five ways that modelers can make models transparent:

  1. Transparency should be considered early in the design of the model.
  2. The modeling paradigm is important, as some paradigms lend themselves better to transparency (e.g., Bayesian models) than others (e.g., neural networks).
  3. Transparency should lead to better understanding, not just a data dump. As a literal example, making the shell of a computer transparent is unlikely, by itself, to teach the average user anything meaningful about how the computer works.
  4. Transparency is critical in the evaluation of the model, especially if it is done by someone other than the modeler or there is a goal (or need) to achieve accountability.
  5. Transparency is obviously critical when the model is released into the world, in relation to the three reasons for transparency cited above (Fleischmann and Wallace 2009).

Because transparency often comes into conflict with other values, such as accuracy, we designed a teaching case based on this conflict. In the context of an imagined future mission to Mars, modelers (in training) must choose between a model that is 99 percent accurate but a black box and one that is 95 percent accurate but transparent. Someone who values autonomy would likely favor the transparent model, as it would empower the user to determine when the model might be inaccurate, while someone who values paternalism might prefer to ensure that the user blindly puts their trust in the most accurate model (Fleischmann et al. 2011c).
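The tradeoff at the heart of the teaching case can also be sketched in code. The example below is purely illustrative (the dataset, model choices, and any resulting accuracy figures are hypothetical and not from the study): it contrasts a transparent model whose coefficients can be inspected and explained with a black-box model that may score higher but resists explanation.

```python
# Hypothetical sketch of the transparency-versus-accuracy tradeoff from the
# teaching case; the dataset and model choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: every coefficient can be read, questioned, and explained.
transparent = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Black-box model: often more accurate, but its internals resist explanation.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("transparent accuracy:", round(transparent.score(X_test, y_test), 3))
print("black-box accuracy:  ", round(black_box.score(X_test, y_test), 3))
print("transparent coefficients:", transparent.coef_.round(2))
# A modeler who values autonomy might accept a lower score for the model that
# users can inspect; one who values paternalism might prefer the black box.
```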

Professional Codes of Ethics

In the computing professions, codes of ethics typically serve as vehicles for socialization and education rather than rigid rules that enforce compliance (Anderson et al. 1993). To better understand how modelers themselves perceive codes of ethics, we asked them about their awareness of, familiarity with, and adherence to codes of ethics (Fleischmann et al. 2017).

Values That Correlate with Ethics Code Awareness, Familiarity, and Adherence

We found nine values that correlated with awareness of a code of ethics; only freedom was negatively correlated, while the other eight were positively correlated. Similarly, for familiarity with codes of ethics, ten values were correlated, with nine positively correlated and freedom, again, negatively correlated. Finally, for adherence to codes of ethics, 13 values were correlated, all positively; the most significantly correlated were equality (p < 0.001) and social justice (p < 0.001) (Fleischmann et al. 2017).

Thus only those who valued freedom were less likely to be aware of or familiar with codes of ethics, while those who valued equality and social justice were particularly likely to adhere to codes of ethics. This finding appears to be in keeping with the motivations for codes of ethics (Fleischmann et al. 2017).

Thematic Analysis of Interview Data

Our thematic analysis of the interview data shed further light on the quantitative findings. One of the two orthogonal dimensions of the Schwartz (1994) Value Inventory ranges from self-enhancement to self-transcendence. One modeler asserted that following a code of ethics was best for both the modeler and everyone else, an observation that helps to explain why we found that positive attitudes toward and experiences with codes of ethics were positively correlated with both self-enhancement values (e.g., authority, preserving my public image) and self-transcendence values (e.g., equality, social justice) (Fleischmann et al. 2017).

The thematic analysis also supported (1) Mason’s (1994) covenants with reality (i.e., represent the problem with as much fidelity as possible) and values (i.e., suggest improvements for the client that adhere to the client’s values), and (2) our covenant with transparency (Fleischmann and Wallace 2005), i.e., ensure that models are transparent to all stakeholders (Fleischmann et al. 2017).

Another important theme was that codes of ethics should be bottom-up rather than top-down, to better reflect the perspectives and experiences of modelers in general (Fleischmann et al. 2017).

Based on our analysis, we conclude that individuals who value universalism and benevolence have a duty to advocate for awareness of, familiarity with, and adherence to codes of ethics (Fleischmann et al. 2017).

Conclusions

Modeling carries important ethical implications for research, education, and applications (Wallace and Fleischmann 2015). Gaps in the transparency of computational models (at least for the general public), for example, can inflame controversies surrounding scientists’ use of them.

A prominent instance of such controversy was “Climategate,” which involved the illegal release of email exchanges among climate scientists working at the University of East Anglia (Martin 2014). The climate change debate is politically fraught, and the media and the public misunderstood (or, in some cases, likely misrepresented) the scientists’ exchanges; these misreadings were hard to counteract given the difficulty of making climate models understandable to the general public.

Computational modelers must be aware of the important societal role of their models, whether they are used for officiating tennis matches or formulating policy recommendations that can shift national and international energy policies. A model necessarily abstracts and reduces from reality, and is the product of a complex process including the modelers as well as (potentially) other stakeholders. Thus, the goal is not to have a “perfect” or “value-free” model—no model of a natural system can completely, perfectly capture the complexity of reality—but rather to be as transparent as possible about both the modeling process and the (known and potential) limitations of the model relative to reality.

The overall argument here is not that modelers should be better people. Although it is certainly important to attract people of strong character to the modeling profession and to reward and retain these individuals, it seems likely that the vast majority of modelers do have the explicit intent of making the world a better place through their work. (And in any event, it is challenging to deter would-be troublemakers from causing ill through their models, particularly given the lack of enforceability of codes of ethics.)

The critical challenge is to ensure that modelers consider design decisions from multiple perspectives, so that they not only take account of their own values but are sensitive to those of their users (Friedman et al. 2006; Friedman and Kahn 2008). Educational interventions, such as multirole interactive cases that illustrate the interconnectedness and interdependency of ethical decision making, can be effective in achieving this goal (Fleischmann et al. 2009, 2011c,d,e).

Engineers in general, and modelers in particular, need to appreciate diversity of perspectives and ensure that they are not practicing technological somnambulism (Winner 1986). They must be aware of the ethical implications of their decisions and ensure that they are enabled and empowered to make conscious choices rather than acting by default (Fleischmann 2010, 2014).

Acknowledgments

This material is based in part on work supported by the National Science Foundation under grant nos. 0521834, 0646404, 0731717, and 0731718. Thanks go to Justin Grimes, Cindy Hui, Adrienne Peltz, and Clay Templeton for their help with data processing for the publications cited and discussed here, as well as all of our anonymous participants who generously donated their time to further this research. Thanks also go to Deborah Johnson and Gerry Galloway for their helpful advice and feedback on earlier drafts.

References

Anderson RE, Johnson DG, Gotterbarn D, Perrolle J. 1993. Using the new ACM Code of Ethics in decision making. Communications of the ACM 36(2):98–107.

Braun V, Clarke V. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3:77–101.

Büyüktahtakin IE, Feng Z, Olsson AD, Frisvold G, Szidarovszky F. 2014. Invasive species control optimization as a dynamic spatial process: An application to Buffelgrass (Pennisetum ciliare) in Arizona. Invasive Plant Science and Management 7(1):132–146.

Collins HM, Evans R. 2008. You cannot be serious! Public understanding of technology with special reference to “Hawk-Eye.” Public Understanding of Science 17:283–308.

Edwards PN. 2010. A vast machine: Computer models, climate data, and the politics of global warming. Cambridge MA: MIT Press.

Fleischmann KR. 2010. Preaching what we practice: Teaching ethical decision-making to computer security professionals. Lecture Notes in Computer Science 6054:197–202.

Fleischmann KR. 2014. Information and Human Values. San Rafael CA: Morgan & Claypool.

Fleischmann KR, Wallace WA. 2005. A covenant with transparency: Opening the black box of models. Communications of the ACM 48(5):93–97.

Fleischmann KR, Wallace WA. 2006. Ethical implications of values embedded in computational models: An exploratory study. Proceedings of the 69th Annual Meeting of the American Society for Information Science and Technology, Milwaukee.

Fleischmann KR, Wallace WA. 2009. Ensuring transparency in computational modeling. Communications of the ACM 52(3):131–134.

Fleischmann KR, Wallace WA. 2010. Value conflicts in computational modeling. Computer 43(7):57–63.

Fleischmann KR, Robbins RW, Wallace WA. 2009. Designing educational cases for intercultural information ethics: The importance of diversity, perspectives, values, and pluralism. Journal of Education for Library and Information Science 50(1):4–14.

Fleischmann KR, Wallace WA, Grimes JM. 2010. The values of computational modelers and professional codes of ethics: Results from a field study. Proceedings of the 43rd Hawai’i International Conference on Systems Sciences, Kauai.

Fleischmann KR, Wallace WA, Grimes JM. 2011a. How values can reduce conflicts in the design process: Results from a multi-site mixed-method field study. Proceedings of the 74th Annual Meeting of the American Society for Information Science and Technology, New Orleans.

Fleischmann KR, Wallace WA, Grimes JM. 2011b. Computational modeling and human values: A comparative study of corporate, academic, and government research labs. Proceedings of the 44th Hawai’i International Conference on Systems Sciences, Kauai.

Fleischmann KR, Koepfler JA, Robbins RW, Wallace WA. 2011c. CaseBuilder: A GUI web app for building interactive teaching cases. Proceedings of the American Society for Information Science and Technology 48(1):1–4.

Fleischmann KR, Robbins RW, Wallace WA. 2011d. Information ethics education in a multicultural world. Journal of Information Systems Education 22(3):191–202.

Fleischmann KR, Robbins RW, Wallace WA. 2011e. Collaborative learning of ethical decision-making via simulated cases. Proceedings of the 6th Annual iConference, Seattle.

Fleischmann KR, Hui C, Wallace WA. 2017. The societal responsibilities of computational modelers: Human values and professional codes of ethics. Journal of the Association for Information Science and Technology 68(3):543–552.

Friedman B, Kahn PH Jr. 2008. Human values, ethics, and design. In: The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, 2nd ed. Sears A, Jacko JA, eds. New York: Lawrence Erlbaum Associates.

Friedman B, Kahn PH Jr, Borning A. 2006. Value sensitive design and information systems. In: Human-Computer Interaction and Management Information Systems: Foundations. Zhang P, Galletta D, eds. Armonk NY: ME Sharpe.

Johnson DG. 1997. Is the global information infrastructure a democratic technology? SIGCAS Computers and Society 27(3):20–26.

Martin B. 2014. The Controversy Manual. Sparsnäs, Sweden: Irene Publishing.

Mason RO. 1994. Morality and models. In: Ethics in Modeling. Wallace WA, ed. Tarrytown NY: Elsevier Science.

Pielke RA Jr. 2003. The role of models in prediction for decision. In: Models in Ecosystem Science. Canham CD, Cole JJ, Lauenroth WK, eds. Princeton NJ: Princeton University Press.

Schwartz SH. 1994. Are there universal aspects in the structure and contents of human values? Journal of Social Issues 50(4):19–45.

Wallace WA, Fleischmann KR. 2015. Models and modeling. In: Ethics, Science, Technology, and Engineering: A Global Resource. Holbrook B, Mitcham C, eds. Farmington Hills MI: Gale Cengage Learning.

Willemain TR. 1995. Model formulation: What experts think about and when. Operations Research 43:916–932.

Winner L. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.


FOOTNOTES

1 Nature.com defines computational models as “mathematical models that are simulated using computation to study complex systems. In biology, one example is the use of a computational model to study an outbreak of an infectious disease such as influenza. The parameters of the mathematical model are adjusted using computer simulation to study different possible outcomes.” Available at www.nature.com/subjects/computational-models.

About the Author: Kenneth R. Fleischmann is an associate professor in the School of Information at the University of Texas at Austin. William A. Wallace is Yamada Corporation Professor in the Department of Industrial and Systems Engineering at Rensselaer Polytechnic Institute.