In This Issue
The Bridge: 50th Anniversary Issue
January 7, 2021 Volume 50 Issue S
This special issue celebrates the 50th year of publication of the NAE’s flagship quarterly with 50 essays looking forward to the next 50 years of innovation in engineering. How will engineering contribute in areas as diverse as space travel, fashion, lasers, solar energy, peace, vaccine development, and equity? The diverse authors and topics give readers much to think about!

Confronting the Societal Implications of AI Systems: Leading Questions

Monday, March 1, 2021

Author: Lisa A. Parks

As a media scholar interested in technology’s relationship to society, I think about the ethical challenges brought about by computing and artificial intelligence (AI) tools in shaping global media culture. Three fundamental societal challenges have emerged from the use of AI.

Technological Knowledge and Literacy

Who has the power to know how AI tools work, and who does not? Issues of technological knowledge and literacy are increasingly important given digital corporations’ proprietary claims to information about their data collection and algorithms. The concealment or “black boxing” (Pasquale 2015) of such information by social media companies such as Facebook, for instance, keeps users naïve about AI tools and the ways those tools shape social media experiences and the information economy.

Most users learn about the AI tools used in social media platforms only inferentially or when information is leaked, as in the Facebook/Cambridge Analytica matter (Granville 2018). Digital companies protect their intellectual property to compete in the marketplace, but cordoning off technical information has huge stakes. It presents challenges not only for users who want to understand what data are being collected from them and what is done with the data, but also for researchers and policymakers who seek to explore the “back ends” of these platforms and their industrial, behavioral, and juridical implications.

In the United States, digital corporations’ intellectual property rights supersede consumers’ right to know about the AI tools they encounter online every day. In Europe, the General Data Protection Regulation has attempted to address these issues obliquely by defining “data subject rights,” yet to defend these rights it is essential for regulators and the public to know how AI tools are designed to work and to understand any potential for them to act autonomously. In fact, some inventors of AI tools do not even understand how their own systems work.

AI in International Relations and Globalization

How do AI tools intersect with international relations and the dynamics of globalization? As AI tools are operationalized across borders, they can be used to destabilize national sovereignty and compromise human rights. Concerns about this have emerged, for instance, in the contexts of US drone wars (Braithwaite 2018) and Russian interference in the 2016 US presidential election (Jamieson 2018).

Meanwhile, think tanks and nonprofits celebrate the potential of AI tools to accelerate global development. The Global Algorithmic Institute is dedicated to developing algorithms that “serve the international public good,” focusing on “global financial stability through implementation of big data and dynamic algorithms.” And AI Global suggests that there are “limitless opportunities to provide better access to critical services like healthcare, banking, and communication.”

Given these contradictions, this area of concern might be addressed by specifying which countries or regions have the resources to innovate and contribute to AI technologies and industries, and which are positioned as recipients, subjects, or beneficiaries. What do the vectors of the global AI economy look like? Who are the dominant players? Where are their workforces located and what labor tasks are they performing? What are the top-selling AI tools, and how do their supply chains correlate with historical trade patterns, geopolitics, or conditions of disenfranchisement?

Given the power of AI tools to impact human behavior and shape international relations, it is vital to conduct political and economic analysis of the technology’s relation to global trade, governance, natural environments, and culture. This involves adopting an infrastructural disposition and specifying AI’s constitutive parts, processes, and effects as they take shape across diverse world contexts. Only then can the public understand the technology well enough to democratically deliberate its relation to ethics and policy.

AI and Social Justice

What is the relationship between AI and social justice? Will new AI and computing technologies reinforce or challenge power hierarchies organized around social differences such as race/ethnicity, gender/sexuality, national identity, and so on?

Researchers are advancing important projects in this area (e.g., Joy Buolamwini’s Algorithmic Justice League; Costanza-Chock 2020), exploring how social power and bias are coded into computational systems, and challenging people to confront structural inequalities, such as racism and sexism, when using or designing AI systems. Their work suggests that social justice should be core to AI innovation.

If AI tools are designed in the United States—whether in Silicon Valley or in Cambridge, Massachusetts—by predominantly white, middle-class people who approach technical innovation as separate from questions of social justice, then AI products are likely to implicitly reproduce the values and worldviews of those with privilege.

Algorithmic bias occurs when the design process is divorced from critical reflection on the ways social hierarchies impact technological development and use. Arguably, all algorithms are biased to a certain degree, but as AI tools proliferate, it is important to consider who or what is best equipped to detect and remedy bias, particularly given the potential of these tools to reinforce, intensify, or create new inequalities or injustices.

Given ongoing demonstrations against systemic racism in the United States and other countries, there is a need for technology design workshops that foster candid discussions of the ways power hierarchies and social inequalities shape pathways into computing and engineering as well as the technology design process. Designers need a more robust understanding of social justice challenges, including the struggles against systemic racism and sexism, to be able to build AI systems that resonate with diverse users.

Requiring technology developers to do some of their work beyond the lab context, where they can engage with highly variable socioeconomic conditions across diverse international contexts, may generate more equitable and ethical AI systems.

Looking into the Future

If economic models are any indication, artificial intelligence is on a fast growth path and is projected to add $13 trillion in global economic activity by 2030 (Bughin et al. 2018). For AI design to support principles of technological literacy, international relations, human rights, and social justice, engineers and computer scientists can ensure the inclusion of humanists and social scientists from diverse backgrounds in AI research and development, and recognize the value of multidisciplinary perspectives when designing and building machines with such profound impacts. The most important guiding principle may be to work toward an AI future that prioritizes social bonds and public interests over profit, expansion, and influence.


Braithwaite P. 2018. Google’s artificial intelligence won’t curb war by algorithm. Wired, Jul 4.

Bughin J, Seong J, Manyika J, Chui M, Joshi R. 2018. Notes from the AI frontier: Modeling the impact of AI on the world economy. Discussion Paper, Sep 4. New York: McKinsey Global Institute.

Costanza-Chock S. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: MIT Press.

Granville K. 2018. Facebook and Cambridge Analytica: What you need to know as fallout widens. New York Times, Mar 19.

Jamieson KH. 2018. Cyberwar: How Russian Hackers and Trolls Helped Elect a President: What We Don’t, Can’t, and Do Know. Oxford: Oxford University Press.

Pasquale F. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.


About the Author: Lisa Parks is director of the Global Media Technologies and Cultures Lab and professor of comparative media studies/writing and science, technology, and society, all at MIT.