In This Issue
Winter Bridge on Frontiers of Engineering
December 15, 2023 Volume 53 Issue 4
This issue features articles by 2023 US Frontiers of Engineering symposium participants. The articles cover pressing global issues including resilience and security in the information ecosystem, engineered quantum systems, complex systems in the context of health care, and mining and mineral resource production.

From Accidental Rumors to Pervasive Disinformation: A Decade of Misinformation Research

Wednesday, December 13, 2023

Author: Kate Starbird

Over the past decade, mis- and disinformation have become increasingly prevalent within social media platforms and across the broader information ecosystem.

At the University of Washington, my colleagues and I have been studying online rumoring for more than a decade. Rumors are unofficial stories spreading through informal channels. There’s a long history of social science research on rumors and online rumoring (e.g., Allport and Postman 1945; Kapferer 2017; Shibutani 1966). That work describes rumors as a natural byproduct of the collective sensemaking process that occurs during times of crisis as people attempt to cope with uncertain information under anxiety. Rumors, in that view, serve informational, emotional, and social purposes. Importantly, while many rumors are false, some turn out to be true.

The concept of rumors is valuable for understanding how false or uncertain information spreads online, but other terms can be useful for highlighting aspects of veracity and intent. Misinformation is information that is false, but not necessarily intentionally so. Disinformation is false or misleading information that is purposefully seeded and/or spread for a specific objective (e.g., financial, political, or reputational). Over the past decade, mis- and disinformation have grown and metastasized within social media platforms and across the broader information ecosystem.

The Boston Marathon Bombings: Clickbait Rumors, Sensemaking Rumors, and Conspiracy Theories

Together with events from Hurricane Sandy a few months prior, the Boston Marathon bombings of 2013 marked a tragic inflection point in the evolution of how people used social media during crisis events and a point of recognition of how collective sensemaking could go awry in online environments.

Using application programming interfaces (APIs) that were publicly available at that time, my team collected and analyzed about 10 million tweets related to the bombings (Maddock et al. 2015; Starbird et al. 2014). We employed a combination of network graphing, temporal visualizations, and manual analysis of tweet content to identify several rumors spreading in that conversation. We also identified three salient types of online rumors: clickbait rumors, sensemaking rumors, and conspiracy theories.

Clickbait rumors are cases where people purposefully spread sensational (and false) claims. In 2013, bad actors were beginning to recognize that they could leverage events to gain visibility through sensational claims, often accompanied by images. The attention they captured could lead to instant cash or lasting reputational gains.

In online sensemaking rumors, seemingly well-meaning people attempt to make sense of a dynamic and anxiety-producing event. After the bombings, there was considerable activity around trying to identify the culprits, and in several cases, the crowd got it wrong—falsely accusing innocent people. Often, the mistakes were accidental. But in some cases, the sensemaking process was purposefully manipulated. After the event, there was a moment of collective reflection as people recognized that digital volunteerism could quickly shift to digital vigilantism (Madrigal 2013).

A conspiracy theory is an explanation of an event or situation that suggests it resulted from a secret plot or conspiracy orchestrated by sinister forces. Online conspiracy theories are a corrupted form of sensemaking, where the theory—that the event was orchestrated by secret forces—is predetermined, and the audience works to assemble evidence to fit that theory. New conspiracy theories often rely upon a set of tropes or common story elements from past ones. In the wake of the Boston Marathon bombings, conspiracy theories claimed that the “real” perpetrators were Navy SEALs or other US government agents.

Examining the “temporal signatures” (tweet volume over time) of the rumors revealed a couple of interesting features. First, the number of tweets spreading a rumor was almost always far higher than the volume of tweets correcting it. Second, conspiracy theories looked different from other rumors. Clickbait and sensemaking rumors took off quickly and then faded with a rapid, exponential decay. But conspiracy theories didn’t decay in the same way. They persisted for weeks and months after the event and would repeatedly resurface over time.
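
To make the idea of a temporal signature concrete, the sketch below bins tweets that have already been manually labeled as spreading or correcting a rumor into hourly counts, which is enough to compare spread versus correction volume and to see how quickly a rumor decays. The record format and labels are simplified, hypothetical stand-ins rather than the actual analysis pipeline.

```python
# Minimal sketch: hourly "temporal signature" of a rumor, assuming tweets have
# already been labeled as "rumor" or "correction" (hypothetical record format).
from collections import Counter
from datetime import datetime, timezone

def hourly_signature(tweets, label):
    """Count tweets carrying a given label in each UTC hour."""
    counts = Counter()
    for t in tweets:
        if t["label"] != label:
            continue
        ts = datetime.fromtimestamp(t["created_at"], tz=timezone.utc)
        counts[ts.replace(minute=0, second=0, microsecond=0)] += 1
    return dict(sorted(counts.items()))

# Made-up example records: Unix timestamps plus a manual label.
tweets = [
    {"created_at": 1366070400, "label": "rumor"},
    {"created_at": 1366074000, "label": "rumor"},
    {"created_at": 1366077600, "label": "correction"},
]
rumor_volume = hourly_signature(tweets, "rumor")            # spread over time
correction_volume = hourly_signature(tweets, "correction")  # usually far smaller
```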

Finally, conspiracy theory rumors had a high “domain diversity” (Maddock et al. 2015)—a measure of the distribution of websites that were linked to within tweets. In other words, there were far more domains cited, and they were a whole lot quirkier than the domains cited in the other rumors.
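
As one illustration of how such a measure might be computed, the sketch below uses Shannon entropy over the domains linked from a rumor's tweets. This is an illustrative stand-in, not necessarily the exact domain diversity measure used in Maddock et al. (2015), and the example URLs are invented.

```python
# Illustrative "domain diversity": Shannon entropy (bits) of the web domains
# linked from a rumor's tweets. Higher values mean links are spread across
# more (and more varied) domains.
import math
from collections import Counter
from urllib.parse import urlparse

def domain_diversity(urls):
    domains = [urlparse(u).netloc.lower() for u in urls if u]
    counts = Counter(domains)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(domain_diversity(["https://news.example/a", "https://news.example/b"]))   # 0.0
print(domain_diversity(["https://news.example/a", "https://fringe1.example/x",
                        "https://fringe2.example/y"]))                          # ~1.58
```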

Conspiracy Theories as a Window into the Alternative Media Ecosystem

In 2013, conspiracy theories seemed like a small part of the conversation. But by 2015, conspiracy theorizing was becoming an increasingly salient part of the online discussion, particularly around man-made crisis events (e.g., acts of terrorism and mass shooting events). An online community began to repeatedly claim that tragic events were not as they seemed and that they were hoaxes perpetrated by “crisis actors” or “false flag” events with hidden perpetrators.

The graph in figure 1, created from user co-sharing patterns, depicts the information ecosystem that supported conspiracy theory rumoring on Twitter in 2016 (Starbird 2017). The nodes are web domains, connected when the same user sends tweets linking to both websites. Red nodes are the domains that supported the conspiracy theorizing. They include a variety of clickbait, conspiracy-laden sites, partisan news sites, and some other murkier domains—including state-sponsored media outlets tied to Russia and Iran. Blue nodes are domains that hosted content attempting to debunk the conspiracy theories. Yellow nodes are domains that hosted factual articles cited by conspiracy theorists as evidence for their theories.
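
For readers curious how such a graph is assembled, the sketch below builds co-sharing edges from (user, linked-domain) pairs: two domains are connected, with a weight, whenever the same user has tweeted links to both. The field names and example domains are hypothetical, and the real analysis also involved manual coding of each domain.

```python
# Sketch of a domain co-sharing network: nodes are web domains; an edge joins two
# domains each time the same user tweeted links to both. (Hypothetical inputs.)
from collections import defaultdict
from itertools import combinations

def cosharing_edges(user_domain_pairs):
    """Return {(domain_a, domain_b): number_of_users_who_linked_to_both}."""
    domains_by_user = defaultdict(set)
    for user, domain in user_domain_pairs:
        domains_by_user[user].add(domain)
    edges = defaultdict(int)
    for domains in domains_by_user.values():
        for a, b in combinations(sorted(domains), 2):
            edges[(a, b)] += 1
    return dict(edges)

pairs = [("u1", "conspiracysite.example"), ("u1", "altnews.example"),
         ("u2", "conspiracysite.example"), ("u2", "statemedia.example"),
         ("u3", "factcheck.example")]
print(cosharing_edges(pairs))
# {('altnews.example', 'conspiracysite.example'): 1,
#  ('conspiracysite.example', 'statemedia.example'): 1}
```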

A manual content analysis of the different web domains revealed this conspiracy theory ecosystem (the red nodes) to be supporting a variety of conspiracy theories and pseudo-scientific claims, as well as diverse claims about powerful people controlling world events—and a sort of all-encompassing conspiracy theory “frame” through which to interpret each new event. We theorized that the structure of an “alternative media ecosystem” had developed around, and functioned to reinforce, that corrupted frame.

Conspiracy theorizing and disinformation were sinking into the fabric of the internet, the recommendation algorithms, and the networks of friend and following relationships on social media in ways that would become increasingly hard to unwind.

Russian Interference: A Coordinated Disinformation Campaign

In 2016, my colleagues and I were studying “framing contests” within the #BlackLivesMatter discourse. As part of that analysis, we created a retweet network graph of Twitter users who participated in conversations around shooting events where users employed either the #BlackLivesMatter hashtag or the #BlueLivesMatter hashtag. The graph revealed two separate communities or “echo chambers” on either side of the conversation: pro-#BlackLivesMatter on the left (where most of the accounts were also politically left-leaning or “progressive”) and anti-#BlackLivesMatter on the right (where accounts were consistently politically right-leaning or “conservative”). Our initial paper explored how those two different online communities created and spread very different frames about police shootings of Black Americans (Stewart et al. 2017).

We published our study in October 2017. A few weeks later, Twitter (under pressure from a US congressional investigation) released a list of accounts associated with the Internet Research Agency in St. Petersburg, Russia, which was running disinformation operations online, targeting US populations during the same time period as our #BlackLivesMatter research. Scanning the list,[1] I realized that I recognized some of those accounts. We had featured several in our paper.

Concerned about the implications, we cross-referenced the list of Russian trolls against our retweet network graph to see where they were in the conversation. At the time, the results surprised us (figure 2). The Russian trolls had infiltrated both “sides” of the #BlackLivesMatter discourse on Twitter. A few were among the most influential accounts in the conversation. One troll account on the left was retweeted by @jack, then-CEO of Twitter. Several of the troll accounts on the right had integrated into other online organizing efforts on the conservative side.
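
The cross-referencing step itself can be pictured with a small sketch like the one below: take the released list of troll accounts, check which of them appear in the retweet network, and rank them by how often they were retweeted. Account names and the edge format are invented for illustration, not drawn from the actual dataset.

```python
# Sketch of cross-referencing a released troll-account list against a retweet
# network: which listed accounts appear in the conversation, and how often were
# they retweeted? (Invented account names and edges.)
from collections import Counter

def flag_trolls(retweet_edges, troll_accounts):
    """retweet_edges: (retweeter, original_author) pairs -> {troll: times_retweeted}."""
    retweet_counts = Counter(author for _, author in retweet_edges)
    return {acct: retweet_counts[acct] for acct in troll_accounts if acct in retweet_counts}

edges = [("fan_a", "@left_troll_1"), ("fan_b", "@left_troll_1"), ("fan_c", "@real_activist")]
trolls = {"@left_troll_1", "@right_troll_1"}
print(flag_trolls(edges, trolls))   # {'@left_troll_1': 2}
```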

Ahmer Arif, a PhD student at the time, conducted in-depth qualitative research on these Russian troll accounts (Arif et al. 2018). He found several that enacted multi-dimensional online personas, across platforms, that played on stereotypes of Black Americans (on one side) and white US conservatives (on the other). They were impersonating activists and also modeling what online activism looked like—reflecting norms but also, to some extent, shaping them. Often their content wasn’t superficially problematic, for example, tweeting about “strong Black voices” on the left or support of US veterans on the right. In other places they were sowing and amplifying division. Some of their content was among the most vitriolic content in the space (e.g., advocating for violence against police on the left and using racial epithets on the right). In a few places, we can see them holding arguments with themselves—like a puppeteer, having one of their accounts on the left “fight” with one of their accounts on the right.

The events of 2016 led to increased awareness of our collective vulnerabilities to manipulation on social media. In the aftermath, there were numerous media articles and research studies documenting those weaknesses, as well as considerable work by social media platforms to attempt to address them. When we reflect on the story of online disinformation in 2016, we often think of it, predominantly, as foreign in origin, perpetrated by inauthentic actors (or fake accounts), and coordinated by various agencies in Russia. That was not the whole story, but it was a simple one that allowed researchers, media, and platforms to focus on outside actors and top-down campaigns.

The 2020 Election: Participatory Disinformation

Disinformation around the 2020 election—specifically the effort to produce and spread misleading claims of voter fraud—looked very different. That effort was primarily domestic, largely coming from inside the United States. It was authentic, perpetrated in many cases by “blue check” or verified accounts. And the 2020 disinformation campaign wasn’t entirely coordinated, but instead largely cultivated and even organic in places, with everyday people creating and spreading disinformation about the election.

In recent work, my colleagues and I have attempted to explain how this campaign worked (Prochaska et al. 2023; Starbird et al. 2023). First, political elites set a false expectation of voter fraud. For example, in a tweet posted in June of 2020, then-President Trump claimed the election would be “rigged,” that ballots would be printed (and ostensibly filled out) in foreign countries, and that this would be “the scandal of our times.” This “voter fraud” refrain was repeated over and over again in the months leading up to the election. It became a frame through which politically aligned audiences would interpret the events of the election. And it led to a corrupted sensemaking process, where everyday people misinterpreted what they were seeing and hearing about as “evidence” of voter fraud, eventually generating hundreds of false claims and misleading narratives.

Our research team conceptualized the spread of “voter fraud” rumors about the 2020 election as participatory disinformation, with elites (in media and politics) collaborating with social media influencers and everyday people to produce and spread content for political goals. Participatory disinformation takes shape as improvised collaborations between witting agents and unwitting (though willing) crowds of sincere believers. These collaborations follow increasingly well-worn patterns and use increasingly sophisticated tools. We theorize that participatory disinformation is becoming structurally embedded into the sociotechnical infrastructure of the internet.

Conclusions

So what next? Unfortunately, the challenges of understanding online rumors and mitigating harmful disinformation and manipulation remain. On top of the dynamics described here, there are additional concerning trends. Advances in generative AI threaten to supercharge the spread of deceptive content. Under political pressure, many social media platforms have stepped back from their efforts to address misinformation and combat manipulation. Platform transparency is waning as well. The kinds of analyses I describe above are no longer possible—at least not on an academic budget. Researchers like myself are under attack from online conspiracy theorists and political operatives (Nix et al. 2023).

But I am still hopeful that we can turn the tide on the online disinformation problem. With challenge comes opportunity. Researchers from diverse fields will need to develop new methods to study new platforms under new constraints. We will have to work to understand both the risks of generative AI and the possibilities of employing it to support more trustworthy information spaces. And we will need to design, deploy, and evaluate potential remedies—based on a nuanced understanding of the evolving, participatory nature of the problem. We will not solve all of this with a single new design feature, platform, policy, or educational program. It is going to require all of the above and more. But I encourage those working in this space to keep chipping away.

References

Allport GW, Postman LJ. 1945. Section of psychology: The basic psychology of rumor. Transactions of the New York Academy of Sciences 8(2; series II):61-81.

Arif A, Stewart LG, Starbird K. 2018. Acting the part: Examining information operations within #BlackLivesMatter discourse. Proceedings of the ACM on Human-Computer Interaction 2(CSCW):1-27.

Kapferer JN. 2017. Rumors: Uses, Interpretation and Necessity. Routledge.

Maddock J, Starbird K, Al-Hassani HJ, Sandoval DE, Orand M, Mason RM. 2015. Characterizing online rumoring behavior using multi-dimensional signatures. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW 2015):228-241.

Madrigal A. 2013. Hey Reddit, Enough Boston Bombing Vigilantism. The Atlantic, Apr 17.

Nix N, Zakrzewski C, Menn J. 2023. Misinformation research is buckling under GOP legal attacks. Washington Post, Sept 23.

Prochaska S, Duskin K, Kharazian Z, Minow C, Blucker S, Venuto S, West JD, Starbird K. 2023. Mobilizing Manufactured Reality: How Participatory Disinformation Shaped Deep Stories to Catalyze Action during the 2020 US Presidential Election. Proceedings of the ACM on Human-Computer Interaction 7(CSCW1):1-39.

Shibutani T. 1966. Improvised news: A sociological study of rumor. Indianapolis: The Bobbs-Merrill Company Inc.

Starbird K. 2017. Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on Twitter. Proceedings of the International AAAI Conference on Web and Social Media 11(1):230-239.

Starbird K, DiResta R, DeButts M. 2023. Influence and improvisation: Participatory disinformation during the 2020 US election. Social Media + Society 9(2). https://doi.org/10.1177/205630512311779

Starbird K, Maddock J, Orand M, Achterman P, Mason RM. 2014. Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston Marathon bombing. Proceedings of iConference 2014. https://doi.org/10.9776/14308

Stewart LG, Arif A, Nied AC, Spiro ES, Starbird K. 2017. Drawing the lines of contention: Networked frame contests within #BlackLivesMatter discourse. Proceedings of the ACM on Human-Computer Interaction 1(CSCW):1-23.

 


[1] https://democrats-intelligence.house.gov/uploadedfiles/exhibit_b.pdf

About the Author: Kate Starbird is associate professor, Department of Human Centered Design & Engineering (HCDE), and director, Center for an Informed Public, the University of Washington.