In This Issue
Voting Technologies
June 1, 2007 Volume 37 Issue 2

Bridging Science, Technology, and Politics in Election Systems


Author: R. Michael Alvarez, Erik K. Antonsson

After the 2000 presidential election, Caltech and MIT initiated the Voting Technology Project to address problems with voting systems.

Shortly after the tumult of the evening of November 7 and the morning of November 8, 2000, the presidents of Caltech and MIT challenged us to solve the technological problems that had arisen in the election, especially with the punch-card voting systems that were widely disparaged after Florida’s presidential contest. Our initial research team spanned the continent and involved two campuses with different research and administrative cultures. The team also spanned many disciplines—computer science, economics, human-factors research, mechanical engineering, operations research, and political science. In addition to taking advantage of the faculty resources on both campuses, the group included staff and students, both undergraduate and graduate.

For better or worse, the more our research team studied what happened in the 2000 presidential election, the more we became convinced that the problems could not be easily resolved because, in addition to technology, they involved people, procedures, and politics. The work of the Caltech/MIT Voting Technology Project (VTP) in the immediate aftermath of the 2000 presidential election was controversial, but it served as a platform for research and reform in the years that followed. In this article, we discuss how we undertook this research project, how it has evolved over the past seven years, and the issues we believe are critical to advancing the science of elections.

Historical Background
Prior to the 2000 presidential election, little academic research had been done on voting technology or election administration (Alvarez and Hall, 2004). In addition, most of the work conducted prior to the turn of the twenty-first century was not multidisciplinary. Research was primarily published by historians, political scientists, scholars of public administration, or technologists—written primarily for specialists in their fields of interest. Thus, when the public controversy over the 2000 presidential election arose and the presidents of Caltech and MIT called upon us to initiate a major research and policy project, we found little research literature on voting technology to draw upon and almost nothing on voting as an interrelated system of equipment, people, media, laws, and regulations.

Perhaps not surprisingly, we found this situation profoundly troubling. Voting and elections have been described as the “DNA of democracy,” and, clearly, the system of government in the United States depends on fair and open elections. Nevertheless, no systematic understanding of elections or voting systems had been developed. The situation was, and to a significant degree still is, analogous to medicine at the end of the nineteenth century—based largely on empiricism, not backed by science, supported by ad hoc apparatus, and practiced by individuals with a wide range of competency levels and training. Fortunately, our republic has been protected by diligent, hardworking election officials, people of extraordinary good will, and a diversity of election apparatus and technologies. The latter—the current (and historical) patchwork of election equipment used in the United States—kept problems confined to local areas, rather than causing systemic harm.

In many ways, the 2000 presidential election revealed the potential for the development of a new field of academic research, the study of voting technology, which could have a profound impact on the understanding, conduct, and credibility of elections. Even relatively simple questions, such as the criteria that should be used to evaluate the performance of voting technologies and the metrics that might be used in such an evaluation, had not been addressed in readily available, established, peer-reviewed research literature. Obtaining information about voting technologies turned out to be not only difficult, but, in a distressingly large number of cases, impossible. We could not even determine how much states and lower-level election jurisdictions spend annually to administer their elections. Not surprisingly, this made our research much more difficult than we initially imagined. Thus, most of our early work immediately after the 2000 election was focused on data collection.
__________________________

In 2000, no systematic
understanding of elections
or voting systems had
been developed.
__________________________

We struggled to define appropriate criteria by which to evaluate the performance of voting technologies, especially in light of the difficulties of determining voter intent in Florida counties that used pre-scored punch cards (where overvotes, undervotes, and partially separated “chads” bedeviled attempts to establish a single, accurate vote count). Finally, members of the VTP research team came up with an important metric—the “residual vote”—the percentage of ballots cast in an election jurisdiction that did not produce a valid vote in a specific race, most importantly in the top-of-the-ballot race, usually for president or governor (Ansolabehere, 2004; Ansolabehere and Stewart, 2006; Sinclair and Alvarez, 2004).
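As a purely illustrative sketch (not code from the VTP), the residual-vote rate reduces to a simple ratio: the share of ballots cast that did not record a valid vote in the race of interest. The function name and the example totals below are hypothetical.

```python
def residual_vote_rate(ballots_cast: int, valid_votes_top_race: int) -> float:
    """Fraction of ballots cast that did not produce a valid vote in the
    top-of-the-ballot race (overvotes and undervotes, aggregated)."""
    if ballots_cast <= 0:
        raise ValueError("ballots_cast must be positive")
    return (ballots_cast - valid_votes_top_race) / ballots_cast

# Hypothetical jurisdiction: 100,000 ballots cast, but only 97,500
# recorded a valid presidential vote.
rate = residual_vote_rate(100_000, 97_500)
print(f"{rate:.1%}")  # 2.5%
```

Note that, as discussed below, the numerator deliberately lumps together overvotes (too many marks) and undervotes (no mark), since jurisdictions rarely report the two separately.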

The residual-vote metric, as used in our preliminary study of the accuracy of voting machines, indicated that punch-card voting systems performed relatively poorly. But, to our surprise, we also found that the electronic voting systems in use in 2000 and earlier elections had relatively high residual-vote rates (a finding that subjected us to much public criticism).

In the years since, the residual-vote metric has been used in many peer-reviewed studies and has become a standard in studies of the performance of voting systems across time and space (e.g., Stewart, 2006). But this metric is far from perfect. First, it aggregates overvotes and undervotes, thus making it impossible to differentiate two distinct phenomena. Second, data for overvotes and undervotes are rarely reported by election jurisdictions.

Despite these limitations, the residual-vote metric is extremely useful. It can be computed consistently across election jurisdictions and across elections. So, even though it is flawed, it currently provides the best source of comparative data for studying the performance of voting technologies. These data, for example, enabled VTP researcher Charles Stewart III to examine the effects of changes in voting technologies, both specifically, in states such as Georgia, and throughout the nation (Stewart, 2006).1
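Used comparatively, the same ratio lets one rank jurisdictions, or track a single jurisdiction across elections, by residual-vote rate. The jurisdiction names and totals below are invented for illustration; real figures would come from official canvass reports.

```python
# Hypothetical per-jurisdiction totals: (ballots cast, valid votes in
# the top-of-the-ballot race). All data here are invented.
jurisdictions = {
    "County A (punch card)": (50_000, 48_600),
    "County B (optical scan)": (80_000, 79_200),
    "County C (DRE)": (60_000, 59_100),
}

# Rank jurisdictions from lowest to highest residual-vote rate.
for name, (cast, valid) in sorted(
    jurisdictions.items(),
    key=lambda kv: (kv[1][0] - kv[1][1]) / kv[1][0],
):
    rate = (cast - valid) / cast
    print(f"{name}: {rate:.2%}")
```

A comparison like this is only meaningful if ballots-cast totals are reported consistently across jurisdictions, which, as noted above, was one of the VTP's earliest data-collection headaches.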

_________________________

The residual-vote metric
can provide comparative
data on the performance of
voting technologies.
_________________________

Despite the challenges we faced, the VTP has made considerable progress in developing and sustaining a productive research agenda. We issued a series of major policy reports, beginning in June 2001 and continuing to the present. We helped shape the Help America Vote Act (HAVA) in 2002, and since then we have provided many forms of assistance to governments across the nation working to implement the provisions of HAVA. We have hosted a number of research- and policy-oriented conferences and workshops (including three in the past year, one on voter registration and authentication, one on studying election fraud, and a unique workshop to help researchers and vendors identify research opportunities on voting technologies).2 To date, VTP researchers have published two books (two more are forthcoming in the next year), 19 journal articles, 24 policy reports, more than 50 working papers, and five student theses. We have also studied election processes in the United States (e.g., in California, Georgia, Massachusetts, New Mexico, and Utah) and abroad (e.g., in Argentina, Estonia, and Ireland).

Difficulties of Studying Voting Technology
As we determined early in our research, we expect voting systems to perform many tasks, including authenticating voter legitimacy, recording voter intent accurately, enabling confidential and anonymous voting, providing equal protection and equal access, preventing fraud, tallying votes accurately, complying with standards and legal requirements of voting systems, and inspiring public confidence in elections. However, we also determined early on that these functions could not be analyzed based solely on the perspectives, theories, and methodologies of a single academic discipline.

Analyses of each function of a voting system require multidisciplinary research. For example, voter authentication, a procedural issue, is a matter of state law and local practice. It may influence the behavior of voters and the strategies of candidates, and it will increasingly be a subject for the development of new technologies. To fully understand voter authentication, then, requires that researchers break out of traditional academic disciplines and collaborate with researchers in other disciplines—for example, Michael Alvarez (a political scientist) working with Erik Antonsson (a mechanical engineer)! Analyzing voter authentication might involve scholars who specialize in law, political behavior, public administration, and technology.

We have also learned since 2000 that studying these problems requires complex collaborations that reach beyond the academic sector altogether. We must learn from, and work with, three other stakeholder groups—election officials and policy makers, vendors of voting systems, and advocates and other political groups and individuals—to produce research results that can effectively resolve the problems observed in the American electoral process.

Each of these groups has knowledge and resources that we need for our research to be effective. Election officials and voting-system vendors have intimate knowledge of how elections are run in America; election officials in particular have the kinds of statistical data we need to study election results. Advocates and political entities (candidates, parties, and other groups organized for political purposes), who either aspire to elected office or are using the electoral process for policy or political purposes, have a direct stake in the way elections are conducted. Although we must remain entirely neutral and uninvolved in partisan or ideological political struggles, the advocate and political communities often have knowledge and resources we can use to make our work more effective.

Building a Science of Elections
After looking back at how our research project has evolved since 2000 and looking ahead to the daunting research problems that still need work, we advocate focusing on the development of a new science, the study of elections. This new science must be multidisciplinary, will require nontraditional research collaborations (spanning academic institutions and bridging the academic-private sector divide), and will require the development of a scientific infrastructure. We have already discussed the need for collaborations and multidisciplinary research. In this section we address the issue of infrastructure.

Developing a new scientific infrastructure will be difficult, and the primary issue that must be addressed is research funding. To date, VTP has been funded primarily by grants from two private-sector foundations, the Carnegie Corporation of New York and the John S. and James L. Knight Foundation. These foundations, and a few others, have also supported research and policy projects on voting technology.

To date, the primary public-sector research-funding entity, the National Science Foundation (NSF), has issued only two sizeable grants for the study of voting technology—one to a University of Maryland-led group on the usability of voting technologies and one to a Johns Hopkins study of voting-technology security. Despite the long list of research areas mandated in HAVA, the U.S. Election Assistance Commission (EAC), the other main government funding entity, has not sponsored any major research projects that involve basic research on voting technology or election administration. Limited by a lack of funds, EAC has only supported a few small, narrowly focused research projects (e.g., the collection of election-administration survey data for recent federal elections).

The lack of large-scale funding, especially from the federal government, has made it extremely difficult for academic researchers to engage in the sustained, focused research necessary for meaningful studies of voting technology. Given the many mandated studies listed in HAVA and the clear need for large-scale, sustained research on basic questions, such as voting-system usability, security, reliability, and accuracy, the federal government must provide funding, through both NSF and EAC, for the study of voting technology and election administration.

In addition, the academic research community engaged in the study of voting technology and election administration sorely needs highly visible, highly credible research outlets. Not a single peer-reviewed journal exists for studies of voting technology and election administration. Therefore, research studies in this new field are being published in a multiplicity of outlets, ranging from non-refereed websites, blogs, conference proceedings, and professional publications to highly credible, peer-reviewed academic journals. However, our experience (and the experience of many colleagues) shows that publication in the latter is extremely difficult (even though these are the kind of outlets that could establish the credibility of a research report) because all submissions are subject to the usual type of academic review, replication, and discourse. Reviewers in political science journals, for example, often demand a strong theoretical rationale for an empirical analysis of residual-vote rates; but, because there is no strong theory to draw upon, it is nearly impossible to satisfy this otherwise typical demand in the political science peer-review process. At the other end of the spectrum, voting systems are not close enough to the cutting edge of technology to generate research results suitable for publication in scholarly engineering journals.
_____________________________

There are no peer-reviewed
journals for studies of
voting technology or
election administration.
____________________________


The lack of credible, peer-reviewed publication outlets is not just an academic concern. Under the present circumstances, it is difficult for policy makers and election officials to distinguish between legitimate and illegitimate research. We have heard this complaint frequently from election officials and policy makers in recent years. They clearly need access to a publication or research distribution system that identifies credible research.

Thus the second important step toward building a scientific study of elections is the development of new, scholarly, peer-reviewed journals for the publication of research results in voting technology and election administration. These publication and distribution outlets should take advantage of new technologies, such as web-based review and publishing, to shorten the time from submission to publication and provide readers with identifiably credible research.

Another alternative is for an entity, such as NSF, EAC, or even the National Academies, to develop an equivalent of the National Institutes of Health “PubMed”—a digital archive of biomedical and life-sciences research (especially government-sponsored research)—in the area of election science. This initiative, which might be called “PubVote” or “Election Science,” could collect digital copies of research related to voting systems and election administration by researchers and peer-reviewed academic research outlets. Thus it would be a one-stop destination for anyone searching for research results on these topics.

Finally, to facilitate collaboration and the dissemination of research results, we need better channels of communication among scholars and between the research community and the many other stakeholders mentioned earlier. One important function of VTP has been to convene periodic conferences and workshops on special topics (e.g., studies of voting technology, voter authentication, and voter registration). Recently, VTP sponsored a small workshop at Caltech of about two dozen researchers and representatives of the voting-technology industry for an intense one-day discussion of research questions and collaborations. These gatherings, however, have been difficult to put together, difficult to facilitate, and, frankly, have used up scarce financial resources that could have supported more research.

We must find ways to initiate these conversations and to ensure the free flow of research ideas among researchers and between researchers and other stakeholders. Such conversations will improve academic research and will ensure that results get into the hands of those who can take action to improve the American electoral process.

Conclusion
The good news is that election systems, and elections themselves, are improving. The harsh light that illuminated the flaws in the 2000 national election led to academic research, provoked legislative action, spurred innovation in the marketplace, and educated the public about the strengths and weaknesses of elections in the United States. But as a nation, we can, and we must, do better. We need a better understanding of the interrelationships between voter registration and authentication systems and voter turnout. We need to develop clear standards for counting or disqualifying ballots. We need to understand how to reliably determine voter intent. In short, we need to understand election systems, and that understanding will come about with the establishment of a legitimate field of election science.

Acknowledgment
The authors wish to thank Melissa Slemin for her assistance.

References
Alvarez, R.M., and T.E. Hall. 2004. Point, Click and Vote: The Future of Internet Voting. Washington, D.C.: The Brookings Institution Press.
Alvarez, R.M., S. Ansolabehere, and C. Stewart III. 2005. Studying elections: data quality and pitfalls in measuring the effects of voting technologies. Policy Studies Journal 33(1): 15–24.
Ansolabehere, S. 2004. Voting machines, race and equal protection. Election Law Journal 1(1): 61–70.
Ansolabehere, S., and C. Stewart III. 2006. Residual votes attributable to technology. Journal of Politics 67(2): 365–389.
Sinclair, D.E., and R.M. Alvarez. 2004. Who overvotes, who undervotes, using punchcards? Evidence from Los Angeles County. Political Research Quarterly 57(1): 15–25.
Stewart, C. III. 2006. Residual vote in the 2004 election. Election Law Journal 5(2): 158–169.
VTP (Voting Technology Project). 2001. Voting: What Is, What Could Be. Available online at http://www.vote.caltech.edu/media/documents/july01/July01_VTP_Voting_Report_Entire.pdf.

Footnotes

1 See also Residual Vote by State, 1996 and 2000, on p. 89 of Voting: What Is, What Could Be, which shows that VTP researchers could not produce estimates of residual-vote rates for a number of states (VTP, 2001). See also Alvarez et al., 2005.
2 For a complete list of events sponsored and organized by VTP since 2000, see http://www.vote.caltech.edu/events.htm.

About the Authors: R. Michael Alvarez is senior fellow, USC Annenberg Center for Communication; professor of political science, California Institute of Technology; and co-director, Caltech/MIT Voting Technology Project. Erik K. Antonsson is professor of mechanical engineering, California Institute of Technology; director of research, Northrop Grumman Space Technology; and a member of the Caltech/MIT Voting Technology Project. This article is based on a presentation given on February 8, 2007.