In This Issue
Voting Technologies
June 1, 2007 Volume 37 Issue 2

Elections and the Future of E-Voting (editorial)


Author: Lewis M. Branscomb

Editor’s Note

In February, we honored Bill Wulf with a symposium on a subject of his choosing, “The Impact of Technology on Voting and Elections in the 21st Century.” Bill has been a tireless advocate before Congress and the nation for improving our capability and increasing our knowledge in all the areas of cybersecurity. When he says, “engineering is design under constraints,” he understands that the most important constraints are not imposed by the laws of physics. They are the constraints that arise from the limitations of human individuals, of socioeconomic institutions, of human preferences, fears, and sentiments such as elation and outrage. Nowhere in our society does engineering confront these human constraints more seriously than in the functions that sustain our democracy—registering to vote, voting, tallying the vote, reporting exceptions and results, and accepting the certified outcome with confidence that it reflects the collective intent of eligible voters.

The Constitution leaves the rules and means by which elections are conducted largely to the states, rather than the federal government. Elections often have not only federal impact (for presidential electors, senators, and representatives) but also state and local impacts (for governor and many other offices), and they usually take place on the same day and even the same ballot.

Almost every important decision about elections is made by state governments (mostly by secretaries of state, who in many cases certify election results with no oversight). Other elected officials at the local level also have important responsibilities, sometimes conflated with their own candidacies. We depend on their commitment to free and fair elections, as well as on their competence in administering rules and processes, certifying equipment, and training election workers.

Most election officials, even though they have been appointed by a particular party, have been fair and sincere. They have understood the tools used to interpret and count votes, and these functions have been carried out without discrimination or fraud. But today, with more complex computer-based election technologies and bitterly contested elections, the burdens of objectivity and competence and the need for constant learning have increased dramatically. To add to these burdens, the financing of election processes is often inadequate. The training of poll workers and the testing, recounting, and auditing of votes—in fact, almost everything to do with the administration of voting systems—have been left to counties or parishes (in New England, to townships), with their hundreds of precincts and differing ballot styles.

Elections were never perfect, even in the world of paper ballots stuffed into boxes and lever machines with outputs read from dials. Today, volunteers are faced with electronic voting machines manufactured and maintained by private firms, whose software has not been rigorously tested and whose source code is not available to experts of all political persuasions. A few years ago, one of those companies was reportedly led by an executive who made no secret of his intention to do whatever he could to re-elect the president.

Voting technologies must also make voting accessible to people who speak dozens of languages, have a wide range of disabilities, or may be serving overseas. They must present understandable ballots for voting on local judges, numerous propositions, and candidates for federal office and for all levels of state and local government. These nonpartisan requirements present many difficulties for election officials, who are always trying to minimize expenditures of time and money.

Add to this list of constraints an issue on which Democratic and Republican state legislators agree: gerrymandering is a great way to keep all of them in office. Only the fairest, least corrupt, most understandable, and least discriminatory voting system can give voting minorities in such districts a chance to vote out incumbents protected by name recognition and district boundaries drawn in their favor.

Can new technologies improve the situation? Can they ensure that the intent of the voters is reflected in the official tally of the vote? What requirements should we place on voting technologies and procedures to ensure they are credible when margins of victory may be as small as a fraction of a percent? Voters must have a substantive reason for trusting that their intentions have been correctly interpreted and recorded, and that their votes have been counted correctly. But voter confidence is not enough. The machines and procedures must be proven to perform reliably.

And if the vote in an election is certified by partisan election officials with the result that their party’s candidate wins by a few hundred votes out of a quarter of a million (as happened last fall in the 13th Congressional District of Florida), the candidate certified to have lost will surely demand a recount if allowed to do so. But who will pay for this recount, and how will it be done? If the election used touch-screen technology, with or without a paper record, will an electronic recount more accurately reflect voter intent? Or will the machine’s computer memory repeat the same questioned total each time?

What if repeated recounts—of paper or electronic ballots—give a random variation of a few hundred votes each time? Many election officials believe that, even when people do their best, there is a minimum below which non-repeatable errors cannot be reduced. But there are techniques for interpreting and counting that can yield more precise results with each iteration. These are rarely used, partly because hand counting is simply expected to be fallible. In New Mexico, I am told, a poker hand played by the contesting candidates has been used to determine the outcome of a “tie.” How should voting procedures deal with such “ties” or near misses?
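The arithmetic behind this question can be sketched in a few lines. The simulation below is purely illustrative—the ballot count, the per-ballot misread rate, and the number of recount passes are assumptions, not figures from any real election—but it shows how even a tiny chance of misreading each ballot produces a few hundred errors per pass, with totals that vary from one recount to the next.

```python
import random

# Illustrative sketch only: the ballot count, per-ballot misread rate,
# and number of recount passes are assumptions, not data from any
# real election.
def recount_misreads(n_ballots=250_000, misread_rate=0.001, seed=None):
    """Count how many ballots are misread in one recount pass."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_ballots) if rng.random() < misread_rate)

# Five independent recount passes over the same quarter-million ballots.
passes = [recount_misreads(seed=s) for s in range(5)]
print(passes)  # a few hundred misreads per pass, varying between passes
```

With these assumed parameters, each pass misreads roughly 250 ballots (give or take a few dozen), so if the certified margin is itself only a few hundred votes, repeated recounts can plausibly straddle it—which is precisely why the margin, not the raw count, determines whether a recount can settle the question.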

A related question arises about systematic errors in technological voting systems. Such errors are not reduced by repetition; only human intervention—further study and analysis, and a hand count—can reduce them. Are some voting technologies, however accurate in the absence of undiscovered software errors, vulnerable to being perturbed enough to tip an election? Are election officials willing to permit enough public oversight to convince the public that an election was fair and that no information was hidden or simply overlooked? Can the public be confident that technology and election procedures together can handle a very close election, such as the recent contest in Florida, with accuracy, transparency, verifiability, reliability, security, anonymity, and privacy?

The papers in this issue address many of these concerns. An early technical response to the razor-thin margin of victory for President Bush in 2000 was the creation of the Voting Technology Project (VTP), a collaborative initiative by Caltech and MIT. Michael Alvarez, professor of political science and co-director of the project, and Erik Antonsson, professor of mechanical engineering at Caltech and a participant in the project, address the importance of bridging the gap between technology and politics in voting. Commissioner Gracia M. Hillman describes the work of the Election Assistance Commission, which was established by Congress in 2002 to assist states and ensure that elections in 2004 and thereafter would be free and fair.

David Jefferson, computer scientist at Lawrence Livermore National Laboratory and chair of the California Secretary of State’s Voting Systems Technology Assessment and Advisory Board, addresses the mysterious undervote of about 18,000 ballots in the 2006 congressional election in one county of Florida’s 13th Congressional District. Congressman Rush Holt describes his bill, HR 811, which he introduced in February 2007, to develop requirements for electronic voting technology and reform current federal election laws.

Eugene Spafford, professor of computer science and electrical and computer engineering at Purdue University, discusses how voter confidence can be increased through better verification of voting systems. Michael Ian Shamos, Distinguished Career Professor in the School of Computer Science at Carnegie Mellon University, argues that the most serious problems with voting technologies can be addressed by better engineering design.

Our hope is that these articles will contribute to the realization of an election and voting environment that satisfies the needs of voters and election officials. Under rules like those proposed by Congressman Holt, a voting technology might emerge that is acceptable to all of the states. New rules and reliable, trustworthy voting systems might win over voters and become known as the solution that overcame the constraints and saved our democracy from itself.

About the Author: Lewis M. Branscomb is Professor Emeritus, John F. Kennedy School of Government, Harvard University; Visiting Faculty, School of International Relations and Pacific Studies; and an NAE member.