Biometric Identification Machine Failure and Electoral Fraud in a Competitive Democracy1
Miriam Golden*, Eric Kramon†, George Ofosu*, and Luke Sonnet*
*University of California, Los Angeles; †George Washington University
August 11, 2015
Word Count: 10,675 (including notes, references, tables and figures;
excluding appendices)
Version 5.1
Key words: elections, electoral fraud, biometric identification, Ghana
1Corresponding author: Miriam Golden, Department of Political Science, University of California at Los Angeles, CA 90095; email: [email protected]. Earlier versions of this paper were presented at the 2014 annual meetings of the American Political Science Association, August 28–31, Washington, D.C., at EGAP: Experiments in Governance and Politics 13, University of California at Los Angeles, February 20–21, 2015, at Columbia University, April 15, 2015, and at the University of Michigan, April 24–25, 2015. We are grateful to Jasper Cooper and Tara Slough for the extensions they suggested. In Ghana, we acknowledge the collaboration of our research partner, the Centre for Democratic Development, as well as the Coalition of Domestic Election Observers. We also thank our 300 wonderful field assistants for data collection. Joseph Asunka and Sarah Brierley collaborated on the design and implementation of the original project. Bronwyn Lewis provided research assistance. Funding came from the U.K.’s Ghana office of the Department for International Development, a National Science Foundation Grant for Rapid Response Research (RAPID) SES–1265247, and the UCLA Academic Senate, none of which bears responsibility for the results reported here. This research was approved by the University of California at Los Angeles IRB #12–001543 on October 26, 2012.
Abstract
We study election fraud in a competitive but not fully consolidated multiparty democracy. Results from a randomized field experiment are used to investigate the effectiveness of newly-introduced biometric identification machines in reducing election fraud in Ghana’s December 2012 national elections. We uncover a non-random pattern to the frequent breakdowns of the equipment. In polling stations with a randomly assigned domestic election observer, machines were about 50 percent less likely to break down than they were in polling stations without observers. We also find that electoral competition in the parliamentary race is strongly associated with greater machine breakdown. Machine malfunction in turn facilitated election fraud, including overvoting, registry rigging, and ballot stuffing, especially where election observers were not present. Our results substantiate that partisan competition may promote election fraud in a newly-established competitive democracy. They also show that domestic election observers improve election integrity through direct observation and also thanks to their second-order effects on election administration. [156 words]
1 Introduction to the Problem
Most countries in the world use elections to select their political leaders, but in new, fragile, or
unconsolidated democracies, the electoral process may be compromised by strategic manipulation
on the part of various actors. Election fraud is common in these settings. How fraud occurs, who
perpetrates it, and which preventive efforts are effective are still poorly understood.
At least two classes of responses have been mounted in the contemporary world to the prob-
lem of election fraud. The first involves deployment of election observers, especially teams from
international bodies whose missions include election integrity. Research shows that international
election observers operate as anticipated, successfully reducing election fraud (Enikolopov et al.,
2013; Hyde, 2007, 2010, 2011; Ichino and Schündeln, 2012; Kelley, 2012; Sjoberg, 2012). The
second response to election fraud involves the introduction of new technologies aimed at expos-
ing — and thereby reducing — malfeasance. Technological solutions, such as electronic voting
machines, polling station webcams, and biometric identification equipment, offer the promise of
rapid, accurate, and ostensibly tamper-proof innovations that are expected to reduce fraud in the
processes of registration, voting, or vote count aggregation. Little is known, however, about the
effectiveness of these technologies in reducing fraud, and it seems likely that this will vary ac-
cording to the political context (Bader, 2013). Although we can be relatively certain that election
observation reduces (without entirely eliminating) election fraud, we do not know whether or when
new technologies operate as intended.
Using biometric markers, such as fingerprints, that are almost impossible to counterfeit,
biometric identification machines authenticate the identity of the individual (Jain, Hong and Pankanti,
2000). Biometric identification is particularly useful in settings where governments have not pre-
viously established reliable or complete paper-based identification systems for their populations
(Gelb and Decker, 2012). Thanks to their supposed in-built capacity to prevent or substantially
reduce fraud in the distribution of government allocations or services, biometric identification sys-
tems are already in widespread use for voter registration. As of early 2013, 34 of the world’s low
and middle income countries had adopted biometric technology as part of their voter identification
system (Gelb and Clark, 2013) and 25 sub-Saharan countries have held elections with biometric
voter registers in place (Piccolino, 2015, p. 5).
Despite the obvious difficulties in counterfeiting biometric markers, studies of biometric
authentication systems have questioned whether they are tamper-proof in the real world. In India,
a country in the process of distributing national identity biometric smartcards for the delivery
of numerous government goods and services, including pensions and poverty relief, concern has
been raised about the potential of local vested interests to strategically manipulate the process in
ways that subvert the accurate delivery of government goods to intended recipients (Muralidharan,
Niehaus and Sukhtankar, 2014). This concern overlaps with results from research claiming that
polling place webcams reduce ballot stuffing but do not reduce electoral fraud overall; instead of
ballot stuffing, incumbents switch to other methods of fraud that are out of sight of the camera
(Sjoberg, 2014). New forms of monitoring may only induce new forms of evasion.
In this paper, we report results of a study that uses a randomized field experiment to study
the impact of election observers on the malfunction of biometric identification machines. Our
study is set in Ghana during the 2012 national elections, when biometric identification machines
were introduced into every polling station in the country as a way to reduce the very high levels of
fraud known in particular to affect voter registration. We randomly select a large sample of elec-
toral constituencies and polling places in four of ten Ghanaian regions, home to half the country’s
population, and study whether election observers systematically affect machine malfunction. We
also report the effect of observers on fraud in order to facilitate the analysis of the complex rela-
tionship between observers, machine malfunction, and electoral fraud. The results regarding the
effect of observers on electoral fraud were first reported in Asunka et al. (2015) and are repeated
and extended here.
Our main results include a non-random pattern in machine breakdowns. We first docu-
ment that machines were much more likely to break down in polling stations without an election
observer present. Second, we provide (non-experimental) evidence that machine breakdown was
more prevalent in electorally competitive areas. Third, we also find that three markers of election
malfeasance — overvoting, registry rigging, and ballot stuffing — were more common in polling
stations affected by the breakdown of the biometric identification machines, especially when an
election observer was not present. We interpret results as evidence that individuals intended to in-
terfere with the operation of biometric identification machines and also took advantage of machine
breakdowns to commit electoral fraud.
As far as we are aware, ours is the first study to find evidence of widespread and poten-
tially consequential tampering with biometric identification equipment used at scale in a real-world
setting. More than a third of polling stations without a trained, non-partisan domestic election ob-
server experienced machine malfunction, double the rate at stations with an observer. The extent
of the problems that we identify in the operation of the verification equipment in Ghana’s 2012
elections may be a transitory artifact of the initial roll-out of the hardware. Nonetheless, one impli-
cation of our study is that technological solutions are valuable but insufficient for solving political
problems when political interests have the incentive and ability to manipulate the actual opera-
tion of the technology. In the context of this investigation, our results underscore the importance
of independent and non-partisan election observation by trained personnel who are professionally
committed to clean elections.
The contribution of this paper is twofold. First, this is one of only a few studies to inves-
tigate the causal dynamics of election fraud in a competitive democracy (others include Cox and
Kousser (1981); Ichino and Schündeln (2012); Lehoucq (2002)). Drawing on substantive insights
from these earlier observational studies, we gather data using an experimental research design and
test whether partisan competition and party organizational capacity encourage fraud. Second, to
the best of our knowledge, this is the first systematic empirical investigation of how well biometric
identification machines operate in an electoral context.
The paper is organized as follows. We first discuss the reasons that electoral fraud occurs,
who commits it, and why they do so. From this, we draw out hypotheses. We then provide
information on the setting we study. A fourth section presents our research design. A fifth section
studies the patterns of breakdown of the biometric identification machines and a sixth investigates
whether machine breakdown is associated with higher rates of electoral fraud.
2 Theory and Hypotheses
Election fraud is widespread globally. Eighty percent of elections around the world are observed
by monitors in efforts to reduce fraud (Kelley, 2012). Fraud is of sufficient magnitude that it reportedly
affects the outcome for the executive branch of government in about a fifth of the world’s elections
(Keefer, 2002). Most elections that experience any significant level of fraud are in poor or middle
income countries or in countries with incomplete, new or unstable democratic institutions. Because
detecting fraud is difficult, understanding why it occurs and who perpetrates it constitutes a difficult
intellectual problem. We distinguish two sets of theories of the causes and perpetrators of election
fraud: incumbent-centered and party-centered theories. The first is relevant mainly to authoritarian
or semi-authoritarian settings and the second to democratic settings.
An incumbent-centered theory of election fraud derives from studies set in non-democratic
countries, defined as countries in which election outcomes do not exhibit uncertainty (Przeworski,
2008). Thanks to their control over the election administration, authoritarian incumbents are par-
ticularly well placed to commit election fraud. The incumbent-centered theory has been supported
by experimental results of studies in contemporary countries that document that election observers
reduce vote shares of incumbents in authoritarian or semi-authoritarian regimes (Callen and Long,
2015; Hyde, 2007; Enikolopov et al., 2013). These results imply that election fraud is centrally
orchestrated to benefit a sitting president (or other executive branch office holder) and that it is
perpetrated with the involvement of officials who are part of the body responsible for the administration of the election. Incumbent-centered theory resonates with historical research documenting
widespread election fraud during the extended process of democratization in Europe (Mares, 2015;
Ziblatt, 2009) and in Latin America (Baland and Robinson, 2008), when economic and political
elites utilized fraud to obstruct and delay democratization.
When there is no uncertainty over who will win the election — which occurs under non-
democratic political institutions — election fraud is not conducted in order to win the election.
This raises the question of why it takes place. One reason that incumbents commit fraud is to
increase their vote shares to levels that allow them to retain constitutional veto power (Magaloni,
2006). In these situations, election fraud is aimed at retaining a supermajority. Other reasons that
authoritarian leaders commit fraud are to signal to voters that potential opponents are few in number, that opposition is likely to be fruitless, and that the current rulers are invincible. In this
case, election fraud is aimed at discouraging anti-regime protest or the formation of an organized
opposition. The second set of reasons that incumbents commit fraud even when they know they
will retain power is thus informational (Simpser, 2013).
Democratic settings, which are marked by robust party competition and genuine uncer-
tainty over whether the sitting executive will retain office, naturally give rise to an alternative
theory of election fraud. In democracies, electoral competition is organized by political parties.
These are therefore the relevant actors with interests in committing election fraud. Election fraud
occurs when political parties use localized control over the administrative apparatus to rig the vote
or when they engage in intimidation or patronage-based threats over voters in order to gain votes
or reduce turnout. The heart of the theory of election fraud that we utilize is that it is committed by
political party agents in order to win competitive elections. The localized and incomplete control
over the election machinery exerted by any single party — even the governing party — means that
both the governing party and the opposition have the ability to engage in election fraud.
In a democratic context, we expect that more election fraud occurs where political parties
have more incentive and opportunity to carry it out. We study two hypotheses regarding electoral
fraud (similar to Asunka et al. (2015)). With respect to incentives, parties will seek to commit fraud
in order to increase their vote shares in the settings where electoral competition is most intense
(Lehoucq, 2002; Molina and Lehoucq, 1999). This will vary in systematic ways with the nature of
the electoral system (Birch, 2007) as well as locally with the specific balance of partisan forces.
Opportunities to commit fraud, finally, are reduced where other independent actors, including
other political parties as well as the courts and a free press, are able to monitor and report on
it. The deployment of trained and neutral election observers is one of the most common ways
that international and domestic actors seek to reduce electoral fraud (Hyde, 2011; Kelley, 2012).
The effectiveness of these observers lies in part in the domestic political context, and whether the
government self-commits to the rule of law and fair elections (Fearon, 2011). These considerations
generate in a natural and straightforward way the following two hypotheses that are testable across
localities within a single country:
H1 Incentives: Fraud should be more prevalent with greater partisan competition;
H2 Opportunity (a): Fraud should be more prevalent where election observation is less extensive or absent.
Theories of election fraud do not offer any particular guidance regarding the possible im-
pact of biometric verification machines other than the obvious expectation that this technology
reduces fraud. Because in Ghana the machines were introduced to every polling station in the
country in the 2012 election, we have no way to assess how effective this introduction was or the
size of the impact of the machines on the overall incidence of fraud. This would require that machines had been delivered to a random sample of polling stations, which was not the case. As
an alternative, we can make inferences based on observing what happens when a machine fails
to operate. This is not identical to a situation where no biometric verification machine is present
in the polling station, but it gives rise to a similar idea: namely, an operational biometric verifi-
cation machine reduces opportunities for interested parties to commit election fraud. If biometric
verification machines reduce election fraud, then we can expect:
H3 Opportunity (b): Fraud should be more prevalent where machines break down.
These hypotheses are all relatively intuitive. Less obvious is how much election fraud
to expect in a context such as Ghana’s. The country boasts a stable, well-functioning two-party system, a relatively low degree of corruption, a vibrant and rapidly growing economy, and a
respected system of courts and law. These characteristics suggest that election fraud will be con-
tained. But it is nonetheless a relatively new democratic system, where the rules of the game are
still being internalized, where the rents of public office are highly valued, and where public confi-
dence in the political system is still evolving. The latter characteristics make Ghana vulnerable to
election fraud.
3 The Setting
Ghana is one of sub-Saharan Africa’s democratic success stories. Home to a population of 25
million, the country has a competitive, stable two-party system. Alternation of the political party
holding the presidency has twice occurred (2000 and 2008) since adoption of the 1992 constitution
and establishment of the country’s Fourth Republic. The two major parties — the New Patriotic
Party (NPP) and the National Democratic Congress (NDC) — enjoy support from roughly equal
numbers of voters, together claiming more than 95 percent of the vote, and the NDC and the NPP
hold all but four of the country’s 275 parliamentary constituencies. In the 2008 presidential elections, the NDC won the executive with a margin of 40,000 votes out of an electorate of 14 million,
illustrating the highly competitive nature of national politics. Electoral violence is relatively rare,
voter turnout is high, and the NDC and NPP exhibit modest but genuine programmatic differences
as well as partially distinct social bases of support.
The president is elected by majority vote in a single, nationwide district. The country’s
unicameral parliament comprises 275 representatives elected from single-member constituencies,
which constitute the main levels of party organization. Elections are held simultaneously for par-
liament and presidency. Partisan competition is not evenly distributed across the country or its ten
regions; each party has stronghold areas. The NPP is especially concentrated in the Ashanti region,
whereas the NDC receives a particularly concentrated vote share in Volta (Fridy, 2007; Morrison
and Hong, 2006). These two regions are commonly thought of as party strongholds, whereas
the other eight regions exhibit greater partisan competition. We include constituencies from both
stronghold regions in our sample as well as from two regions that are highly competitive.
Elections in Ghana have been systematically observed by a Coalition of Domestic Election
Observers (CODEO) since 2000, building on experience observing the 1996 election. The orga-
nization recruits and trains professionals — typically school teachers and college students — in
neutral, non-partisan observation of the electoral process. Thanks to their professions, observers
benefit from high status in their communities; for this reason, CODEO assigns observers to polling
stations in their home areas, where observers are likely to be personally known and to enjoy com-
munity respect. CODEO itself is nationally well known, with a strong public reputation for its
work in improving electoral integrity. Its observers are recognized and accredited by the Electoral
Commission of Ghana (EC) and have the legal right to enter and observe election proceedings.
Each CODEO observer is assigned a single polling station to observe for the whole of the election
day, including the public counting of votes that occurs at the end of the process. Polling places
selected for observation are not identified publicly in advance of the election itself, meaning that
officials and voters at every polling station may realistically anticipate an observer. Observers are
distinguishable by uniforms and identifying paraphernalia (tee-shirts, hats, etc) and carry official
accreditation materials with them.
The training of election observers includes instructions to observe the EC mandate and to
not interfere in election proceedings. The official CODEO training manual opens with explicit
instructions not to help any aspect of the voting processes. The manual’s first two rules and regu-
lations are that “An Observer shall not offer advice or give direction to or in any way interfere with
an election official in the performance of his or her duties” and “An Observer shall not touch any
election material or equipment without the express consent of the Presiding Officer at the polling
station or the Returning officer at the constituency center. Observers may not involve themselves
in the conduct of the election” (Coalition of Domestic Election Observers, 2012, p. 6). Observers
are trained to contact constituency-level CODEO supervisors if election materials, such as ballots,
are needed, and observers also record administrative or other irregularities on incident forms. In
2012, CODEO’s observers were trained to use SMS to report irregularities and disruptions to a
national data center. If an incident is serious, CODEO has communication structures in place to
immediately alert appropriate legal and security officials. CODEO also releases press statements
throughout election day and its Accra election headquarters serve as a major locus of public in-
formation about the process. Deliberate election malfeasance committed in front of a CODEO
observer is likely to be reported nationally and to elicit a speedy response by security forces.
During the voting process, the observer usually places himself at a distance from other
individuals allowed into the polling place. These include the presiding officer, a security officer,
and a representative designated by each of the major political parties, as well as those persons in
the process of voting. No one else is legally permitted to enter the polling station, which, although usually outdoors, is clearly demarcated.
Despite two decades of election observation, fraud was known to have occurred regularly
in elections in Ghana. Perhaps thanks to the very effectiveness of election observation during the
voting process, fraud appears to have been especially marked in the pre-election phase, which is
also observed by CODEO but less extensively. Implausibly large numbers of names appeared on
the voter rolls in the aughts (Oduro, 2012). Earlier experimental research in Ghana confirmed
this, and also identified spillover effects of CODEO observers on fraudulent registrations (Ichino
and Schündeln, 2012). Spillovers were interpreted to mean that political party operatives were
relocating fraudulent voter registration efforts to nearby polling stations when a CODEO observer
was present during the registration process. This suggests that party operatives are experienced in
reacting strategically to monitoring intended to reduce fraud in the electoral process.
Biometric voter registration and polling place identification processes were introduced by
the Electoral Commission for the concurrent parliamentary and presidential elections of 2012 in a
deliberate attempt to eliminate the irregularities and delays that had occurred in previous elections.
The entire electorate was reregistered using biometric markers (ten fingerprints) in a six-week pe-
riod in spring 2012. New voter identification cards were issued featuring head shots. Reregistration
was effective in identifying 8,000 double registrations, of which 6,000 were judged intentional
(Darkwa, 2013). Verification machines were delivered to all 26,000 polling stations in the country
prior to December’s election. The EC also purchased another 7,500 backup machines for use in
the event of equipment failure. Because the equipment is battery-operated, spare batteries accom-
panied each machine. Legal stipulations meant that only persons whose identities could be verified
biometrically would be permitted to vote on December 7.
Approximately 19 percent of polling stations experienced a breakdown of the verification
machine at some point during the day, according to CODEO’s reports (Coalition of Domestic Elec-
tion Observers, 2013).1 Breakdowns appear associated with battery overheating and exhaustion;
when battery replacement was attempted, the machines froze up and operation was restored only
after a minimum of two hours. Breakdowns thereby delayed voting. By noon, Ghana’s President,
John Dramani Mahama, appealed to the Electoral Commission to allow individuals with valid voter
ID cards to vote at polling stations where biometric verification machines were not functioning.2
The Electoral Commission rejected the proposal, instructing their local officials to permit voting
1In our sample, we find machine breakdowns in 25 percent of polling stations but in 17 percent of polling stations with a CODEO observer. The CODEO figure reflects information collected only from the latter, so our sample result is approximately the same as the national figure for observed polling stations.
2“Let people with valid IDs vote; verification or not — Prez Mahama,” myjoyonline.com, (2012, December 7, 15:33 GMT), http://politics.myjoyonline.com/pages/news/201212/98391.php; accessed 4 June 2014.
to continue into a second day where necessary. This occurred at a handful of polling stations.
Breakdowns mainly caused delays in voting that did not require extension of the electoral process.
4 Research Design, Sample Selection, and Measures
In collaboration with CODEO, we randomly assigned election observers to 1,292 of Ghana’s
26,000 polling stations in the 2012 general elections. We collect data from these 1,292 stations
and from an additional 1,000 randomly selected control stations. We collect identical information
from polling stations with and without observers. (Details appear in Appendix B.)
4.1 Sampling and Treatment Assignment
We implement the project in four of Ghana’s ten regions.3 Almost half of the Ghanaian population
(46.5 percent) resides in our sampled regions. More relevant for the external validity of our study
is the fact that the party system is similar in the six regions other than those included. Although
the four regions where we sample were not selected to be statistically representative of the en-
tire country, we have no reason to believe results would differ significantly had our sample been
national.
We randomly sample 60 (out of 122) political constituencies from the four regions.4 We
construct the sample as follows. First, each region is assigned a target number of sample con-
stituencies based on its proportion of the total 122 constituencies.5 Since each region’s number of
electoral constituencies is determined by the Electoral Commission on the basis of population, this
means the number of constituencies included in the sample from each region makes the regional
sample proportional to population.
3For logistical reasons, we sample only in the south of the country. We exclude the Greater Accra region, the location of Ghana’s capital, because we anticipated that international election observers might focus on the easy-to-reach polling stations there and that their presence could contaminate the treatment.
4Sample size was determined on the basis of power calculations and logistical constraints.
5For example, the largest region we study is Ashanti, which has 47 constituencies, or about 38 percent of the 122
total. We sample 23 constituencies in Ashanti: 23 is approximately 38 percent of the total sample size of 60.
To select constituencies within regions, we block on electoral competitiveness and urban-
ization. We construct a sample with roughly equal numbers of constituencies that vary on these
characteristics. We block on electoral competitiveness because we hypothesized that election fraud
would vary with competitiveness. To generate our indicator of constituency-level electoral com-
petition, we use data from the prior (2008) presidential elections. We define a constituency as
competitive if the vote margin between the top two presidential candidates was less than 10 per-
cent and as uncompetitive otherwise. Constituencies that experienced alternations in the party
winning a majority in the 2008 presidential elections had a 2004 average margin of victory of 12
percent. Therefore, a 10 percent margin is, in the context in which we operate, easily reversible.
We block on urbanization because of the hypothesized relevance of polling station density to the
strategic relocation, or spillover, of malfeasance; spillover is not a focus of the present paper, how-
ever. We code urban and rural constituencies using a measure of polling station density. We define
as urban those constituencies with a higher-than-the-median number of polling stations per square
kilometer (where the median in our sample is 0.14 polling stations per square kilometer) and rural
as those with lower-than-the-median.
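The two blocking indicators can be sketched in code. This is an illustrative sketch only: the function name, input format, and example values are hypothetical, not drawn from the study’s data; only the thresholds (a 10-point margin and the 0.14 stations-per-square-kilometer sample median) come from the text.

```python
# Illustrative coding of the two blocking indicators (hypothetical data).
# A constituency is "competitive" if the 2008 presidential margin between the
# top two candidates was under 10 percentage points, and "urban" if its
# polling-station density exceeds the sample median of 0.14 stations per sq km.

MEDIAN_DENSITY = 0.14  # polling stations per square kilometer (sample median)

def code_constituency(top_share, second_share, stations, area_sq_km):
    """Return (competitive, urban) flags for one constituency."""
    margin = top_share - second_share          # vote margin, in percentage points
    competitive = margin < 10.0
    urban = (stations / area_sq_km) > MEDIAN_DENSITY
    return competitive, urban

# Example: a 6-point margin and 0.25 stations per sq km
print(code_constituency(48.0, 42.0, 50, 200.0))  # (True, True)
```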
Constituency sampling was performed as follows. Within each region, constituencies are
coded as competitive/stronghold and as urban/rural. We select a random sample of constituencies
from each of these four possible combinations (competitive-urban, competitive-rural, stronghold-
urban, stronghold-rural) such that the total number of constituencies sampled from each region
equals its target number. To the extent feasible, we sample equal numbers of constituencies within
regions from each of the four conditions.6
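The proportional, blocked sampling procedure described above can be sketched as follows. This is a minimal illustration under stated assumptions: the input format, the round-robin draw across blocks, and the rounding of regional targets are our simplifications, not the authors’ implementation.

```python
import random
from collections import defaultdict

def sample_constituencies(constituencies, total_sample=60, total_pool=122, seed=1):
    """Proportional-by-region, blocked constituency sample (illustrative sketch).

    `constituencies`: list of dicts with keys name, region, competitive, urban.
    Each region's target is its share of the total pool (e.g. 47/122 * 60 = 23
    for Ashanti); within a region we draw as evenly as possible across the four
    competitive x urban blocks.
    """
    rng = random.Random(seed)
    by_region = defaultdict(list)
    for c in constituencies:
        by_region[c["region"]].append(c)

    sample = []
    for region, pool in sorted(by_region.items()):
        target = round(len(pool) / total_pool * total_sample)
        blocks = defaultdict(list)
        for c in pool:
            blocks[(c["competitive"], c["urban"])].append(c)
        drawn = 0
        # Round-robin over the four blocks until the regional target is met.
        while drawn < target and any(blocks.values()):
            for key in sorted(blocks):
                if blocks[key] and drawn < target:
                    pick = blocks[key].pop(rng.randrange(len(blocks[key])))
                    sample.append(pick)
                    drawn += 1
    return sample
```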
Our units of analysis are individual polling stations, which are nested within the 60 con-
stituencies in our sample. We randomly sample 30 percent of the polling stations in each con-
stituency. We then randomly assign each polling station to either treatment (observer) or control
(no observer). Appendix B further details the experimental design. In Appendix C, we provide evidence that treated and control polling station areas are comparable across developmental, political and ethnic characteristics.
6In some regions, equal numbers of competitive and stronghold constituencies do not exist, narrowing our choices.
In our analyses, we report average treatment effects of election observers.7 Our underly-
ing research design allows us to account for spillover effects when estimating the causal effect
of observers. Spillovers occur when parties shift fraud to control polling stations in response to
an election observer (Ichino and Schündeln, 2012). To study spillover, we implement a random-
ized saturation design (Baird et al., 2014), which assigns varying proportions of polling stations
to treatment in different constituencies. We saturate the constituencies at three rates: in the low
condition, 30 percent of the polling stations in the constituency sample is assigned to treatment;
in the medium condition, 50 percent; and in the high condition, 80 percent. Differences in satu-
ration are used to study spillovers effects.8 In this paper, we do not study spillover effects but (in
Appendix B) we report estimates that incorporate spillover effects. The results there confirm that
incorporating spillover does not change the direct effects that we report in the body of the paper.
The direct results are more easily interpretable than results that incorporate spillover, which is why
we place the latter in an appendix.
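The randomized saturation assignment can be sketched as follows, assuming the three rates given above. The data and function names are ours, and the independent draw of each constituency's condition is a simplification; the actual design presumably balanced constituencies across conditions.

```python
import random

# Hypothetical sketch of a randomized saturation design: each constituency is
# assigned a saturation condition, and that share of its sampled polling
# stations is then randomly assigned an election observer.
SATURATIONS = {"low": 0.30, "medium": 0.50, "high": 0.80}

def assign_treatment(stations_by_constituency, seed=2012):
    rng = random.Random(seed)
    assignment = {}  # station -> (arm, saturation condition)
    for constituency, stations in stations_by_constituency.items():
        condition = rng.choice(list(SATURATIONS))
        n_treated = round(SATURATIONS[condition] * len(stations))
        treated = set(rng.sample(stations, n_treated))
        for s in stations:
            assignment[s] = ("treatment" if s in treated else "control", condition)
    return assignment

# Invented sample: six constituencies with ten sampled stations each.
stations = {f"const{i}": [f"const{i}-ps{j}" for j in range(10)] for i in range(6)}
assignment = assign_treatment(stations)
```

Comparing outcomes at control stations across the low, medium, and high conditions is what identifies spillover in this design.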
4.2 Measuring Machine Breakdown
We gathered data at treated and control polling stations on election day. Enumerators gathered
polling station level election results and completed a questionnaire that CODEO observers use to
report activities at their assigned polling stations. The questionnaire is reproduced in Appendix A.
The questionnaire included the question “Did biometric verification machine fail to function prop-
erly at any point in time?”9 Possible responses were “Yes”/“No.” We use this information to
measure breakdowns of biometric verification machines. The structure of the question allows us
7The project experienced no issues with compliance in administering treatment.
8For details on the randomized saturation design used and the spillover impact of observers on fraud, see Asunka et al. (2015); Baird et al. (2014). We detail the design in Appendix B.
9The questionnaire also asked whether a biometric verification machine was present at the polling station. Ten polling stations in our sample did not have machines, and we drop these from the analysis.
to code every polling station in our sample (treated and control) for whether breakdown of biomet-
ric equipment occurred. We do not know if machines broke down repeatedly in the same polling
place, how long breakdowns lasted, why they occurred, or what was done about them.
Enumerators gathered information on machine malfunction in treatment and control areas
as follows. At treated stations, CODEO observers collected the information we analyze as part
of their official assigned activities. Accordingly, at treated stations, data is a product of direct
observation on the part of the data collector. At control stations, data was collected by enumerators
who interviewed multiple people — party agents representing the two major political parties or
presiding officers — after the polls closed. (Each political party is allowed to designate an official
representative as a party observer; that individual is permitted to remain in the polling station for
election day.) To avoid “observing” control stations, we could not send enumerators to control
stations during the election process itself. Instead, using a data collection instrument identical to
that used by CODEO observers, enumerators collected information from party agents after the
close of the polls. Our enumerators received the same training as CODEO observers and
were accredited by the Election Commission as observers. They were thereby permitted to enter
polling places.
This variation in the data collection processes raises the concern that it may drive the ob-
served causal relation that we report between election observers and machine failures. We have
four reasons to believe our data is valid despite the differences in data collection between treatment
and control stations.
First, reporting differences are likely to bias results against finding that rates of machine
malfunction are lower at treated than control stations. CODEO observers are trained to document
all irregularities that occur at their assigned stations, where they remain for the entire day. Official
observers therefore seem more likely than party representatives to assiduously document events
such as machine malfunction; in addition, the former record events in real time whereas our enu-
merators asked the latter persons to recollect events that had taken place during the preceding ten to
twelve hours. We expect that CODEO observers would thereby record more violations of election
integrity than party agents would report. Instead, our data report rates of machine breakdown that
are twice as high in control stations as in those observed by CODEO.
Second, reporting differences cannot explain the result (presented below) that the extent
of parliamentary electoral competition is strongly associated with machine breakdown. Observers
are randomly assigned within constituencies, and all sampled constituencies have both treated and
control polling stations in them. As a result, data gathered by enumerators at control stations should
not be correlated with constituency-level variables. In addition, the relationship between electoral
competition and machine breakdown is almost identical when we subset the sample into treated and
control stations: thus, differences in data collection methods between treated and control stations
do not affect this finding.
Third, in Appendix D, we verify the entire analysis using a reduced dataset drawn from
only those control polling stations where the enumerators collected identical information from at
least two different individuals. (In the main analysis reported in this paper, we include data from
control polling stations that was collected from a single individual in instances where it proved
impossible to collect data from a second person.) The two respondents were usually affiliated with
different major political parties. We show that even using this more conservative dataset, the results
reported in the paper continue to hold.
Fourth, in Appendix F, we document that spillover effects of election observers on the breakdown of biometric verification machines increase with the saturation of observers in the
constituency. Spillover is measured by assessing differences in rates of breakdown only at control
polling places in constituencies with different observer saturations. Data was collected at control
polling stations by enumerators who all used identical data-collection methods. When we examine
only spillover, as a result, the data is not affected by the differences in data collection methods
that necessarily characterize treatment and control polling stations. If markers of election fraud
rise with observer saturation, this provides indirect confirmation that the measure of fraud is valid.
To see why, note that election observers may reduce negligence and incompetence as well as malfeasance.
However, observers are unlikely to displace poor electoral administration onto nearby polling sta-
tions. Therefore, we argue that evidence of spillover onto nearby polling stations also constitutes
evidence of deliberate displacement of a certain fraud technology. Our finding of spillover thereby
validates our data collection method.
During the course of the election, observer missions reported that they “found no reason to
suspect that the breakdown of the biometric identification mechanism was deliberate” (Economic
Community of West African States, 2012, p. 6). Such reports were necessarily drawn from polling
stations where observers were sent. Our study collects data from unobserved polling stations in
addition to those under observation, and our data are therefore more broadly valid than reports that rely on information collected only at observed stations. We interpret our findings as strongly
suggestive that the breakdown of biometric identification machines was in fact deliberate.
4.3 Measures of Election Fraud
We construct indicators of election fraud that rely on objective information gathered from sample
polling places on election day. By law, ballots must be counted in public at each polling station
after the polls close. This makes it possible to collect polling station level information before it is
aggregated (and potentially tampered with) at higher levels.
We construct three measures of fraud, two of which — the overvoting rate and ballot
stuffing — draw on measures created in Asunka et al. (2015). Registry rigging is original to this
paper. Our first measure, the overvoting rate, is the number of votes cast in the presidential race in excess of the number of voters officially registered at the polling station. We convert this measure
into a share by dividing by the number of registered voters, taken from the Electoral Commission’s
(EC) official figures. Each voter is legally allowed to vote only at the polling station where they
registered. Overvoting is a marker of potential fraud since it suggests that unregistered voters cast
ballots, that double voting occurred, or that vote counts were artificially inflated in some other way.
The measure of overvoting that we construct uses data collected at our sample polling stations on
the numbers of valid votes cast in relation to official Electoral Commission figures on the number
of registered voters at each polling station. The latter figures were released prior to election day.
The number of valid votes cast in each polling station is reported on an official form that is filled
out at the close of day; we collected these figures in sample polling stations. The rate of overvoting
in our 2,310 polling stations ranges from 0.3 percent to more than 250 percent. Of the sample
polling stations, 1,845 reported vote totals that did not exceed the number of registered voters, and
are therefore coded zero for overvoting.10
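Under the definition above, the overvoting rate for a single polling station can be computed as in the following sketch (the helper name and figures are ours):

```python
def overvoting_rate(valid_votes, registered_voters):
    """Votes cast in excess of official Electoral Commission registration,
    as a share of registration; stations whose vote totals do not exceed
    registration are coded zero."""
    if registered_voters <= 0:
        raise ValueError("registered_voters must be positive")
    return max(0, valid_votes - registered_voters) / registered_voters

print(overvoting_rate(480, 500))  # 0.0 (no overvoting)
print(overvoting_rate(550, 500))  # 0.1 (votes exceed registration by 10 percent)
```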
Our second measure of fraud is what we call registry rigging. The official number of
registered voters reported by the Electoral Commission and the number of voters on the paper
rolls at the electoral stations differed in many cases. Discrepancies may arise from the delivery
of false voter rolls to the polling stations or from deliberate manipulation of the voter rolls by
persons at the polling station.11 The differences were unusually common in polling stations where
we identify overvoting, and thus we expect that the local figures will be higher than the Electoral
Commission figures in control stations in order to facilitate fraudulent excessive voting. Indeed, in
control stations the local voter rolls are on average 21 voters larger than the Electoral Commission
figures, while in treatment stations the local voter rolls are only 2 voters larger. The difference
between these means has an associated p-value of 0.004 when accounting for clustered standard
errors.
However, we are unsure of the exact mechanism that leads to variations between Electoral
Commission figures and the local voter rolls. Thus, we code any numerical difference between
Electoral Commission figures and those reported by the polling station as registry rigging. We
10Furthermore, one polling station in the Ho West constituency was coded as zero for overvoting even though turnout there appeared over 700 percent. A clear outlier in terms of turnout, this polling station received hundreds of voters from a nearby polling station after the biometric machine broke down there. Not making this correction increases the magnitude of the treatment effect and shrinks the p-value; the results of this paper are therefore not driven by this decision.
11Overvoting may capture some of this fraudulent behavior, but often turnout will not exceed 100 percent.
code any difference between the two as a one; matching figures from the Electoral Commission
and the local rolls are coded zero. In order to account for the possibility that small differences
between the Electoral Commission figures and the figures on the paper registries are not indica-
tive of the intent to commit fraud but instead are the product of minor administrative errors, we
also demonstrate in the results section that our results are robust to a range of cutoffs for coding
registry rigging as a one. We show that coding any discrepancy in the number of registered vot-
ers as registry rigging produces results similar to limiting the measure to cases where the figures
differed by more than 50. Furthermore, we reproduce our analyses using the continuous measure
of distance between the number of registered voters according to the local registry and the official
Electoral Commission figure in Appendix G. Because the continuous nature of this variable allows
intentional, administrative, and measurement errors all to inflate it, it exhibits high variation. Consequently, the estimates reported in Appendix G are consistent with the rest of our findings but are less precisely estimated.
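The cutoff robustness check can be illustrated with a small sketch. The station figures here are invented, and the use of the absolute difference is our assumption (the text codes "any numerical difference" as rigging):

```python
def registry_rigging(ec_registered, local_roll, cutoff=1):
    """Code rigging as 1 when the local paper roll and the official Electoral
    Commission figure differ by at least `cutoff` voters, else 0."""
    return 1 if abs(local_roll - ec_registered) >= cutoff else 0

# Invented (EC figure, local roll) pairs for four stations:
stations = [(500, 500), (500, 503), (500, 575), (400, 820)]
rates = {c: sum(registry_rigging(ec, roll, c) for ec, roll in stations) / len(stations)
         for c in (1, 50, 100)}
print(rates)  # {1: 0.75, 50: 0.5, 100: 0.25}
```

Raising the cutoff mechanically lowers the measured rigging rate; the robustness claim is that the treatment and competition effects survive across this range.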
Registry rigging, while intimately related to overvoting, captures a slightly different mech-
anism. It highlights the willingness of local officials to change the voter registry rather than indi-
cating a fraudulent outcome (namely, more ballots cast than registered voters). As a result, while
78 percent of polling stations with overvoting were also locations where there was registry rigging,
only 15 percent of the locations where there was registry rigging exhibit overvoting. We could in-
terpret this difference as suggesting more frequent intentions to commit fraud than we capture with
our measure of overvoting.12
Our third fraud measure captures whether the presidential ballot box appears to have been stuffed. The ballot stuffing measure takes a value of one if more ballots were discovered in the ballot box than the number of voters known to have cast ballots and zero otherwise.12 The data were collected using the questionnaire; enumerators responded “Yes”/“No” to the question “Were more ballot papers found in the presidential ballot box than voters who cast ballots?” (See item CE on the questionnaire in Appendix A.) Because votes are counted at each polling station in public at the end of the day, enumerators had direct access to this information.
12Importantly, both overvoting and registry rigging use the number of registered voters reported by the Electoral Commission as a benchmark. This number was fixed before the election took place and observers were randomly assigned to stations. Therefore, as a validation of both of these measures and this benchmark figure, we test for any treatment effect of the observers on the number of registered voters according to the Electoral Commission. The p-value for the difference in means is 0.99, validating this figure as a benchmark.
We analyze this suite of indicators separately because they capture different types of ir-
regularities but represent a similar underlying pattern of electoral malfeasance: attempts to alter
the electoral outcome through election day vote rigging. While overvoting may entail differences in the voter rolls (captured by registry rigging), the two measures are only weakly correlated (r = 0.30). Registry
rigging may capture attempts to overvote that did not push turnout over 100 percent. Nor are over-
voting and ballot stuffing correlated in our sample of polling stations (r = 0.03). These two types
of irregularities almost never occurred in the same polling stations, appearing instead as substitute
types of fraud. One possible explanation for this is that these two irregularities may have been
committed by different types of individuals and in different ways. Overvoting may have occurred
with the complicity of the presiding officer, when individual registered voters were permitted (or
encouraged) to vote more than once, perhaps as party activists escorted them back into line after
they had voted. Ballot stuffing, by contrast, may have occurred when the presiding officer was
inattentive (either deliberately or when distracted), allowing others on the scene to add more ballots to the box.13 The data reported below show that roughly equivalent numbers of polling stations
experienced overvoting and ballot stuffing in our sample.
The measures of election fraud we use are, strictly speaking, relevant exclusively to the
presidential election. Our data collection did not include information that allows us to construct
measures of fraud specific to the parliamentary races that were simultaneously underway. Our
theory of election fraud posits that the extent of competitiveness — which in Ghana varies across
13These two types of irregularities were grouped together by the NPP when the party petitioned Ghana’s Supreme Court to nullify the election results in a lengthy post-election court case. The NPP labeled both “overvoting.” Although the petition was ultimately rejected, four of the nine Supreme Court justices ruled that the NPP’s petition was valid, suggesting that our measures of fraud are broadly in line with what legal authorities in Ghana believe true.
constituencies for the parliamentary but not for the presidential races — affects the likelihood that
political parties engage in fraud. We use the three measures of fraud in the presidential election
just described to proxy generally for the extent of election fraud at the polling station. We assume
that where incentives and opportunities were higher for political parties to commit election fraud
in the parliamentary races, they also committed more fraud in the presidential race. We justify this
with the reminder that votes for the presidency mattered equally regardless of the parliamentary
constituency where they originated.
4.4 Measuring Electoral Competition in the Parliamentary Elections
Our first hypothesis posits that electoral fraud will be concentrated in electorally competitive ar-
eas. Since Ghana’s president is elected in a single nationwide district, the incentives for parties to
win votes in the presidential contest are uniform across parliamentary constituencies. We there-
fore cannot test whether election competition affects election fraud at the level of the presidential
race.14 We instead test this hypothesis with data on electoral competition in constituency-level
parliamentary elections. We label this variable margin.
The rationale behind margin is as follows. Political parties in Ghana, as elsewhere, seek to
maximize seats in the legislature, as well as to win the presidency. If electoral competition creates
incentives for fraud, we are likely to observe more fraud in electorally competitive parliamentary
constituencies. Political parties in Ghana are organized hierarchically, with relatively independent
constituency-level organizations guiding campaign operations (Osei, 2012). These constituency-
level organizations are in the first instance creatures of the member of parliament. The degree of
electoral competition in the parliamentary elections is therefore the primary influence shaping the
incentives of constituency-level party organizations to commit fraud.
14The blocking variable for electoral competition is derived from the prior presidential race, and is therefore an invalid indicator of the incentives parties face to commit election fraud.
We create a continuous measure of the parliamentary vote margin to capture the degree of
electoral competition in each constituency. Recall that the measure of electoral competition used
in blocking was drawn from the prior (2008) presidential not parliamentary race. In Ghana, there
is very little ticket splitting and a party’s margin in the parliamentary race is almost identical to its
constituency-level vote share for the presidency. For the NDC, the correlation in parliamentary and
presidential vote shares across constituencies in 2012 is 0.98 and for the NPP it is 0.97. Although
it hardly matters empirically whether we use the parliamentary margin or presidential vote share,
our theory of incentives for fraud involves parliamentary races. We use results from the prior
(2008) parliamentary elections in Ghana and calculate margin as the difference in the vote shares
between the first- and second-place candidates in each constituency. Smaller values on the margin
variable indicate higher levels of electoral competition, while higher values indicate the reverse.
The average vote margin in the sampled constituencies is 0.31 and the median is 0.19.
The constituencies in our sample display a large range of values on this variable, verifying that
parliamentary competitiveness is highly variable. There are a noticeable number of constituencies
in our sample where the margin in 2008 was close to zero, indicating extremely tight parliamentary
races.
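Computing margin from constituency-level results is straightforward; this sketch (with invented vote shares, and our own handling of uncontested seats) shows the calculation:

```python
def margin(vote_shares):
    """Difference in vote share between the first- and second-place candidates;
    smaller values indicate a more competitive constituency."""
    if len(vote_shares) < 2:
        return 1.0  # assumption: treat an uncontested seat as maximally uncompetitive
    top, runner_up = sorted(vote_shares, reverse=True)[:2]
    return top - runner_up

print(round(margin([0.51, 0.46, 0.03]), 2))  # 0.05
```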
5 Results
Table 1 presents descriptive information about direct rates of machine breakdown and the three
measures of electoral fraud in our experimental and blocking conditions. We report means and
standard deviations. As the data presented in the first row documents, a quarter of the polling
stations in our sample experienced machine breakdowns. The data reported in columns 2 and
3 provide preliminary evidence about the effects of election observers. Machine breakdown oc-
curred at 17 percent of polling stations with a CODEO observer present but at 35 percent of those
without an observer. This is a very large fraction of polling stations and implies an increase in
the rate of breakdown of around 100 percent when an election observer was not present. The re-
maining columns present rates of machine breakdown in blocking environments: competitive and
uncompetitive as well as urban and rural constituencies. We find that machines break down more
frequently in electorally competitive constituencies: 28 percent of polling stations in competitive
areas experience machine breakdown compared with 20 percent in uncompetitive constituencies.
(To repeat, here competitiveness is a dichotomous blocking variable drawn from 2008 presidential
not parliamentary election data; see above.) Rates of breakdown are also 3 percentage points higher
in urban than in rural areas. The data describe an election in which biometric verification machines
broke down more often when an election observer was not present, in competitive constituencies
rather than party strongholds, and in urban rather than rural areas.
Table 1 about here.
The next three rows present the same information for the three indicators of fraud that we
analyze. Registry rigging, which we analyze as a proxy for the intent to commit fraud, characterizes
fully 18 percent of the full sample, and the rate is twice as high at polling stations without an observer as at those with one. However, objectively fraudulent outcomes — proxied by overvoting and ballot stuffing — were much less common, occurring at only 3 to 4 percent of polling
stations. These too were more likely to occur in the absence of an election observer. Overvoting,
ballot stuffing, and registry rigging were each more likely to occur in competitive than in stronghold
constituencies and registry rigging is noticeably more common in urban than rural locations. The
data on fraud is thus consistent with our hypotheses that fraud increases with election competition
and when election observers are not present.
5.1 Experimental Results
We now present results of analyses that come directly from the experimental design. In Table 2,
we report results when we examine the effect of electoral observers on machine malfunction and
the three indicators of fraud. We present average treatment effects (ATE). We use OLS regression
analysis in order to incorporate our blocking variables as covariates and display easily interpretable
quantities.15 (For details on the design and for parallel results that also incorporate spillover, see
Appendix B.)
Table 2 about here.
For each outcome variable — machine breakdown, overvoting rate, registry rigging, and
ballot stuffing — we report two specifications. The first column contains the unadjusted ATE
effects while the second column adjusts for blocking covariates. Both columns report specifications
that use robust standard errors clustered at the constituency level.16 The results with or without
the blocking covariates are substantively identical. Column 1 shows that election observers have
a negative and statistically significant impact on machine breakdown. The size of the effect is
unaltered when we incorporate blocking variables into the model, as shown in Column 2. Rates
of machine breakdown are consistently lower where an election observer is present. The estimated
average treatment effect is very large, with observers reducing rates of machine breakdown by
18 percentage points (and by 11 percentage points when spillover is taken into consideration; see
Table B.1 in Appendix B).
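Because treatment is randomly assigned, the unadjusted ATE in the first column is simply a difference in means, which equals the OLS coefficient on the observer dummy in a regression without covariates. A minimal sketch with invented data (the helper name is ours; clustered standard errors are omitted here):

```python
def average_treatment_effect(outcomes, treated):
    """Unadjusted ATE: difference in mean outcomes between treated
    (observed) and control (unobserved) polling stations."""
    t = [y for y, d in zip(outcomes, treated) if d]
    c = [y for y, d in zip(outcomes, treated) if not d]
    return sum(t) / len(t) - sum(c) / len(c)

# Invented breakdown indicators (1 = machine failed) for six stations:
breakdown = [0, 1, 0, 0, 1, 1]
observer = [1, 1, 1, 0, 0, 0]
print(average_treatment_effect(breakdown, observer))  # about -0.33
```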
Similar to results reported in Asunka et al. (2015), columns 3 through 8 show that election observers have a negative impact on all three measures of election
fraud. Columns 3 and 4 show that the rate of overvoting is 3.6 percentage points lower in observed
polling stations and Columns 5 and 6 show that the prevalence of registry discrepancies is about
12 percentage points lower in observed polling stations.17 Lastly, while observers have a negative
15In Appendix E, Table E.1, we report logistic regressions on the three dichotomous outcomes — breakdown, rigging, and ballot stuffing. Results are substantively unchanged.
16Using nonrobust standard errors yields the same results, although the p-values are smaller, the depressive effect of observers on ballot stuffing becomes significant, and the effect of competitive polling stations on markers of fraud is more clearly positive. This specification can be found in Appendix E, Table E.2.
17The reported operationalization of registry rigging defines any difference between the number of registered voters reported by the Electoral Commission and the rolls at the polling stations as intent to commit fraud. However, it is possible that some discrepancies arise due to simple transcription errors or minor modifications to the voter rolls. For that reason we recode rigging as one when the difference between the Electoral Commission and the polling station figures is greater than or equal to some cutoff, ranging from one, as it is now, to 500. For example, a cutoff of 100 would indicate that only when the two figures differ by greater than or equal to 100 do we indicate registry rigging.
impact on incidences of ballot stuffing, this result is not statistically significant. Overall, there is
some evidence that election observers reduce fraud, and the size of the effect is particularly large
for registry rigging.
The effect of election observers on biometric machine breakdown and the three markers of
fraud is causally identified as a result of the random assignment of observers to polling stations.
However, because we collected data on a wide variety of outcomes and have tested several of
them, it is possible that the statistical significance reported above is an artifact of the number of
specifications we have run. To ensure that our results are not due to multiple comparisons, we
use a Holm correction to adjust the p-values. First, we collect over 43 outcomes from the survey
instrument (replicated in Appendix A), including some of our own construction. Then we run OLS
without blocking covariates and with robust clustered standard errors at the constituency level.
Finally, we collect and order the p-values. The Holm procedure inflates each p-value by multiplying it by the total number of comparisons made minus the rank of the p-value plus one. In essence, this penalizes the most significant result the most, and the less significant results to a lesser degree. This method is uniformly more powerful than the simpler Bonferroni correction (Aickin and Gensler, 1996) while still ensuring that the familywise error rate, or the probability of at least one false rejection, is at most an acceptable level of error, usually specified as 0.05.
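The Holm step-down adjustment can be sketched as follows; this is a generic implementation of the procedure, not the authors' code:

```python
def holm_adjust(pvalues):
    """Holm step-down adjustment: the i-th smallest p-value is multiplied by
    (m - i + 1), monotonicity is enforced, and values are capped at 1."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):            # rank 0 holds the smallest p
        running_max = max(running_max, min(1.0, (m - rank) * pvalues[i]))
        adjusted[i] = running_max
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03]))  # approximately [0.03, 0.06, 0.06]
```

Note that the smallest p-value receives the largest multiplier (m), while the largest receives a multiplier of one; the running maximum keeps the adjusted values monotone in the original ordering.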
Figure 1 depicts the results of this correction. Highlighted on the y-axis are the key vari-
ables of interest and on the x-axis their associated p-value, both before and after correction. None
of the results of our main analysis in Table 2 are changed and all remain significant at the 0.05
level.18 Furthermore, the results are also robust to using the more stringent Bonferroni correction.
For a full list of outcomes included for this correction, see Appendix H. This provides ample and
As reported in Figure G.1, the effect of observers on rigging is negative and statistically significant at the 0.05 level whether the cutoff for fraud versus administrative error is coded as any difference (1) or as a large difference (357).
18We have added a dichotomous version of overvoting that is coded one if overvoting is greater than zero, and zero otherwise. We also include turnout to show that there is a negative treatment effect of observers on turnout, indicating that turnout was probably artificially inflated in control stations.
robust evidence of a causal depressive effect of election observers on biometric machine break-
down as well as the suite of fraud measures that we employ.
Figure 1 about here.
5.2 Non-Experimental Results
We now examine tests of our hypotheses related to electoral competition and party organization.
In Column 2 of Table 2, we see that competitive constituencies are positively associated with ma-
chine breakdown. In our sample, polling stations in competitive constituencies are 7.8 percentage
points more likely to experience machine breakdown than polling stations in party strongholds,
which is how we conceptualize uncompetitive constituencies. Furthermore, the estimated effect of
competitiveness on the three other markers of fraud is consistently positive although not statisti-
cally significant. However, the competitive blocking variable is dichotomous and constructed using
the 2008 presidential election. We also created a continuous measure of competitiveness, margin,
using the 2008 parliamentary election. This variable more accurately captures the incentives that
parties face (we discuss this further in Section 4.4). Table 3 contains identical specifications to
those reported in Table 2 but uses the continuous margin variable instead of the competitive block-
ing variable. Results are largely the same; an increase in the margin of victory is related to a
decrease in fraud on all four indicators, including machine breakdown. Margin is now a
statistically significant predictor of less registry rigging. The results reported in the table are con-
sistent with the hypothesis that competitive constituencies provide incentives for parties to commit
electoral fraud.
Overall, these results document that election observers have a causal effect in reducing
the breakdown of biometric verification machines. We also find that where observers were present,
there was significantly less tampering with the voter registry and a larger likelihood that the official
EC figures on the numbers of voters corresponded to the paper registry on site. Our data do not
allow us to conclusively spell out the causal mechanisms in play, but our results are consistent with
the view that party activists deliberately altered the voter registry more often in polling stations
without a CODEO observer and then deliberately interfered with the operation of the biometric
identification machine. When the machine failed to function, voting continued, but on the basis of
the rigged registry, resulting in turnout above 100 percent, that is, in overvoting.
6 Machine Malfunction and Fraud
There is no administrative or technical reason that election observers should have any significant
impact on the operation of biometric verification machines. CODEO observers were not instructed
in the use of the machines nor were they expected to ensure their operation; indeed, as we have
already indicated, they were instructed not to touch or tamper with equipment in polling stations.
The results thus imply that some substantial fraction of machine breakdown — apparently about
half — was deliberately orchestrated when an election observer was not present.19 This raises
the question of whether machine breakdown was used strategically as an opportunity to commit
election fraud — by encouraging voters to double vote, for instance. To explore this, we next turn
to the effect of machine breakdown on the three markers of fraud.
Studying whether machine breakdown is a significant predictor of election fraud goes be-
yond the research design to explore important but unanticipated results. Our research was designed
to study the impact of election observers on electoral integrity. Election observation was random-
ized, whereas machine breakdown was not. We cannot know with certainty whether associations
that we observe between machine breakdown and other variables, such as proxies for election
fraud, are genuinely causal. Nonetheless, our data provide the opportunity to explore and understand further the unanticipated finding that breakdowns of the biometric verification machines were systematically related to observer absence.

19 If the breakdown rate with no observer present is about 35 percent, as indicated by the value of the constant reported in columns 1 and 2 in Table 2, and the breakdown rate with an observer present is about 17 percent, then the absence of electoral observation about doubles the rate of breakdown. If we assume that all incidents of breakdown when observers were present were accidental, then breakdowns without observers occurred at about twice the accidental rate.
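The back-of-the-envelope calculation in note 19 can be checked with a short script (a sketch using the rounded rates from Table 2; the variable names are illustrative):

```python
# Breakdown rates implied by Table 2: the constant is the rate with no
# observer; adding the treatment effect gives the rate under observation.
rate_no_observer = 0.354          # constant, column 1 of Table 2
treatment_effect = -0.182         # election observer coefficient
rate_with_observer = rate_no_observer + treatment_effect

# If breakdowns under observation are all accidental, the excess under
# no observation is the non-accidental (deliberate) share.
deliberate_share = (rate_no_observer - rate_with_observer) / rate_no_observer

print(round(rate_with_observer, 3))   # 0.172
print(round(deliberate_share, 2))     # 0.51, i.e., about half
```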
In Table 4, we report results of regressions that study the impact of election observers and
machine malfunctions on our three measures of voter fraud. (These results do not take spillover into
account but they do include potentially important control variables. The coefficients do not change
in magnitude or statistical significance in any meaningful way if control variables are omitted.)
In Column 1, we report OLS regression results for the overvoting rate, with machine breakdown,
election observers, and their interaction as regressors. In Columns 2 and 3, we do the same with
logistic regressions for registry rigging and ballot stuffing, respectively.
Table 4 about here.
In all three specifications, the coefficient on machine breakdown is positive. The relationship is statistically significant for registry rigging and
ballot stuffing although not for the rate of overvoting. Because this is an interactive model and the
coefficient on breakdown is the effect in the absence of observers, machine breakdown appears to
be strongly related to fraud when there is no election monitoring. Also, the coefficient on election
observers is negative and statistically significant for the overvoting rate and for rigging. There-
fore, even in the absence of machine breakdown, election observers appear to be important in the
reduction of these activities. Furthermore, although none of the interactive effects are statistically
distinguishable from zero, when we add them to the respective base machine breakdown term, the
effect of machine breakdown on all three markers of fraud becomes statistically indistinguishable
from zero. Analyzing machine breakdowns in observed polling stations would yield no statistically
significant relationship with markers of fraud. Only by including data from unobserved stations
does the relationship between breakdown and markers of fraud become apparent.
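The interactive logic can be made concrete with the reported point estimates. A sketch using Table 4, Column 2 (registry rigging; names are illustrative):

```python
# Point estimates from Table 4, Column 2 (registry rigging).
beta_breakdown = 0.102       # effect of breakdown when no observer is present
beta_interaction = -0.098    # breakdown x observer interaction

# Effect of breakdown when an observer IS present is the sum of the
# base term and the interaction term.
effect_observed = beta_breakdown + beta_interaction
print(round(effect_observed, 3))   # 0.004, essentially zero
```

Note that the standard error of this sum depends on the covariance of the two estimates, which is why significance must be assessed on the combined quantity rather than read off the individual coefficients.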
Jointly, these results demonstrate that both machine breakdown and election observers have
important effects on electoral fraud in Ghana. Observers directly reduce fraud even when there is
no machine breakdown, and they also seem to depress the strong relationship between machine
breakdown and fraud.
To clarify these somewhat complex relationships, Figure 2 shows the same interactions
graphically using bootstrapped means by group. The figure is broken into three panels, one for
each measure of fraud. The y-axis is the proportion of polling stations with fraud for rigging and
ballot stuffing; it is the mean level of overvoting for overvoting rate. The lighter line (with circles)
corresponds to the control polling stations that were unobserved while the darker line (triangles)
corresponds to the treatment — that is, to observed polling stations. In all three panels, the boot-
strapped mean level of fraud for unobserved stations is above the mean level for observed stations,
indicating that observers have a depressive effect on fraud independent of machine breakdown.
Furthermore, the slope of the lighter lines is steeper in all three panels (although almost impercep-
tibly in the overvoting rate panel). This indicates that when machines break down, fraud becomes
even more problematic in the absence of observers. For example, around 3 percent of all stations,
whether observed or not, exhibit ballot stuffing when there is no machine breakdown. When there
is machine breakdown, the percentage of stations exhibiting ballot stuffing jumps to around 12
percent when there is no observer but only to 5 percent when there is an observer at the station.
Figure 2 about here.
In summary, these results highlight that the worst outcomes consistently obtain when the
biometric verification machine breaks down and no election observer is posted at the polling sta-
tion. Election observers have independent effects in reducing fraud when there is a fully functional
biometric identification machine. The breakdowns of biometric identification machines are im-
portant for fraud, and breakdowns are most strongly related to fraud in the absence of election
observers.
Our experimental design allows us to state with confidence that the absence of election
observers is causally related to machine breakdowns. Our results also show that machine break-
downs permitted much higher levels of election fraud to occur when a domestic election observer
was not present than when one was present. These results underscore that machine breakdown is
likely to have been deliberately induced, especially when no election observer was posted to the
polling station, and also that persons on the scene took advantage of breakdowns to commit fraud,
especially when an observer was not present.
7 Interpretations and Conclusions
This paper investigates the malfunction of biometric identification machines during Ghana’s 2012
presidential and parliamentary elections. We document a non-random pattern of breakdowns. Using
a randomized experiment, we find that biometric identification machines broke down significantly
more often and at very high rates when an election observer was not present. These results suggest
that the breakdown of biometric verification machines was in many instances deliberately induced.
Our results also show that fraud was more prevalent where biometric identification machines failed
to operate. Machine breakdowns appear to have been used strategically to increase overvoting and
ballot stuffing. They occurred more frequently where registry rigging had laid the groundwork,
since a rigged registry made it easier for voters to vote more than once or for persons not legally
registered to vote. These processes resulted in turnout rates above 100 percent, which we capture
with the measure of overvoting. We also find, consistent with a theory of fraud in the context of
democratic political competition, that machines were significantly more likely to break down in
constituencies that were more electorally competitive for the parliamentary seat. This corroborates
that fraud increases with electoral pressures on political parties.
We can speculate about how machine breakdowns came about and how they facilitated election
fraud. There was a “natural” rate of breakdown, which appears to have been under 20
percent. This was the rate of breakdown when an election observer was present. It probably
was chiefly due to battery exhaustion, where unfamiliarity with new equipment made it difficult to
keep the machines operational. The additional breakdowns that took place when election observers
were not present may have been deliberately induced by pilfering spare batteries, by exposing the
machine to excessive heat or sunlight, or by rendering any available backup machine non-operational.
The occasional machine may have been stolen outright.20 Breakdowns may also have been induced
when presiding officers exhibited (perhaps strategically) unfamiliarity with the machines, despite
the fact that temporary technical staff from the EC was supposed to be on site to keep the machines
operating. Machine breakdowns could have led to confusion in the polling station, permitting
ballot stuffing to occur as presiding officials were distracted trying to restore equipment. More
frequently, it seems that breakdowns led some polling station officials to allow voting to continue,
despite strict instructions from the EC to the contrary; this was extensively reported by the media
even on election day.21 In the quarter of polling stations without a CODEO observer and where the
paper registry had been tampered with, this facilitated double voting and voting by unregistered
individuals.
Aware of some of the problems that occurred in 2012, Ghana’s Election Commission upgraded the biometric machines in 2014; the upgrade included programming the machines to warn when the batteries were running out.22 Much of the fraud in the 2012
elections appears to have occurred when the batteries were exhausted and the machines froze up,
making reprogramming the biometric verification machines a potentially successful prevention
technique. The extent of electoral fraud that our research shows was associated with biometric
machine failure in 2012 is unlikely to be repeated in the future.
Because biometric identification was used in every polling station in the December 2012 elections, we are unable to assess whether its introduction reduced electoral fraud. However, this is likely to be the case. The fact that twice as many biometric identification machines did not operate uninterruptedly during the election in polling stations without observers suggests that malfunction was deliberately induced. Why would this occur if not to commit election fraud? The utility of functioning biometric identification machines in fraud prevention provides the incentive for individuals to sabotage their operation when no election observer is present. If biometric identification machines did not reduce fraud, we would not observe a non-random pattern of breakdown or a significant association of machine breakdown and election fraud. However, this inference goes beyond what our research was designed to investigate.

20 Reported in “Two Verification Machines Stolen in Tamale Central Constituency,” Ghana Votes 2012, 15:58, December 8, 2012; available at http://ghvotes2012.com/reports/view/133, accessed October 16, 2014.
21 For instance, “Election 2012: Voting still ongoing despite 5:pm deadline,” radioxyzonline.com, General News of Friday, 7 December 2012, http://www.ghanaweb.com/GhanaHomePage/NewsArchive/Election-2012-Voting-still-ongoing-despite-5-pm-deadline-258814 and “Nadowli-Kaleo electoral officers defy EC’s ‘no verification, no vote’ rule,” myjoyonline.com, Politics of Friday, 7 December 2012, http://politics.myjoyonline.com/pages/news/201212/98388.php.
22 “EC Upgrades Biometric Verification Machines,” GhanaWeb, 5 April 2014, http://www.ghanaweb.com/GhanaHomePage/NewsArchive/artikel.php?ID=305315.
Our study cautions that biometric technology is susceptible to manipulation, especially in an initial large-scale rollout and even in a genuinely competitive democracy. In this context, breakdown may be deliberately induced when machines are not monitored by neutral, trained election
observers. The overall legal and political environment is sufficiently relaxed that political party
operatives apparently feel free to take advantage of unmonitored voting to tamper with new and
imperfectly designed equipment. These results carry general implications for the use of biometric
identification technology. Introduction of such equipment reduces fraud, even if we cannot esti-
mate how much fraud is prevented. However, it remains important to use the technology under the
watchful eyes of independent, non-partisan and neutral observers who have no interest in perpet-
uating fraud and who are professionally committed to the practices of good governance. There is
no technical fix to election fraud.
References
Aickin, Mikel and Helen Gensler. 1996. “Adjusting for Multiple Testing when Reporting Research Results: The Bonferroni vs Holm Methods.” American Journal of Public Health 86(5):726–28.

Asunka, Joseph, Sarah Brierley, Miriam A. Golden, Eric Kramon and George Ofosu. 2014. “Protecting the Polls: The Effect of Observers on Election Fraud.” Unpublished paper.

Asunka, Joseph, Sarah Brierley, Miriam A. Golden, Eric Kramon and George Ofosu. 2015. “Protecting the Polls: The Effect of Observers on Election Fraud.” Unpublished paper.

Bader, Max. 2013. “Do New Voting Technologies Prevent Fraud? Evidence from Russia.” Journal of Election Technology and Systems 2(1):1–8.

Baird, Sarah, Aislinn Bohren, Craig McIntosh and Berk Özler. 2014. Designing Experiments to Measure Spillover Effects. Policy Research Working Paper 6824, Development Research Group, Poverty and Inequality Team. Washington, D.C.: The World Bank.

Baland, Jean-Marie and James A. Robinson. 2008. “Land and Power: Theory and Evidence from Chile.” American Economic Review 98(5):1737–65.

Birch, Sarah. 2007. “Electoral Systems and Election Misconduct.” Comparative Political Studies 40(12):1533–56.

Callen, Michael and James D. Long. 2015. “Institutional Corruption and Election Fraud: Evidence from a Field Experiment in Afghanistan.” American Economic Review 105(1):354–81.

Coalition of Domestic Election Observers. 2012. “Manual for CODEO Constituency Supervisors. December 7, 2012 Presidential and Parliamentary Elections.” Training manual. Accra, Ghana: CDD-Ghana.

Coalition of Domestic Election Observers. 2013. “Final Report on Ghana’s 2012 Presidential and Parliamentary Elections.” Accra, Ghana: CDD-Ghana.

Cox, Gary W. and J. Morgan Kousser. 1981. “Turnout and Rural Corruption: New York as a Test Case.” American Journal of Political Science 25(4):646–63.

Darkwa, Linda. 2013. “Ghana’s Elections 2012: Some Observations.”
URL: http://forums.ssrc.org/kujenga-amani/2013/08/15/ghanas-elections-2012-some-observations/fn-468-5

Economic Community of West African States. 2012. “Observation Mission: Ghana 2012, Preliminary Declaration.”

Enikolopov, Ruben, Vasily Korovkin, Maria Petrova, Konstantin Sonin and Alexei Zakharov. 2013. “Field Experiment Estimate of Electoral Fraud in Russian Parliamentary Elections.” Proceedings of the National Academy of Sciences 110(2):448–52.

Fearon, James D. 2011. “Self-Enforcing Democracy.” The Quarterly Journal of Economics 126(4):1661–1708.

Fridy, Kevin. 2007. “The Elephant, Umbrella, and Quarrelling Cocks: Disaggregating Partisanship in Ghana’s Fourth Republic.” African Affairs 106(423):281–305.

Gelb, Alan and Caroline Decker. 2012. “Cash at Your Fingertips: Biometric Technology for Transfers in Developing Countries.” Review of Policy Research 29(1):91–117.

Gelb, Alan and Julia Clark. 2013. Identification for Development: The Biometrics Revolution. Working Paper 315. Washington, D.C.: Center for Global Development.

Gerber, Alan and Donald Green. 2012. Field Experiments: Design, Analysis, and Interpretation. New York: W.W. Norton.

Hyde, Susan D. 2007. “The Observer Effect in International Politics: Evidence From a Natural Experiment.” World Politics 60(1):37–63.

Hyde, Susan D. 2010. “Experimenting in Democracy: International Observers and the 2004 Presidential Elections in Indonesia.” Perspectives on Politics 8(2):511–27.

Hyde, Susan D. 2011. The Pseudo-Democrat’s Dilemma: Why Election Observation Became an International Norm. Ithaca: Cornell University Press.

Ichino, Naomi and Matthias Schündeln. 2012. “Deterring or Displacing Electoral Irregularities? Spillover Effects of Observers in a Randomized Field Experiment in Ghana.” Journal of Politics 74(1):292–307.

Jain, Anil, Lin Hong and Sharath Pankanti. 2000. “Biometric Identification.” Communications of the Association for Computing Machinery (ACM) 43(2):90–98.
URL: http://doi.acm.org/10.1145/328236.328110

Keefer, Philip. 2002. DPI2000: Database of Political Institutions: Changes and Variable Definitions. Technical report. Washington, D.C.: The World Bank.

Kelley, Judith G. 2012. Monitoring Democracy: When International Election Observation Works, and Why It Often Fails. Princeton: Princeton University Press.

Lehoucq, Fabrice Edouard. 2002. Stuffing the Ballot Box: Fraud, Electoral Reform, and Democratization in Costa Rica. New York: Cambridge University Press.

Magaloni, Beatriz. 2006. Voting for Autocracy: Hegemonic Party Survival and Its Demise in Mexico. New York: Cambridge University Press.

Mares, Isabela. 2015. From Open Secrets to Secret Ballots: The Adoption of Electoral Reforms Protecting Voters Against Intimidation. New York: Cambridge University Press.

Molina, Iván and Fabrice Edouard Lehoucq. 1999. “Political Competition and Electoral Fraud: A Latin American Case Study.” Journal of Interdisciplinary History 30(2):199–234.

Morrison, Minion K.C. and Jae Woo Hong. 2006. “Ghana’s Political Parties: How Ethno-Regional Variations Sustain the National Two-Party System.” Journal of Modern African Studies 44(4):623–47.

Muralidharan, Karthik, Paul Niehaus and Sandip Sukhtankar. 2014. Payments Infrastructure and the Performance of Public Programs: Evidence from Biometric Smartcards in India. Working Paper 19999. Cambridge, MA: National Bureau of Economic Research.

Oduro, Franklin. 2012. Preventing Electoral Violence: Lessons from Ghana. In Voting in Fear, ed. Dorina A. Bekoe. Washington, D.C.: United States Institute of Peace.

Osei, Anja. 2012. Party-Voter Linkage in Africa: Ghana and Senegal in Comparative Perspective. Wiesbaden: Springer.

Piccolino, Giulia. 2015. “Infrastructural State Capacity for Democratization? Voter Registration and Identification in Côte d’Ivoire and Ghana Compared.” Democratization 22.

Przeworski, Adam. 2008. Self-Enforcing Democracy. In Oxford Handbook of Political Economy, ed. Barry R. Weingast and Donald Wittman. Oxford Handbooks of Political Science. New York: Oxford University Press.

Simpser, Alberto. 2013. Why Governments and Parties Manipulate Elections: Theory, Practice, and Implications. New York: Cambridge University Press.

Sjoberg, Fredrik M. 2012. “Making Voters Count: Evidence from Field Experiments about the Efficacy of Domestic Election Observation.” Unpublished paper.

Sjoberg, Fredrik M. 2014. “Autocratic Adaptation: The Strategic Use of Transparency and the Persistence of Election Fraud.” Electoral Studies 33:233–45.

Ziblatt, Daniel. 2009. “Shaping Democratic Practice and the Causes of Electoral Fraud: The Case of Nineteenth-Century Germany.” American Political Science Review 103(1):1–21.
Table 1: Descriptive Statistics of Machine Breakdown and Measures of Fraud

                    (1)          (2)        (3)      (4)          (5)         (6)      (7)
                    Full Sample  Treatment  Control  Competitive  Stronghold  Urban    Rural
Machine breakdown   0.234        0.172      0.354    0.276        0.201       0.249    0.216
                    (0.423)      (0.378)    (0.479)  (0.447)      (0.401)     (0.433)  (0.412)
Overvoting rate     0.027        0.015      0.051    0.035        0.022       0.024    0.032
                    (0.194)      (0.152)    (0.255)  (0.210)      (0.182)     (0.180)  (0.209)
Registry rigging    0.177        0.134      0.255    0.198        0.163       0.192    0.161
                    (0.382)      (0.340)    (0.436)  (0.399)      (0.369)     (0.394)  (0.368)
Ballot stuffing     0.043        0.032      0.062    0.056        0.033       0.043    0.042
                    (0.202)      (0.175)    (0.242)  (0.230)      (0.179)     (0.203)  (0.201)
Observations        2047         1281       766      864          1183        1081     966

Notes: Standard deviations in parentheses. Ten polling stations without biometric verification machines removed from sample. Polling stations where enumerators recorded different responses from two party officials on relevant variables are dropped. In Appendix D, Table D.1, we report the same table but only include observations where enumerators recorded identical answers from both party officials.
Table 2: Average Treatment Effects of Election Observers on Machine Breakdown and Fraud

                    Breakdown             Overvoting rate       Registry rigging      Ballot stuffing
                    (1)        (2)        (3)        (4)        (5)        (6)        (7)       (8)
Election observer   −0.182***  −0.181***  −0.036***  −0.037***  −0.122***  −0.121***  −0.031    −0.031
                    (0.037)    (0.038)    (0.010)    (0.010)    (0.025)    (0.026)    (0.027)   (0.028)
Competitive                    0.078*                0.013                 0.039                0.023
                               (0.039)               (0.009)               (0.023)              (0.019)
Urban                          0.030                 −0.008                0.026                0.001
                               (0.036)               (0.009)               (0.022)              (0.018)
Constant            0.354***   0.304***   0.051***   0.050***   0.255***   0.224***   0.062*    0.052
                    (0.035)    (0.039)    (0.010)    (0.012)    (0.025)    (0.028)    (0.026)   (0.027)
Observations        1,888      1,888      1,864      1,864      1,973      1,973      1,987     1,987
R2                  0.042      0.051      0.008      0.009      0.023      0.027      0.005     0.009

Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS with robust standard errors clustered by constituency in parentheses. Ten polling stations without biometric verification machines removed from sample of machine breakdown. Polling stations where enumerators recorded different responses from two party officials on relevant variables are dropped. In Appendix D, Table D.2, we report the same table but only include observations where enumerators recorded identical answers from both party officials. Logistic models for dichotomous outcome variables are reported in Appendix E, Table E.1. Results are unchanged in both alternative specifications.
Table 3: Effect of Election Observers and Competitiveness on Machine Breakdown and Fraud

                    Breakdown   Overvoting rate   Registry rigging   Ballot stuffing
                    (1)         (2)               (3)                (4)
Election observer   −0.176***   −0.036***         −0.118***          −0.030
                    (0.038)     (0.010)           (0.025)            (0.027)
Urban               0.024       −0.009            0.024              −0.001
                    (0.036)     (0.009)           (0.022)            (0.020)
Margin              −0.157**    −0.012            −0.088*            −0.035
                    (0.058)     (0.017)           (0.038)            (0.023)
Constant            0.386***    0.060***          0.268***           0.073
                    (0.052)     (0.013)           (0.028)            (0.039)
Observations        1,888       1,864             1,973              1,987
R2                  0.053       0.009             0.028              0.008

Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS with robust standard errors clustered by constituency in parentheses. Ten polling stations without biometric verification machines removed from sample of machine breakdown. Polling stations where enumerators recorded different responses from two party officials on relevant variables are dropped. In Appendix D, Table D.3, we report the same table but only include observations where enumerators recorded identical answers from both party officials. The results are substantively the same.
Table 4: Interactive Effect of Machine Breakdown and Election Observers on Fraud

                      Overvoting rate   Registry rigging   Ballot stuffing
                      OLS               OLS                logistic
                      (1)               (2)                (3)
Machine breakdown     0.027             0.102*             1.102**
                      (0.023)           (0.044)            (0.350)
Election observer     −0.024**          −0.093**           −0.432
                      (0.009)           (0.033)            (0.513)
Competitive           0.012             0.044              0.490
                      (0.009)           (0.023)            (0.369)
Urban                 −0.004            0.024              −0.039
                      (0.009)           (0.022)            (0.394)
Breakdown X observer  −0.009            −0.098             −0.487
                      (0.029)           (0.050)            (0.479)
Constant              0.033**           0.190***           −3.357***
                      (0.010)           (0.033)            (0.453)
Observations          1,745             1,830              1,848

Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS in Column 1 and logistic regression in Columns 2 and 3, with robust standard errors clustered by constituency in parentheses. Ten polling stations without biometric verification machines removed from sample of machine breakdown. Polling stations where enumerators recorded different responses from two party officials on relevant variables are dropped. In Appendix D, Table D.4, we report the same table but only include observations where enumerators recorded identical answers from both party officials. There is very little change, although the effects on overvoting grow smaller and the effect of machine breakdown on rigging is no longer significant.
Figure 1: Holm Corrected P-Values

[Figure: dot plot comparing traditional and Holm-corrected p-values (x-axis, 0.0 to 1.0) for each outcome listed on the y-axis: machine breakdown, registry rigging, overvoting dummy, overvoting rate, turnout, and ballot stuffing.]

Note: Listed on the y-axis are the four focal outcomes, as well as turnout and a dummy variable for overvoting (1 if it exists, 0 otherwise). The dashed vertical line is at 0.01 and the solid horizontal line is at 0.05. The full list of outcome variables can be found in Table H.1.
Figure 2: Bootstrapped Mean Levels of Fraud by Machine Breakdown and Observer

[Figure: three panels (overvoting rate, registry rigging, ballot stuffing) plotting bootstrapped mean levels of fraud against machine breakdown (No/Yes on the x-axis), with separate lines for polling stations with and without an observer.]

Note: Machine breakdown is on the x-axis and the different colors and shapes indicate different observer statuses. On the y-axis are mean levels of fraud; this is the mean level of overvoting for overvoting rate and the proportion of polling stations with indicators of fraud for registry rigging and ballot stuffing. Standard errors and mean values computed using 1,000 bootstrapped samples.
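The bootstrapped group means and standard errors reported in the figure can be computed with a standard nonparametric bootstrap. A minimal sketch (the function name and example data are illustrative, not taken from our replication files):

```python
import random

def bootstrap_mean(values, n_boot=1000, seed=1):
    """Bootstrap the mean of `values`: resample with replacement,
    recompute the mean each time, and return the bootstrap mean and
    standard error (std. dev. of the bootstrap distribution)."""
    random.seed(seed)
    n = len(values)
    means = []
    for _ in range(n_boot):
        resample = [random.choice(values) for _ in range(n)]
        means.append(sum(resample) / n)
    grand_mean = sum(means) / n_boot
    se = (sum((m - grand_mean) ** 2 for m in means) / (n_boot - 1)) ** 0.5
    return grand_mean, se

# Illustrative data: a 0/1 ballot stuffing indicator for one group of
# stations, with roughly 3 percent of stations exhibiting fraud.
stuffing = [0] * 97 + [1] * 3
mean, se = bootstrap_mean(stuffing)
```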
Appendices

A Survey Instrument
Figure A.1: Data Collection Instrument Used in Sampled Polling Stations

CODEO (UCLA) OBSERVER CHECKLIST – 07 December 2012 Elections

Enumerator Name: ____  Constituency: AFIGYA SEKYERE EAST  Region: ASHANTI  Electoral Area: AMANGOASE  Polling Station Name: WIAMOASE SAL ARMY JHS  Polling Station Code: F353001

ARRIVAL
AA. Were any election officials present upon arrival? Yes (1) / No (2)

SETUP
AB. At what time did voting commence? By 7:15 (1) / 7:16 to 8:00 (2) / 8:01 to 10:00 (3) / After 10:00 (4) / Never Opened (5)
AC. Were you permitted to observe? Yes (1) / No (2)
AD. Was the polling station set up so that voters could mark their ballots in secret? Yes (1) / No (2)
AE. Was the polling station accessible to persons with disabilities and the elderly? Yes (1) / No (2)
AF. Was a polling agent present for NDC? Yes (1) / No (2)
AG. Was a polling agent present for NPP? Yes (1) / No (2)
AH. Which other political party agents were present? (Tick one or more) None (0) / CPP (1) / GFP (2) / NDP (3) / GCPP (4) / PNC (5) / PPP (6) / UFP (7) / URP (8) / INDP (9)
AJ. Were security personnel present? Yes (1) / No (2)
AK. Which items, if any, were missing? (Tick one or more) None (0) / Ballot Box (1) / Ballot Paper (2) / Voters’ Register (3) / Indelible Ink (4) / Voting Screen (5) / Validating Stamp (6) / Endorsing Ink (7) / Ink Pad (8) / Tactile Ballot (9)
AM. Did the polling station have a biometric verification machine? Yes (1) / No (2)
AN. Number of names on the voters register? (Enter zero if no voters register)
AP. Number of names on the proxy voters list? (Enter zero if no proxy register)
AQ. Number of presidential ballot papers? (Enter zero if none)
AR. Were the presidential and parliamentary ballot boxes shown to be empty, sealed and placed in public view? Yes (1) / No (2)

VOTING
BA. Were people’s biometric registration information verified? Yes (1) / No (2)
BB. Did biometric verification machine fail to function properly at any point in time? Yes (1) / No (2)
BC. Were people’s fingers marked with indelible ink before voting? Yes (1) / No (2)
BD. Were ballot papers stamped with the validating stamp before being issued? Yes (1) / No (2)
BE. Were any unauthorised person permitted to remain in the polling station during voting? Yes (1) / No (2)
BF. Did anyone attempt to harass or intimidate voters or polling officials during voting? Yes (1) / No (2)
(For Questions BG, BH, BJ, BK and BM: None = 0, Few = 1 to 5, Some = 6 to 15 and Many = 16 or more)
BG. How many people with Voter ID Cards were not permitted to vote? None (0) / Few (1) / Some (2) / Many (3)
BH. How many people were permitted to vote without voter ID cards? None (0) / Few (1) / Some (2) / Many (3)
BJ. How many people were permitted to vote whose names did not appear on the voters’ register? None (0) / Few (1) / Some (2) / Many (3)
BK. How many people were permitted to vote whose biometric details did not match? None (0) / Few (1) / Some (2) / Many (3)
BM. How many people were assisted to vote (blind, disabled, elderly, etc)? None (0) / Few (1) / Some (2) / Many (3)
BN. Were assisted voters allowed to have a person of their own choice to assist them to vote? Yes (0) / No (1) / No Assisted Voters (3)
BP. Overall, how would you describe any problems that may have occurred during the voting process? Major (1) / Minor (2) / None (3)

CLOSING & COUNTING
CA. How many people were in queue at 5:00 pm? None (0) / Few (1) / Some (2) / Many (3)
CB. Was everyone who was in the queue at 5:00 pm permitted to vote? Yes (1) / No (2)
CC. Was anyone who arrived at the polling station after 5:00 pm permitted to vote? Yes (1) / No (2) / No One Arrived After 5pm (3)
CD. Did anyone attempt to harass or intimidate polling officials during counting? Yes (1) / No (2)
CE. Were more ballot papers found in the presidential ballot box than voters who cast ballots? Yes (1) / No (2)
CF. Did any polling agent request a recount of the presidential ballots? Yes (1) / No (2)
CG. Did an NDC polling agent sign the declaration of results for the presidential election? Yes (1) / No (2) / No NDC Agent Present (3)
CH. Did an NPP polling agent sign the declaration of results for the presidential election? Yes (1) / No (2) / No NPP Agent Present (3)
CJ. Did any other polling agent present sign the declaration of results for the presidential election? Yes (1) / No (2) / No Other Agents Present (3)
CK. Do you agree with the vote count for the presidential election? Yes (1) / No (2)
CM. Did all polling agents present sign the declaration of results for the parliamentary election? Yes (1) / No (2)
CN. Do you agree with the vote count for the parliamentary election? Yes (1) / No (2)

PRESIDENTIAL VOTE COUNT
DA. Spoilt ballot papers
DB. Rejected ballot papers
DC. Total Valid Votes
DD. NDC (John Dramani Mahama)
DE. GCPP (Henry Hebert Lartey)
DF. NPP (Nana Addo Dankwa Akufo-Addo)
DG. PPP (Papa Kwesi Ndoum)
DH. UFP (Akwasi Addai Odike)
DJ. PNC (Ayariga Hassan)
DK. CPP (Michael Abu Sakara Forster)
DM. INDP (Jacob Osei Yeboah)

Who was interviewed? 1) Party Agent (specify) ____ / 2) EC Official / 3) Security Personnel

Enumerator First Name ____  Surname ____  Arrival Time ____  Departure Time ____  Signature ____
B Details of the Experimental Design
We implement a “randomized saturation” experimental design (Baird et al., 2014). The advantage
of the randomized saturation design is that it allows us to estimate the causal effect of election
observers while including in the estimates their potential “spillover” effects. Spillover effects
occur when the treatment status of one unit impacts outcomes at other units (Gerber and Green,
2012): in our case, when the deployment of an observer to one polling station influences election
integrity at other polling stations (because the observer “pushes” election fraud to unobserved
polling stations).
The design involves a two-stage randomization process: in our case, first at the constituency
level and then at the polling station level. In the first stage, we assign constituencies to an observer
“saturation” treatment. Saturation is defined as the proportion of polling stations within a con-
stituency that is monitored by observers. In the second stage, we randomly assign observers to
polling stations within the sample of constituencies.
In the first stage, we randomly assign each constituency to one of three saturations: low,
medium, and high. In the low condition, observers are deployed to 30 percent of sample polling
stations in the constituency. In the medium condition, we treat 50 percent of sample polling sta-
tions. In the high condition, we treat 80 percent of sample polling stations.23 In the second stage
of our randomization process, we randomly assign individual polling stations to treated (observed)
or control (unobserved) status. The proportion of polling stations randomly assigned to treatment
within a constituency is determined by the randomly assigned saturation level in the first stage.
The approach yields a 3 × 2 experimental design. In total, we send observers to 1,292 polling
stations across 60 constituencies in the sample.
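The two-stage assignment can be sketched in code as follows (a simplified illustration under the stated saturation shares and assignment probabilities; the function and identifiers are hypothetical, not from our implementation):

```python
import random

random.seed(2012)

SATURATIONS = {"low": 0.3, "medium": 0.5, "high": 0.8}
# Constituency treatments are assigned with probabilities 0.2 / 0.4 / 0.4
# (see note 23): the low condition is deliberately under-assigned.
LEVELS, PROBS = ["low", "medium", "high"], [0.2, 0.4, 0.4]

def assign(constituencies):
    """Stage 1: draw a saturation level for each constituency.
    Stage 2: randomly treat that share of its sample polling stations."""
    assignment = {}
    for name, stations in constituencies.items():
        level = random.choices(LEVELS, weights=PROBS)[0]
        n_treated = round(SATURATIONS[level] * len(stations))
        treated = set(random.sample(stations, n_treated))
        assignment[name] = {"level": level,
                            "treated": treated,
                            "control": set(stations) - treated}
    return assignment

# Hypothetical example: two constituencies with ten sample stations each.
sample = {"C1": [f"C1-{i}" for i in range(10)],
          "C2": [f"C2-{i}" for i in range(10)]}
result = assign(sample)
```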
23 In other research (Asunka et al., 2014), we seek to identify the spillover effects of observers on fraud in polling stations that are not under observation. The estimation of spillover effects relies on comparisons of control units in each of the three constituency-level conditions. Since by definition there are relatively few control stations in the higher saturation constituencies, we assign the constituency treatments with a probability of 20 percent for the low condition and 40 percent for the medium and high conditions. This increases the statistical power to detect spillover effects. Such spillovers are not the focus of the present study.
In our experimental framework, potential outcomes are determined by the polling station's treatment status and the treatment condition of each station's constituency. Potential outcomes can be written as follows:

Y_{ij}(T_{ij}, S_j)   (B.1)

where Y_{ij} is one of the indices of election integrity (such as ballot stuffing or overvoting) at polling station i in constituency j. T_{ij} indicates treatment status at polling station i in constituency j (T_{ij} = 1 if an observer is present, and 0 otherwise). The constituency-level treatment status is indicated by S_j, where S_j = s and s ∈ {low, medium, high}.
To account for spillover in our estimation of causal effects, we compare outcomes in treated polling stations to outcomes in control polling stations in the low saturation constituencies. Because treatment saturation in the low condition constituencies is relatively low, the control polling stations in those constituencies are the least likely to be affected by spillover effects. Comparing outcomes in treated polling stations only to these low condition control stations should therefore generate less biased estimates of observers’ causal effects.24 To estimate the average treatment effect of election observers, we define a dummy variable, W_{ij}, which takes a value of 1 if the unit is a control polling station located in one of the medium or high saturation constituencies (following Baird et al., 2014), and estimate the following regression model:

Y_{ij} = β_0 + β_1 T_{ij} + β_2 W_{ij} + ε_{ij}   (B.2)
24Ideally, we would have implemented the study with “pure” control polling stations. Pure control units are untreated units that are not susceptible to spillover effects because there are no treated units in the same constituency (or local area). In our study, no control units were assigned to this pure control status. This decision was driven solely by practical considerations. Given CODEO’s mission, which is to deter electoral malfeasance and enhance the quality of elections across the country, we were unable to create constituencies in which no observers were present. It is an important part of CODEO’s mission to be present in all regions and constituencies of the country, in part so that the organization maintains credibility as an impartial observer. Therefore, we use control stations in low saturation constituencies as our main comparison set.
Here, β_1 provides the estimate of the average treatment effect. It compares outcomes in all treated polling stations to outcomes in control stations in the low saturation constituencies. We cluster standard errors by constituency because the saturation treatments are assigned at that level.
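As a sketch, the regression in Eq. (B.2) can be estimated by ordinary least squares on simulated data. Everything below is illustrative: the sample size, treatment shares, and coefficients (0.30, −0.18, 0.09) are placeholders rather than the paper's estimates, and the constituency-clustered standard errors are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
T = rng.integers(0, 2, n)                       # observer present at station
W = np.where(T == 0, rng.integers(0, 2, n), 0)  # control in med/high saturation
y = 0.30 - 0.18 * T + 0.09 * W + rng.normal(0, 0.1, n)  # hypothetical outcome

# OLS: y = b0 + b1*T + b2*W + e; b1 is the ATE relative to
# control stations in low-saturation constituencies.
X = np.column_stack([np.ones(n), T, W])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
```

With the spillover dummy W included, b1 avoids contaminating the comparison group with controls that may themselves have been affected by nearby observers.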
In Table 2, we reported average treatment effects. In Table B.1 we report results of the
same regressions including spillover effects.
Table B.1: Average Treatment Effects of Election Observers on Machine Breakdown and Fraud, Incorporating Spillover
Breakdown Overvoting rate Registry rigging Ballot stuffing
(1) (2) (3) (4) (5) (6) (7) (8)
Election observer   −0.113∗   −0.109∗   −0.026    −0.028    −0.087∗   −0.084∗   −0.005    −0.005
                    (0.048)   (0.052)   (0.015)   (0.015)   (0.040)   (0.039)   (0.026)   (0.028)
Competitive                    0.078∗              0.013               0.039               0.023
                              (0.037)             (0.009)             (0.023)             (0.019)
Urban                          0.034              −0.008               0.029               0.003
                              (0.034)             (0.009)             (0.023)             (0.017)
Spillover            0.095     0.099     0.014     0.012     0.048     0.051     0.035     0.035
                    (0.062)   (0.065)   (0.019)   (0.018)   (0.050)   (0.049)   (0.042)   (0.041)
Constant             0.285∗∗∗  0.230∗∗∗  0.041∗∗   0.041∗∗   0.221∗∗∗  0.186∗∗∗  0.036     0.025
                    (0.044)   (0.054)   (0.015)   (0.016)   (0.039)   (0.045)   (0.025)   (0.027)
Observations         1,888     1,888     1,864     1,864     1,973     1,973     1,987     1,987
R2                   0.045     0.054     0.008     0.010     0.024     0.028     0.007     0.011
Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS with robust standard errors clustered by constituency in parentheses. Eleven polling stations without biometric verification machines removed from the machine breakdown sample. Polling stations where enumerators recorded different responses from two party officials on relevant variables are dropped.
C Covariate Balance Tests
In this section, we present data showing balance on various dimensions for treated and control
polling stations. We use data from a household survey we conducted in the communities near
observed and unobserved polling places during the two days following the elections.25 As part of
the survey, we gathered data on voting behavior in the prior 2008 election as well as measures of
socio-economic conditions and ethnic self-identification.
Table C.1 presents means in control and observed communities on a number of pre-election
covariates. It also presents the difference in these means and the p-value of a two-tailed difference-
of-means test. The first section of the table shows that the partisan voting histories of residents near observed and unobserved polling stations are comparable. In both sets of communities, about 35 percent report voting for the NPP in the 2008 presidential election, while about 43 percent report voting for the NDC, whose candidate won that election. The remaining sections of the table examine measures of education, poverty and well-being, and ethnicity. Observed and control polling stations are also similar along these dimensions. The data presented in the table show that the communities surrounding our observed and control polling stations are comparable across a range of political, ethnic, and socio-economic characteristics that could potentially affect the level of election fraud.
25We surveyed over 6,000 Ghanaians. Ideally, we would have randomly sampled individuals from the official voter register. Because this was not available, we employed the random sampling techniques used across Africa by the Afrobarometer public opinion survey. Our enumerators visited each of our approximately 2,000 sampled polling places and then selected four households using a random walk technique.
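The balance checks in Table C.1 rest on two-tailed difference-of-means tests. They can be sketched as below, using a normal approximation to the two-sample (Welch) t statistic, which is defensible at these sample sizes; the toy 0/1 samples stand in for the survey responses and are not the actual data.

```python
from statistics import NormalDist, mean, stdev

def diff_of_means_p(a, b):
    """Two-tailed p-value for the difference in means of samples a and b,
    using the Welch standard error and a normal approximation."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Illustrative binary responses: 36.0% vs 36.8% reporting a given vote choice.
treated = [0.0] * 640 + [1.0] * 360
control = [0.0] * 632 + [1.0] * 368
p = diff_of_means_p(treated, control)  # well above 0.05: no imbalance detected
```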
Table C.1: Polling Station (Unit) Level Covariate Balance

                                 Mean Treatment   Mean Control   Difference   P-Value
NPP Presidential Vote 2008            0.359           0.368        -0.009       0.562
NDC Presidential Vote 2008            0.425           0.426        -0.000       0.975
NPP Parliamentary Vote 2008           0.359           0.391        -0.032       0.034
NDC Parliamentary Vote 2008           0.408           0.401         0.008       0.614

Poverty index                         0.956           0.981        -0.025       0.151
Electricity                           1.117           1.156        -0.039       0.104
Medicine                              0.896           0.886         0.010       0.659
Sufficient Food                       0.840           0.879        -0.039       0.106
Cash Income                           0.970           1.002        -0.031       0.143

No Formal Schooling                   0.145           0.145         0.000       0.987
Completed Primary Schooling           0.716           0.698         0.018       0.206
Post Primary Schooling                0.543           0.522         0.021       0.184
Formal House                          0.181           0.177         0.004       0.737
Concrete Permanent House              0.427           0.416         0.011       0.463
Concrete and Mud House                0.218           0.219        -0.001       0.952
Mud House                             0.168           0.181        -0.014       0.242

Akan                                  0.685           0.699        -0.013       0.350
Ga                                    0.021           0.018         0.002       0.614
Ewe                                   0.220           0.203         0.016       0.201
Other, Refuse, or Don’t Know          0.074           0.079        -0.005       0.546

Notes: P-values calculated from two-tailed difference-of-means tests. Poverty index constructed by adding responses to the following items and dividing by the total number of items: How often did you go without the following in the past year, where the items are cash income; sufficient food; medicine; and electricity. Responses were: Never (0), Occasionally (1), and Most of the time (2).
D Data Collection and Robustness
In this section, we describe features of the data collection and analysis that led us to undertake
specific robustness checks. We present the results of the robustness checks and verify that the
results reported in the body of the paper remain unchanged.
Data were collected on election day from sampled polling stations by enumerators using the
questionnaire that appears in Appendix A. In treated areas, enumerators acted as election observers
for the whole of the day. These observer/enumerators remained in a single randomly selected
polling station through the vote count. They recorded the information reported and analyzed here
by observing events at the polling station as well as the vote count. Due to logistical challenges and
because we wanted to avoid treating control stations by sending enumerators to observe activities
throughout the course of election day, other enumerators were assigned to visit three to four control
polling stations after the polls had closed at 17:00 and, using identical questionnaires, to collect the
same information. They were instructed to collect information from two persons, ideally the two
official representatives of the major political parties in each polling place. These representatives
typically gather similar information to report to central party offices. If enumerators could not
speak with a party representative because the person was unavailable, enumerators were instructed
to collect the information from the presiding officer. Members of both groups of enumerators
received identical training, were officially designated CODEO election observers, and received
appropriate identification materials that permitted them entry into polling stations.
The analysis reported in this paper relies on four pieces of information collected by enu-
merators: (1) the number of rejected ballots; (2) the number of valid votes; (3) whether more ballot
papers were found in the presidential ballot box than had been cast; and (4) whether at any point during the day the biometric verification machine malfunctioned. In addition, we use data supplied by the Electoral Commission on the number of registered voters at each polling station.
The analysis reported in the main body of the paper uses all observations where enumerators were able to collect data from only one party official, as well as all observations where the
enumerator was able to reach both party officials and the answers were identical along the vari-
ables of interest. This means the main analysis drops only observations where the party officials
explicitly disagreed. In the robustness tests presented below, we drop observations that fail to
meet one of two criteria: the enumerator collected the information through direct observation (i.e.
the polling station was a treated unit) or the enumerator collected identical information from two
separate respondents. Thus, observations with only one party official’s responses are no longer
included.
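The two inclusion rules can be made concrete with a small sketch; the record format and function names are hypothetical. Treated stations, where enumerators observed directly, are always retained; for control stations the rules differ in how they handle a single response.

```python
def keep_main(r1, r2):
    """Main analysis: drop a record only when two officials explicitly
    disagree; a single response (the other marked None) is retained."""
    if r1 is None or r2 is None:
        return True
    return r1 == r2

def keep_robust(r1, r2):
    """Robustness check: require two identical responses."""
    return r1 is not None and r2 is not None and r1 == r2

records = [(1, 1), (1, None), (1, 0)]        # (official 1, official 2)
main = [keep_main(*r) for r in records]      # [True, True, False]
robust = [keep_robust(*r) for r in records]  # [True, False, False]
```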
Table D.1 and Table D.2 replicate the main analyses from the Results section using this smaller dataset. All the results remain stable.
Table D.1: Descriptive Statistics of Machine Breakdown and Measures of Fraud, Smaller Dataset

                      (1)          (2)        (3)        (4)          (5)        (6)      (7)
                  Full Sample  Treatment   Control  Competitive  Stronghold   Urban    Rural
Machine breakdown    0.223       0.172      0.335      0.264        0.192      0.238    0.206
                    (0.416)     (0.378)    (0.472)    (0.441)      (0.394)    (0.426)  (0.404)
Overvoting rate      0.025       0.015      0.045      0.029        0.021      0.021    0.028
                    (0.185)     (0.152)    (0.242)    (0.188)      (0.184)    (0.171)  (0.200)
Registry rigging     0.172       0.134      0.248      0.190        0.159      0.185    0.157
                    (0.377)     (0.340)    (0.432)    (0.392)      (0.366)    (0.389)  (0.364)
Ballot stuffing      0.041       0.032      0.060      0.053        0.033      0.043    0.039
                    (0.199)     (0.175)    (0.238)    (0.224)      (0.178)    (0.203)  (0.194)
Observations         1988        1281       707        831          1157       1051     937
Notes: Standard deviations in parentheses. Ten polling stations without biometric verification machines removed from sample. Polling stations where enumerators recorded different responses from two party officials on relevant variables, or responses from only one party official, are dropped.
Table D.2: Average Treatment Effects of Election Observers on Machine Breakdown and Fraud, Smaller Dataset
Breakdown Overvoting rate Registry rigging Ballot stuffing
(1) (2) (3) (4) (5) (6) (7) (8)
Election observer   −0.163∗∗∗  −0.163∗∗∗  −0.030∗∗  −0.031∗∗  −0.115∗∗∗  −0.114∗∗∗  −0.029   −0.029
                    (0.040)    (0.041)    (0.011)   (0.011)   (0.025)    (0.025)    (0.029)  (0.030)
Competitive                     0.076                0.008                0.035               0.022
                               (0.039)              (0.009)              (0.023)             (0.020)
Urban                           0.028               −0.008                0.024               0.004
                               (0.035)              (0.009)              (0.022)             (0.019)
Constant             0.335∗∗∗   0.288∗∗∗   0.045∗∗∗  0.047∗∗∗   0.248∗∗∗   0.221∗∗∗   0.060∗   0.050
                    (0.037)    (0.041)    (0.010)   (0.012)   (0.025)    (0.029)    (0.028)  (0.029)
Observations         1,814      1,814      1,786     1,786     1,896      1,896      1,895    1,895
R2                   0.033      0.042      0.006     0.007     0.021      0.024      0.005    0.008
Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS with robust standard errors clustered by constituency in parentheses. Ten polling stations without biometric verification machines removed from the machine breakdown sample. Polling stations where enumerators recorded different responses from two party officials on relevant variables, or responses from only one party official, are dropped.
Table D.3: Effect of Election Observers and Competitiveness on Machine Breakdown and Fraud, Smaller Dataset
Breakdown Overvoting rate Registry rigging Ballot stuffing
(1) (2) (3) (4)
Election Observer   −0.157∗∗∗  −0.030∗∗  −0.111∗∗∗  −0.028
                    (0.041)    (0.011)   (0.025)    (0.029)
Urban                0.023     −0.009     0.022      0.002
                    (0.035)    (0.009)   (0.022)    (0.020)
Margin              −0.159∗∗   −0.005    −0.082∗    −0.035
                    (0.056)    (0.017)   (0.039)    (0.024)
Constant             0.368∗∗∗   0.052∗∗∗  0.261∗∗∗   0.069
                    (0.054)    (0.014)   (0.028)    (0.042)
Observations         1,814      1,786     1,896      1,895
R2                   0.045      0.006     0.025      0.007
Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS with robust standard errors clustered by constituency in parentheses. Ten polling stations without biometric verification machines removed from the machine breakdown sample. Polling stations where enumerators recorded different responses from two party officials on relevant variables, or responses from only one party official, are dropped.
Table D.4: Interactive Effect of Machine Breakdown and Election Observers on Fraud, Smaller Dataset
Overvoting rate Registry rigging Ballot stuffing
OLS logistic logistic
(1) (2) (3)
Machine breakdown 0.001 0.388 1.290∗∗∗
(0.020) (0.235) (0.333)
Election observer     −0.025∗   −0.674∗∗   −0.342
                      (0.010)   (0.206)    (0.564)
Competitive            0.007     0.302      0.500
                      (0.010)   (0.164)    (0.392)
Urban                 −0.004     0.158     −0.001
                      (0.009)   (0.161)    (0.418)
Breakdown X observer   0.018    −0.341     −0.680
                      (0.027)   (0.345)    (0.458)
Constant 0.036∗∗∗ −1.472∗∗∗ −3.470∗∗∗
(0.011) (0.228) (0.508)
Observations 1,676 1,759 1,766
Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS in Column 1 and logistic regression in Columns 2 and 3, with robust standard errors clustered by constituency in parentheses. Ten polling stations without biometric verification machines removed from the machine breakdown sample. Polling stations where enumerators recorded different responses from two party officials on relevant variables, or responses from only one party official, are dropped.
E Alternative Specifications of Average Treatment Effects
Table E.1: Average Treatment Effects of Election Observers on Machine Breakdown and Fraud, Logistic Models
Breakdown Registry rigging Ballot stuffing
(1) (2) (3) (4) (5) (6)
Election observer   −0.971∗∗∗  −0.974∗∗∗  −0.798∗∗∗  −0.796∗∗∗  −0.711     −0.720
                    (0.188)    (0.194)    (0.146)    (0.149)    (0.496)    (0.512)
Competitive                     0.452∗                0.273                 0.561
                               (0.212)               (0.163)               (0.402)
Urban                           0.180                 0.189                 0.037
                               (0.211)               (0.160)               (0.437)
Constant            −0.600∗∗∗  −0.903∗∗∗  −1.071∗∗∗  −1.295∗∗∗  −2.711∗∗∗  −2.997∗∗∗
                    (0.153)    (0.199)    (0.130)    (0.175)    (0.440)    (0.494)
Observations         1,888      1,888      1,973      1,973      1,987      1,987
Akaike Inf. Crit.   1,981.043  1,967.207  1,803.952  1,800.830   695.955    693.672
Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. Logistic regression with robust standard errors clustered by constituency in parentheses. Ten polling stations without biometric verification machines removed from the machine breakdown sample. Polling stations where enumerators recorded different responses from two party officials are dropped.
Table E.2: Average Treatment Effects of Election Observers on Machine Breakdown and Fraud, Non-Robust Standard Errors
Breakdown Overvoting rate Registry rigging Ballot stuffing
(1) (2) (3) (4) (5) (6) (7) (8)
Election observer   −0.182∗∗∗  −0.181∗∗∗  −0.036∗∗∗  −0.037∗∗∗  −0.122∗∗∗  −0.121∗∗∗  −0.031∗∗  −0.031∗∗
                    (0.020)    (0.020)    (0.009)    (0.009)    (0.018)    (0.018)    (0.009)   (0.009)
Competitive                     0.078∗∗∗              0.013                 0.039∗               0.023∗
                               (0.019)               (0.009)               (0.017)              (0.009)
Urban                           0.030                −0.008                 0.026                0.001
                               (0.019)               (0.009)               (0.017)              (0.009)
Constant             0.354∗∗∗   0.304∗∗∗   0.051∗∗∗   0.050∗∗∗   0.255∗∗∗   0.224∗∗∗   0.062∗∗∗  0.052∗∗∗
                    (0.016)    (0.022)    (0.008)    (0.010)    (0.014)    (0.019)    (0.008)   (0.010)
Observations         1,888      1,888      1,864      1,864      1,973      1,973      1,987     1,987
R2                   0.042      0.051      0.008      0.009      0.023      0.027      0.005     0.009
Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS with non-robust standard errors. Ten polling stations without biometric verification machines removed from the machine breakdown sample. Polling stations where enumerators recorded different responses from two party officials are dropped.
F Estimates of Spillover of Machine Breakdown onto Control Polling Stations
Figure F.1: Spillover of Machine Breakdown onto Control Polling Stations
[Two dot plots of the mean machine-breakdown rate (y-axis, roughly 0.2 to 0.5) against constituency observer saturation (Low, Middle, High). Panel A: Breakdown in Control Stations by Saturation. Panel B: Breakdown in Control Stations by Saturation and Competitiveness, plotted separately for Stronghold and Competitive constituencies.]
Notes: These plots show the mean level of machine breakdown in control stations by observer saturation and competitiveness, with bootstrapped 95% confidence intervals. Panel A uses the full set of control stations, while Panel B disaggregates them by competitiveness. In the aggregated sample, breakdown rates in control stations do not differ significantly across saturation levels once standard errors are clustered at the constituency level. However, the level of breakdown in control stations in middle and high saturation competitive constituencies is significantly higher than in low saturation competitive constituencies (again with standard errors clustered by constituency). This provides strong evidence for spillovers.
G Cutoffs for Registry Rigging and Registry Difference
Figure G.1 demonstrates that the registry rigging result is robust to different cutoffs that allow for administrative error. Table G.1 replicates the regressions in Table 2 and Table 4 using a continuous rather
than discrete measure of registry rigging. The measure registry difference is continuous from −923
to 1314, with a large number of values at 0 where the registry number matched the official EC num-
ber. The measure is constructed by subtracting the Electoral Commission figures from the number
of registered voters according to the local rolls. Positive numbers indicate the number on the lo-
cal rolls was larger than the Electoral Commission figure. In two polling stations the local rolls
showed figures greater than 8000. These appear to be simple transcription errors. In seven polling
stations the local roll numbers were not available. These polling stations are all dropped.26
The results in Table G.1 show that control stations had significant positive values of registry
difference while observers reduced that value to almost 0. This would support the story that registry
rigging was deliberately done to inflate the voter rolls. Furthermore, although the standard errors
are quite large, the second column is consistent with the story that registry rigging to inflate the
rolls was encouraged by machine breakdown, although that relationship disappears in the presence
of an observer.
26Their inclusion only strengthens our findings, but it is inappropriate to include them, as they are clear outliers, driven by erroneous data collection, that would drive the regressions below.
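The construction of registry difference and its conversion into a cutoff-based rigging indicator can be sketched as follows. The station records are invented, and the rule that only positive discrepancies at or above the cutoff count as rigging is our reading of the text, stated here as an assumption.

```python
def registry_difference(local_roll, ec_figure):
    """Local-roll count minus the Electoral Commission figure;
    positive values mean the local rolls were inflated."""
    return local_roll - ec_figure

def registry_rigging(local_roll, ec_figure, cutoff):
    """1 if the local roll exceeds the EC figure by at least `cutoff`
    (the allowance for administrative error), else 0. Assumes only
    positive discrepancies count as rigging."""
    return int(registry_difference(local_roll, ec_figure) >= cutoff)

stations = [(512, 500), (500, 500), (480, 500)]           # (local roll, EC figure)
diffs = [registry_difference(l, e) for l, e in stations]  # [12, 0, -20]
flags = [registry_rigging(l, e, cutoff=10) for l, e in stations]  # [1, 0, 0]
```

Sweeping `cutoff` upward and re-running the treatment-effect test at each value reproduces the robustness exercise plotted in Figure G.1.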
Figure G.1: Allowance for Administrative Discrepancies When Constructing Registry Rigging
[Scatter plot of the allowance for administrative error (y-axis, 0 to 500) against the p-value of the observer treatment effect on registry rigging (x-axis, 0.0 to 0.2), with a dashed vertical reference line at 0.01 and a solid horizontal line at 0.05.]
Notes: This plot shows p-values from unequal-variance t-tests across a range of cutoffs for the rigging variable. The y-axis gives the allowance for administrative error and the x-axis the p-value of the ATE of observers on registry rigging. The dashed vertical line is at 0.01 and the solid horizontal line is at 0.05. As we move up the y-axis, we code a polling station as exhibiting registry rigging only if the difference between the number of registered voters according to the Electoral Commission and the local figure at the polling station is greater than or equal to the value on the y-axis (the cutoff). The treatment effect of observers remains significantly negative until the cutoff reaches 357; that is, until we treat only discrepancies of 357 or more as registry rigging, observers have a significant depressive effect on registry rigging.
Table G.1: Effect of Observers and Machine Breakdown on Registry Difference
Registry Difference
(1) (2)
Machine breakdown                 25.410
                                 (16.768)
Election observer     −19.651∗∗  −14.302∗
                       (6.784)    (5.712)
Competitive             2.725      2.621
                       (5.690)    (5.671)
Urban                  −3.786     −3.126
                       (5.471)    (5.636)
Breakdown X observer             −24.981
                                 (15.768)
Constant               22.329∗∗   16.166∗∗
                       (7.159)    (6.052)
Observations            1,973      1,830
R2                      0.008      0.014
Notes: *** p < 0.001; ** p < 0.01; * p < 0.05. OLS with non-robust standard errors. Ten polling stations without biometric verification machines removed from the machine breakdown sample. Polling stations where enumerators recorded different responses from two party officials are dropped.
H Holm Correction
For a brief introduction and overview of the Holm correction, see Aickin and Gensler (1996). It
is very easily implemented in R using the p.adjust function. In the table below, we report the
average treatment effect of election observers on a host of outcome variables. We include the
effect size, the traditional p-value, the Holm corrected p-value, and the Bonferroni corrected p-
value. Notice that the key treatment effects that were originally significant remain significant even
using the stricter Bonferroni correction.
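As a reference implementation, the step-down Holm and single-step Bonferroni adjustments behind Table H.1 can be written in a few lines; this Python sketch mirrors what R's p.adjust computes with methods "holm" and "bonferroni".

```python
def bonferroni(pvals):
    """Single-step correction: multiply each p-value by the number of tests."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Step-down correction: sort p-values ascending, scale the k-th smallest
    (k = 0, 1, ...) by (m - k), enforce monotonicity with a running maximum,
    and cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running_max = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adj[i] = min(1.0, running_max)
    return adj

p = [0.001, 0.01, 0.03, 0.04]
holm(p)        # [0.004, 0.03, 0.06, 0.06] up to float rounding
bonferroni(p)  # [0.004, 0.04, 0.12, 0.16] up to float rounding
```

Because Holm is uniformly less conservative than Bonferroni, a borderline effect such as Turnout in Table H.1 keeps p < 0.05 under Holm (0.038) but not under Bonferroni (0.052).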
Table H.1: Effect Sizes and Corrected P-Values of 43 Outcomes

Variable                                                      Treatment  Control  Difference  P-Value  Holm Adj.  Bonferroni Adj.
Time voting commenced (Early[1] - Late[5])                      1.547      1.282     0.265      0.000     0.000      0.000
Station set up to ensure ballot secrecy (0/1)                   0.837      0.958    -0.121      0.000     0.000      0.000
No. of voters assisted (None[0] - Many[4])                      1.368      1.073     0.294      0.000     0.000      0.000
Machine breakdown (0/1)                                         0.123      0.304    -0.181      0.000     0.000      0.000
Registry rigging (0/1)                                          0.104      0.224    -0.121      0.000     0.000      0.000
Election official present (0/1)                                 0.785      0.908    -0.123      0.000     0.000      0.000
Was station accessible to elderly/disabled (0/1)                0.944      0.978    -0.033      0.000     0.007      0.008
Overvoting dummy (0/1)                                          0.021      0.064    -0.044      0.000     0.007      0.009
Security personnel present (0/1)                                0.824      0.881    -0.058      0.000     0.008      0.010
Overvoting rate                                                 0.014      0.050    -0.037      0.000     0.010      0.013
Election materials missing (0/1)                                0.074      0.030     0.044      0.001     0.032      0.043
Turnout                                                        82.333     87.463    -5.130      0.001     0.038      0.052
Voters w/ no ID permitted to vote (None[0] - Many[3])           0.669      0.536     0.133      0.029     0.849      1.000
Harassment or intimidation of voters (0/1)                      0.043      0.113    -0.070      0.030     0.849      1.000
Voters w/out bio matches allowed to vote (None[0] - Many[3])    0.049      0.155    -0.106      0.032     0.854      1.000
Other party agents sign pres. results (0/1)                     1.472      1.375     0.097      0.048     1.000      1.000
Voters biometric registration verified (0/1)                    0.991      0.968     0.023      0.049     1.000      1.000
Voter not on register allowed to vote (None[0] - Many[3])       0.032      0.136    -0.104      0.055     1.000      1.000
NDC agent present (0/1)                                         0.995      0.986     0.009      0.061     1.000      1.000
Unauthorized persons at station (0/1)                           0.060      0.103    -0.043      0.081     1.000      1.000
Voters queued at 5pm allowed to vote (0/1)                      0.785      0.828    -0.043      0.117     1.000      1.000
NDC agent sign pres. results (Yes[1], No[2], Not Present[3])    1.023      1.046    -0.022      0.123     1.000      1.000
Allowed voters arriving after 5 (Yes[1], No[2], NA[3])          2.579      2.477     0.102      0.124     1.000      1.000
Agents agree on parl. results (0/1)                             1.000      0.979     0.021      0.145     1.000      1.000
Voters marked with ink (0/1)                                    0.983      0.960     0.023      0.167     1.000      1.000
Agents agree on pres. results (0/1)                             0.996      0.976     0.020      0.194     1.000      1.000
Count of intimidation events                                    0.024      0.050    -0.026      0.243     1.000      1.000
Party agent allowed to observe (0/1)                            0.995      0.999    -0.004      0.244     1.000      1.000
Ballot stuffing (0/1)                                           0.021      0.052    -0.031      0.263     1.000      1.000
Overall problems (Major[1], Minor[2], None[3])                  2.696      2.772    -0.076      0.271     1.000      1.000
Ballot papers stamped by EC (0/1)                               0.988      0.976     0.011      0.303     1.000      1.000
No. of voters on proxy list                                     9.989      6.430     3.559      0.357     1.000      1.000
Empty ballot boxes displayed pre-voting (0/1)                   0.998      1.000    -0.002      0.368     1.000      1.000
Voters queued at 5pm (0/1)                                      0.656      0.729    -0.074      0.387     1.000      1.000
NPP votes                                                     157.835    167.464    -9.630      0.422     1.000      1.000
Voters w/ ID not permitted to vote (None[0] - Many[3])          0.281      0.316    -0.036      0.452     1.000      1.000
NDC votes                                                     239.369    247.220    -7.851      0.456     1.000      1.000
Party agents sign parl. results (0/1)                           0.986      0.978     0.008      0.536     1.000      1.000
NPP agent sign pres. results (Yes[1], No[2], Not Present[3])    1.022      1.026    -0.005      0.628     1.000      1.000
NPP agent present (0/1)                                         0.992      0.993    -0.001      0.820     1.000      1.000
Request for recount (0/1)                                       0.078      0.082    -0.005      0.877     1.000      1.000