Evaluation of M-Tech and Paper
Enumeration of Social Cash Transfer
Beneficiaries in Zambia
Technical Report
Prepared for Zambia’s Ministry of Community Development,
Mother and Child Health, World Food Programme, and
Other Cooperating Partners
FINAL
25 November 2015
CONTENTS
Figures and Tables .................................................................................................................................. iii
Acronyms ................................................................................................................................................ v
About IDinsight ...................................................................................................................................... vi
Acknowledgements ............................................................................................................................... vii
Executive Summary ................................................................................................................................. 1
Background ......................................................................................................................................... 1
Evaluation Design ................................................................................................................................ 1
Results ................................................................................................................................................. 2
Recommendations ............................................................................................................................... 3
Background ......................................................................................................................................... 6
Evaluation Goals and Objectives .......................................................................................................... 7
Overview of SCT Enumeration and Registration ....................................................................................... 7
SCT Program ........................................................................................................................................ 7
Enumeration and Registration Process ................................................................................................ 8
M-tech Pilot ....................................................................................................................................... 10
Evaluation Setting ................................................................................................................................. 11
Project Timeline ................................................................................................................................ 11
Ethical Approval ................................................................................................................................ 12
Impact Evaluation .................................................................................................................................. 14
Sources of Error in Enumeration ........................................................................................................ 14
Outcomes of Interest ......................................................................................................................... 15
Study Design ...................................................................................................................................... 16
Sample Size ....................................................................................................................................... 16
Data Collection .................................................................................................................................. 17
Data Analysis ..................................................................................................................................... 19
Results ............................................................................................................................................... 21
Differences in Enumeration Speed between M-Tech and Paper ......................................................... 27
Limitations of Impact Evaluation ........................................................................................................ 29
Conclusions of Impact Evaluation ...................................................................................................... 31
Process Evaluation ................................................................................................................................ 33
Description of Paper and M-tech Enumeration & Registration Process .............................................. 33
Evaluation Design .............................................................................................................................. 35
Data Collection .................................................................................................................................. 36
Results ............................................................................................................................................... 39
Conclusions of Process Evaluation ..................................................................................................... 67
Stakeholder Analysis .............................................................................................................................. 69
Analysis Questions ............................................................................................................................. 69
Methods ............................................................................................................................................ 69
Key Findings ...................................................................................................................................... 74
Discussion and Recommendations ......................................................................................................... 82
References ............................................................................................................................................ 84
Annex A. Power Calculations ................................................................................................................. 85
Sample Size Challenges ...................................................................................................................... 85
Back Power Calculations .................................................................................................................... 86
Annex B. Enumerator Characteristics ..................................................................................................... 87
Annex C. Additional Details on Matching Activity and Additional Inclusion Error Analysis for Serenje ... 89
Annex D. Additional Findings from Qualitative Interviews with Potential Beneficiaries .......................... 91
Annex E. CWAC Focus Group Discussions ............................................................................................... 93
Annex F. Transcript Analysis .................................................................................................................. 95
Annex G. Cost Model ............................................................................................................................. 98
Data .................................................................................................................................................. 98
Annex H. Cooperating Partners ............................................................................................................ 104
Donor Support ................................................................................................................................. 104
Policy Support ................................................................................................................................. 105
Technical Support ............................................................................................................................ 105
FIGURES AND TABLES
Figure 1. Enumeration and Registration Process ...................................................................................... 9
Figure 2. Project Timeline ...................................................................................................................... 12
Figure 3. Differences in Design of the Household Member Section on Paper Survey and M-tech Application
.............................................................................................................................................................. 26
Figure 4. Design of Paper and M-Tech Processes ................................................................................... 34
Figure 5. Average Scores on General Tablet Skills Questions before the Start of Training in Shiwa N’gandu
.............................................................................................................................................................. 41
Figure 6. Average Scores after Training and at the End of Enumeration in Mungwi and Shiwa N’gandu . 41
Figure 7. Matching CWAC Form 2 and Enumeration Data ....................................................................... 46
Figure 8. Enumerator Perceptions of Paper and M-tech ......................................................................... 50
Figure 9. Commonly Cited Challenges of M-tech Application and Potential Solutions ............................ 51
Figure 10. Data Used by CommCare Application .................................................................................... 53
Figure 11. Data Used by All Other Applications ...................................................................................... 53
Figure 12. Use of the Supervisor Application .......................................................................................... 54
Figure 13. Expected Timeline of Paper and M-tech Enumeration by Design ........................................... 63
Figure 14. Actual Timeline of Paper and M-tech Enumeration ............................................................... 64
Figure 15. Enumeration and Data Entry Dates ....................................................................................... 64
Figure 16. Organizational Structure of SCT Unit ..................................................................................... 71
Figure 17. Map of SCT Cooperating Partners .......................................................................................... 74
Figure 18. Effect of Transition on Roles and Responsibilities within the SCT Unit ................................... 76
Table 1. Omission Rates with Paper and M-Tech Enumeration Methods ................................................ 21
Table 2. Mixed Model Results of Effect of Enumeration Method on Omitted Questions ........................ 22
Table 3. Error Rates with Paper and M-Tech Enumeration Methods ...................................................... 22
Table 4. Mixed Model Results of Effect of Enumeration Method on Error Rates .................................... 23
Table 5. Mixed Model Test of Effect of Enumeration Method on Error Rates Using MIS Data................. 24
Table 6. Mixed Model Results of Effect of Enumeration Method on Error Rates with Penalty for Missing
Members ............................................................................................................................................... 24
Table 7. Process Errors by Enumeration Method ................................................................................... 25
Table 8. Error Rates in Data Entry .......................................................................................................... 27
Table 9. Average Duration of Interviews with Paper and M-tech ............................................................ 28
Table 10. Mixed Model Results of Difference in Interview Length between Paper and M-tech ............... 28
Table 11. Errors in the Entry of Household Demographic Information on CWAC Form 2 ........................ 44
Table 12. Data Sources for Inclusion and Exclusion Error Analysis .......................................................... 45
Table 13. Inclusion and Exclusion Errors ................................................................................................ 47
Table 14. Explanations for Inclusion and Exclusion Errors ...................................................................... 48
Table 15. Data Used by Enumerator Tablets .......................................................................................... 52
Table 16. Deviations from Enumeration Protocol ................................................................................... 56
Table 17. Criteria for Translation Categories .......................................................................................... 57
Table 18. Examples of Translation Coding and Categorization System .................................................... 57
Table 19. Number of Unique Translations Used by Each Enumerator ..................................................... 59
Table 20. Share of Recorded Interviews with Inaccurate, Unclear, or Leading Translation for Select
Questions .............................................................................................................................................. 60
Table 21. Deviations from Enumeration Protocol ................................................................................... 61
Table 22. Changes to Forms between Enumeration and End of District Review, By Page ........................ 62
Table 23. Fixed Costs of M-Tech Scale-Up .............................................................................................. 65
Table 24. Differences in Variable Costs of 2016 – 2017 SCT Scale-Up per District ................................... 66
Table 25. SCT Program Stakeholders ...................................................................................................... 70
Table 26. Changes to Roles and Responsibilities in Social Welfare Department ...................................... 77
ACRONYMS
ACC Area Coordination Committee
AIR American Institutes for Research
ANOVA Analysis of Variance
ASWO Assistant Social Welfare Officer
CPs Cooperating Partners
CSO Central Statistics Office
CWAC Community Welfare Assistance Committee
DFID Department for International Development
DSA Daily Subsistence Allowance
DSWO District Social Welfare Officer
DWAC District Welfare Assistance Committee
GRZ Government of the Republic of Zambia
HQ Headquarters
ILO International Labour Organization
IRB Institutional Review Board
MB Megabytes
MCDMCH Ministry of Community Development, Mother and Child Health
MIS Management Information System
ODK Open Data Kit
PDA Personal Digital Assistant
PMT Proxy Means Test
PSWO Provincial Social Welfare Officer
PWAS Public Welfare Assistance Scheme
QA Quality Assurance
SCT Social Cash Transfer
SMS Short Message Service
SQL Structured Query Language
SSWO Senior Social Welfare Officer
UNICEF United Nations Children’s Fund
WFP World Food Programme
ZMW Zambian Kwacha (rebased)
ABOUT IDINSIGHT
IDinsight helps clients generate and use evidence to improve social impact. Depending on client needs,
we diagnose systems, design and test potential solutions, and operationalize those solutions found to be
most impactful. We believe that client-centered, rigorous, and responsive evaluation is essential to help
managers maximize program impact. Our team has collectively coordinated over 25 impact evaluations
in Africa and Asia, and works on-site with client organizations to efficiently answer important program
questions.
The research team from IDinsight consisted of Alison Connor, Kasamba Mukonde, Marla Spivack and Paul
Wang.
For more information about IDinsight, please visit www.IDinsight.org. For questions about this evaluation
please contact Marla Spivack at [email protected].
ACKNOWLEDGEMENTS
Many individuals provided input integral to the work presented here.
At the Ministry of Community Development, Mother and Child Health, Most Mwamba assisted with the
design and implementation of the evaluation and supported all fieldwork activities. Social Welfare
Officers at Headquarters, Northern, and Central Provincial Offices, and Mungwi and Serenje District
Offices shared valuable insights on the SCT program and m-tech pilot. We especially thank
Muchinga Provincial staff and Shiwa N’gandu District staff, who modified their enumeration work plans
to accommodate evaluation activities. CWAC members in all three evaluation districts participated in
focus group discussions, and potential beneficiaries in Shiwa N’gandu participated in qualitative
interviews.
Stakeholders, including representatives from the American Institutes for Research, the Zambian Central
Statistics Office, the Department for International Development, Dimagi, the International Labour Organization,
Irish Aid, the Embassy of Finland, WFP, and UNICEF, participated in in-depth interviews about their roles
in the SCT program.
Samantha Malambo, Calum McGregor, and Emily Heneghan of World Food Programme and Qimti
Paienjton of UNICEF supported the design and execution of the project. Mofya Phiri of Dimagi assisted
with making adjustments to the CommCare application to facilitate the evaluation and provided us with
the CommCare data from the pilot districts.
Excellent research assistance was provided by IDinsight field staff (Rhoda Pearce, Elvis Nsamba, Jane
Nakazwe, Malumbo Machimu, Charity Muchanga, Lucy Phiri, and Rose Lungu) and data enterers (Makala
Shadunka, Kangelani Mashitu, Modesta Phiri, Madalitso Mwanza, John Phiri, Joseph Mwamba, Joel
Mumba, Lora Romano, Dianna Alvara, and Rebecca Gifford).
Finally, we would like to acknowledge the Shiwa N’gandu enumerators who played a key role in this
evaluation.
EXECUTIVE SUMMARY
Background
Zambia’s Social Cash Transfer (SCT) program provides unconditional cash assistance to Zambia’s poorest
households. Run by the Ministry of Community Development, Mother and Child Health (MCDMCH), the
SCT program was operational in 50 districts in 2014 and is expected to be scaled nationwide by 2017.
To date, a paper-based system has been used to register and enumerate potential beneficiary households,
requiring management, transport, and data entry of thousands of paper forms. This system has been
labor-intensive and vulnerable to error, prompting MCDMCH, World Food Programme (WFP) and other
cooperating partners to pilot a mobile technology enumeration system in Mwense, Mungwi, and Shiwa
N’gandu Districts in hopes that an m-tech system could increase accuracy, reduce time, and reduce cost
compared to the paper system.
Evaluation Design
In October 2014, WFP engaged IDinsight to evaluate the pilot m-tech system. The evaluation team
employed a mixed methods approach consisting of three activities:
1. An impact evaluation that used a cross-over design to compare speed and error rates of paper
and m-tech enumeration.
2. Process evaluations of the paper and m-tech systems that compared the timeframes, costs,
logistics, and challenges of both approaches.
3. A stakeholder analysis that assessed perceptions, priorities, and involvement levels of individuals
and institutions involved in the SCT program.
HIGH LEVEL CONCLUSION
Due to critical – but surmountable – challenges, a new m-tech enumeration and registration
system for the Social Cash Transfer (SCT) program did not outperform the existing paper system
in a small-scale pilot. These challenges were largely related to an isolated design flaw in the
application, logistical challenges with power and network, and poor compatibility between the m-
tech database and the existing Management Information System. Once these challenges are
addressed and robust systems to manage all aspects of the data collection can be put in place, the
potential of mobile data collection to improve accuracy, quality, and speed of SCT enumeration
and registration can be realized.
Results
The evaluation found that the m-tech system did not meaningfully outperform the paper system, but that
adjustments could be made to significantly improve m-tech performance.
Impact evaluation results
M-tech enumeration produced a statistically significant reduction, relative to paper
enumeration, in the number of survey questions that enumerators failed to ask respondents
during interviews.
Recording error1 rates were comparable between the paper and m-tech methods, both for the
overall interview and for the key questions that determine eligibility.
M-tech weaknesses largely resulted from a design flaw of the application that separated the
household roster and household characteristics components of the survey.
Process evaluation results
Most components of the m-tech process were identical to the paper process, except for the embedding
of error-checks within the program and the elimination of manual data entry.
Two key process challenges significantly hindered the effectiveness of the m-tech system:
1. Network and electricity issues introduced delays and undermined m-tech benefits.
2. Challenges aligning the mobile application with MCDMCH’s Management Information
System (MIS), exacerbated by poor definition of roles and responsibilities and
communication challenges between partners, significantly delayed beneficiary
enrollment.
Both the m-tech and paper processes were vulnerable to errors in which households were
incorrectly included in and excluded from enumeration.
Enumerators used a wide range of phrases to translate questions into local languages, and many
phrases they used were inaccurate, ambiguous, or leading. Both m-tech and paper enumeration
were vulnerable to this challenge and to other deviations from enumeration protocols.
1 Recording errors were defined as any discrepancy between respondents’ answers to questions captured on audio recordings of the interviews and those captured on the tablets or paper SCT Form 1.
The differences in fixed and variable costs of m-tech and paper systems were insignificant relative
to the overall cost of enumeration.
Stakeholder analysis
Stakeholders throughout the government and cooperating partners were enthusiastic about the SCT program and transitioning to the m-tech system.
Government stakeholders may underestimate the effort and changes required to effectively transition to an m-tech system.
Unclear role delineation between the government and cooperating partners can cause delays and weaken MCDMCH ownership.
Officials across government and cooperating partners have differing priorities for the future uses of m-tech.
Recommendations
Eight action areas emerged to improve the accuracy and efficiency of the m-tech system. Three of these
are aimed at addressing the critical challenges which resulted in the underperformance of the m-tech
system in the pilot, and five outline steps that will support the effective operation of the m-tech system
at scale.
Actions to address critical challenges:
1. Application design: Addressing issues with the design and programming of the application will
improve consistency and accuracy of data collection.
2. Network and electricity safeguards: Providing equipment to connect reliably to the internet at
district offices and employing better electricity safeguards will optimize the m-tech system.
3. Data management: Aligning the m-tech system with the MIS and enabling remote access to the
data will streamline data analysis and use.
Actions to build an effective system:
4. Roles and responsibilities: Clearly outlining roles and responsibilities will facilitate manageable
workloads, accountability, and effective implementation.
5. Training: Creating training modules with m-tech components targeted at each level of the SCT
personnel will be necessary for smooth implementation.
6. Equipment management: Developing equipment procurement and maintenance protocols will
enhance accountability and reduce risk of leakage.
7. Scheduling and duration of enumeration: Carefully planning enumeration schedules and
developing transparent compensation protocols will encourage enumerators to complete their
caseloads quickly and accurately.
8. Supervision and quality assurance: Using m-tech to strengthen supervision and quality assurance
(QA) mechanisms will improve accuracy and fairness of the SCT program.
These action areas are described in more detail in the Implementation Guide which accompanies this
Technical Report. M-tech has the potential to improve the accuracy and efficiency of the SCT program,
ultimately enhancing its ability to serve Zambia’s poorest households. Robust systems to manage all
aspects of the data collection must be in place to ensure that the full benefits of mobile data collection
can be realized.
INTRODUCTION
Since 2003, the Government of the Republic of Zambia’s (GRZ) Social Cash Transfer (SCT) program has
provided unconditional cash assistance to Zambia’s poorest families. Under the management of the
Ministry of Community Development, Mother and Child Health (MCDMCH), the SCT program expanded
from 19 to 50 districts in 2014. MCDMCH plans to continue scale-up and to achieve full coverage of all
districts nationwide by the end of 2017.
Enumeration of potential beneficiary households is currently paper-based, requiring completion,
management, transport, and data entry of thousands of physical forms. This process is labor-intensive and
vulnerable to error, prompting MCDMCH, World Food Programme (WFP) and other cooperating partners
(CPs) to pilot a mobile technology (“m-tech”) enumeration system in Mwense, Mungwi and Shiwa
N’gandu districts. M-tech enumeration has the potential to increase accuracy and reduce the time and
cost required for enumeration. However, these potential benefits have not previously been confirmed in
the context of the SCT program.
WFP engaged IDinsight to conduct an evaluation to compare the m-tech and paper-based enumeration
processes. Evaluation findings will inform decisions about the SCT enumeration process for the program’s
national scale-up.
The mixed-methods evaluation consisted of three activities:
1. An impact evaluation that used a cross-over design2 to compare speed and error rates of
paper surveys and m-tech enumeration.
2. Process evaluations of the paper-based and m-tech systems that combined quantitative and
qualitative analyses to compare the timeframes, costs, logistics and challenges of these
systems.
3. A stakeholder analysis that assessed perceptions, priorities, and levels of involvement of key
individuals and institutions within the SCT program and potential effects of any transition to
m-tech enumeration.
This report presents the methods and findings of these activities. It informs a separate Implementation
Guide that provides operational recommendations to MCDMCH and CPs as they continue to scale up the
SCT program across Zambia.
2 Enumerators used both m-tech and paper-based methods for enumeration. Their performance with one method was compared to their performance with the other.
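The cross-over comparison described in footnote 2 can be illustrated with a toy calculation: each enumerator's performance with one method is set against the same enumerator's performance with the other, so differences in individual skill cancel out. The enumerator labels and error rates below are invented for illustration; the report's actual analysis used mixed models.

```python
# Toy illustration of the within-enumerator (cross-over) comparison.
# Each enumerator used both methods, so we difference within enumerator
# rather than comparing separate paper and m-tech teams.
# (All numbers are invented; the real analysis used mixed models.)

error_rates = {
    # enumerator: (paper_error_rate, mtech_error_rate)
    "A": (0.08, 0.05),
    "B": (0.12, 0.10),
    "C": (0.06, 0.07),
}

# Positive difference = fewer errors with m-tech for that enumerator.
diffs = [paper - mtech for paper, mtech in error_rates.values()]
mean_diff = sum(diffs) / len(diffs)
print(f"Mean within-enumerator reduction: {mean_diff:.3f}")
# prints: Mean within-enumerator reduction: 0.013
```

Because each enumerator serves as their own control, the design removes between-enumerator variation from the method comparison, which is why the report's mixed models include enumerator-level effects.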
Background
Sixty percent of all Zambians – and 78% in rural areas – live below the national poverty line (World Bank
2014). Shock-induced and chronic poverty can have material impacts on the health and well-being of
those affected, often perpetuating intergenerational poverty traps (Samson 2009). Due to the effects of
extreme poverty, governments around the world have introduced SCT programs for their most vulnerable
populations.
A growing evidence base, both in Zambia and in other contexts, has demonstrated the effectiveness of
cash transfers. Evaluations in Zambia, Liberia, and Malawi have found that SCT programs can reduce
extreme poverty, improve food security and agriculture and livestock production, increase school
attendance, and improve health and health-seeking behaviors (American Institutes for Research 2013;
Government of Liberia, Ministry of Gender and Development, European Union, and UNICEF 2012; Miller,
Tsoka, and Reichert 2008). Additionally, communities and nations at large experience indirect benefits of
SCT programs. Around 80% of the social cash transfers in Zambia are spent on locally purchased goods
(Samson 2009). Similarly, business owners in pilot areas for a Liberian SCT attributed 20-50% of their
businesses’ growth to the SCT program, whose beneficiaries were cited as valuable customers
(Government of Liberia, Ministry of Gender and Development, European Union, and UNICEF 2012).
Based on findings like these, the GRZ has committed to expanding coverage of its SCT program. MCDMCH
and its CPs are now focused on optimizing implementation at scale.
To date, completion and processing of paper enumeration to determine SCT eligibility have caused delays
in enrolling households on the program. Some households that were enumerated in early 2014 did not
start receiving their SCT benefit until up to six months after they were enumerated. There is an emerging
global consensus that electronic data collection methods are preferable to paper-based data collection.
Studies have demonstrated that electronic enumeration methods on personal digital assistants (PDAs),
smartphones, or tablets often result in fewer errors, produce cleaned and complete datasets sooner, and
are less costly to run over time compared to paper methods (Dale and Hagen 2007; Thriemer et al. 2012;
Seebregts et al. 2009; Njuguna et al. 2014). A 2010 study in Tanzania compared the number of survey
errors and survey time between a paper and electronic enumeration system. It found that electronic
enumeration was 14% faster than paper enumeration with fewer missing values and erroneous entries
(Caeyers et al. 2010).
Despite the advantages of mobile data collection over paper-based methods, technological challenges can
prohibit the optimization of these benefits (Dale and Hagen 2007). A study from Kenya that compared m-
tech with paper-based disease surveillance systems, for example, found that while m-tech data were
available sooner than those collected on paper, the median time between enumeration and data
uploading was still seven days (range 1-13). This was a result of poor network coverage and server
challenges (Njuguna et al. 2014).
While there are many intuitive advantages of electronic data collection, limited rigorous evidence exists
to quantify its benefits over traditional paper surveys. Moreover, little research has been done to examine
the possible benefits and drawbacks of the two approaches in the Zambian SCT context.
Evaluation Goals and Objectives
The goals of the evaluation were to compare the speed, accuracy, operational complexity and cost of
paper-based and m-tech enumeration, and to explore how a transition to an electronic data collection
system could affect the enumeration and registration processes. This evaluation sought to answer
questions of direct relevance to MCDMCH’s upcoming decision on the enumeration and registration
method that will be adopted for the nationwide scale-up of the SCT program.
The evaluation had three specific objectives:
1. Compare the quality of and time required for paper-based and m-tech enumeration and registration.
2. Identify the financial burden and operational challenges associated with enumeration and registration for each method at scale, and propose solutions.
3. Examine and compare the perceptions of various stakeholders of paper and m-tech enumeration and registration.
OVERVIEW OF SCT ENUMERATION AND REGISTRATION
SCT Program
The SCT program aims to reach Zambia’s most vulnerable households. Under the current targeting
criteria,3 households qualify for the cash transfer if they pass three tests:
1. Residency test: Household has lived in the district for at least six months.
2. Incapacity test: Household does not have any adult members (between ages 19 and 64) who are
able to work or has a dependency ratio4 that is greater than or equal to three.
3. Living conditions test: Household falls below a certain threshold according to a proxy means
test (PMT) designed to measure living conditions. Households with living conditions indicating
that they are relatively better off are excluded.
3 Since the initial pilot in 2003, MCDMCH has experimented with various sets of targeting criteria. The current criteria have been used since 2014.
4 The ratio of the number of household members who are unable to work to the number of adult members who are able to work.
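As an illustration, the three targeting tests can be sketched in code. This is a hypothetical sketch: the field names and the PMT cutoff are assumptions, while the six-month residency rule, the 19-64 adult age band, and the dependency-ratio threshold of three follow the text.

```python
# Hypothetical sketch of the three SCT targeting tests. Field names
# and the PMT cutoff are illustrative assumptions; the thresholds
# (six months, ages 19-64, ratio >= 3) come from the text above.

def dependency_ratio(members):
    """Members unable to work per adult member able to work."""
    unable = sum(1 for m in members if not m["fit_for_work"])
    able = sum(1 for m in members
               if 19 <= m["age"] <= 64 and m["fit_for_work"])
    return float("inf") if able == 0 else unable / able

def passes_targeting(household, pmt_cutoff):
    members = household["members"]
    # 1. Residency test: at least six months in the district
    residency = household["months_in_district"] >= 6
    # 2. Incapacity test: no able-bodied adult, or ratio >= 3
    has_fit_adult = any(19 <= m["age"] <= 64 and m["fit_for_work"]
                        for m in members)
    incapacity = (not has_fit_adult) or dependency_ratio(members) >= 3
    # 3. Living conditions test: PMT score below the program cutoff
    living_conditions = household["pmt_score"] < pmt_cutoff
    return residency and incapacity and living_conditions
```

For example, a household with one able-bodied adult supporting three members who are unable to work has a dependency ratio of exactly three and so passes the incapacity test despite having a fit adult member.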
Most households enrolled on the program receive a transfer of 140 Zambian kwacha (ZMW) every two
months. Households with a disabled member on the program receive transfers of 280 ZMW every two
months.
Enumeration and Registration Process
Enumeration Personnel
Enumeration and registration for the SCT program are led by district-level staff within MCDMCH’s Social
Welfare Department. The main personnel involved are:
Community Welfare Assistance Committees (CWACs):5 Small groups of community leaders help
with targeting and implementation of several social welfare programs administered by MCDMCH.
They generally serve as liaisons between MCDMCH and the communities. In the SCT process, their
role is to identify potentially eligible households.
Enumerators: District Social Welfare Officers (DSWOs), Community Development Officers,
Agriculture Extension Officers, and government school teachers work as short-term enumerators,
taking time from their daily tasks to complete enumeration (typically two to four weeks).
Enumerators participate in the enumeration exercise in their capacity as government officials, and
receive daily subsistence allowances (DSAs) during enumeration.
District Social Welfare Office staff: Three person district-level teams, consisting of the DSWO,
Assistant DSWO, and Assistant Program Officer, are responsible for supervising the enumeration
process. DSWOs support the enumerators, handle DSAs, and provide feedback to enumerators
during the enumeration process. Additionally, they work alongside quality assurance (QA) teams6
to quality check all forms. DSWOs are supported by the Provincial Social Welfare Office.
Enumeration Forms
The SCT enumeration and registration process involves the completion of two forms:
Form 8 CWAC 02 (CWAC Form 2):7 CWACs list the heads of household and household
composition – number of young, elderly, adults (aged 19-64) with disabilities or chronic illnesses,
and adults (aged 19-64) fit for work – of those households that they think might meet the SCT
program targeting criteria. They also indicate whether the household has been living in the area
for at least six months.
5 “CWAC” is also used to refer to the area of which these committees are in charge.
6 For the second round of enumeration, during which the pilot took place, quality assurance was provided by a consulting firm called Palm Associates, contracted by UNICEF, to support the enumeration by assisting with training activities, conducting quality control visits to the field during enumeration, and reviewing paper forms for consistency. In m-tech pilot districts QA staff were given access to a QA application, which allowed them to view and correct forms.
7 The official name of this form is Form 8 CWAC 02, but it is commonly referred to as “CWAC Form 2” or “Form 2”, which is how we will refer to it throughout these documents. The number of the form does not denote the order in which it is completed; CWAC Form 2 is completed before Form 1.
SCT Form 1: Enumerators visit each household on the enumeration list that passes the residency
and incapacity tests to collect information on household characteristics, assets, and each member
of the household.
Steps of Enumeration and Registration Process
The enumeration and registration process follows six steps that result in a community-verified list of
eligible households to be enrolled in the SCT program (Figure 1).
Figure 1. Enumeration and Registration Process
1. Enumerator training: Enumerators are instructed on the enumeration and registration process
and protocols, as well as on the components of SCT Form 1. Additionally, CWACs and Area
Coordination Committees (ACCs) are trained on targeting criteria and the completion of CWAC
Form 2.8
2. Household identification: CWACs complete CWAC Form 2. The information is entered into an
Excel spreadsheet in the paper system and directly into the tablets in the m-tech system.
Households that pass the residency and dependency ratio tests qualify for enumeration.
8 These CWAC and ACC trainings were not included in this evaluation.
[Figure: flowchart of the six steps – Training (enumerators instructed on completion of SCT Form 1; CWACs and ACCs trained on identification and completion of CWAC Form 2) → Identification (CWAC creates lists of eligible households on CWAC Form 2; residency and incapacity tests produce the enumeration list) → Enumeration (enumerators complete SCT Form 1 with households on the enumeration list) → Quality Check (supervisors and QA teams check for and correct errors) → Entry to MIS (Form 1 data are transferred to the MIS database; the proxy means test produces the eligibility list) → SCT Program Enrollment (community verifies the eligibility list and households are enrolled).]
3. Enumeration of households: Government enumerators visit all households that qualify for
enumeration to complete SCT Form 1. This step serves to collect new information on household
living conditions for the PMT, as well as to collect information to verify that the household meets
the residency and incapacity criteria.
4. Quality check: As enumerators complete SCT Form 1s, supervisors and QA teams check forms for
missing data and obvious errors. Errors that can be corrected are corrected on the spot, while other
forms are sent back to the field.
5. Entry to the Management Information System: All data that are collected on SCT Form 1 are
transferred to the SCT program’s MIS. In the paper process, this involves data entry which is
currently done at the Central Statistics Office (CSO) in Lusaka, while in the m-tech process, this
involves downloading data from the mobile application and then uploading the data to the
system.
6. SCT program eligibility: Once data are in the MIS, this information is used to determine which
households pass the PMT. These households are put on an initial eligibility list, which is then sent
back to the districts for community validation. Once the lists are approved by the community,
households are officially enrolled in the SCT program, and payments are initiated.9
M-tech Pilot
In early 2014, WFP engaged Dimagi, a social enterprise that specializes in developing software technology
for low-resource settings, to design a suite of applications that could replace the paper process:
Enumerator Application: Included electronic versions of CWAC Form 2 and SCT Form 1. Data from
the paper versions of CWAC Form 2 were entered into the electronic version of the form on the
tablet to determine which households qualified for enumeration. The electronic version of SCT
Form 1 was used for data collection in the field. It collected the same information and roughly
followed the same structure as the paper version.
Supervisor Application: Enabled supervisors to monitor enumerators’ daily workloads, provide
feedback, assess enumerators’ comfort level with the tablet, log observations made during
fieldwork, and report any technical issues.
Quality Assurance Application: Allowed QA teams to review forms that enumerators had
submitted to the server. These teams could log in to each enumerator’s profile to review and edit
the enumerator forms that had been submitted.
The three applications were linked together through Dimagi’s CommCare platform. This cloud-based
system updated the Supervisor and QA Applications with the information gathered on the Enumerator
Applications each time devices were synced. All enumeration was done using Samsung Galaxy tablets.
These devices were new at the beginning of the pilot.
9 This step also was not included in this evaluation.
EVALUATION SETTING
The impact evaluation was conducted in Shiwa N’gandu District. The m-tech enumeration process
evaluation was conducted in Mungwi District, while the paper enumeration process evaluation was
conducted in Serenje District.
Shiwa N’gandu is a rural district in Muchinga Province. Since becoming an independent district in 2012,10
the area has experienced an influx of government personnel and other resources. However, it remains
extremely under-developed. The social welfare office in Shiwa N’gandu lacks transportation, and like
many rural areas in Zambia, poor mobile network coverage hinders communication.
Mungwi is a rural district located in Northern Province about 20 kilometers from Kasama. Its proximity to
Kasama enables Mungwi to benefit from support from Provincial officials during enumeration and other
intense work periods, though poor network coverage makes communication a challenge throughout
much of the district.
Serenje District is located in Central Province. It includes both rural and urban areas and is the most
developed district in the evaluation. The SCT program has also been active in Serenje for longer than the
other two districts – enumeration first started in late 2011, and the first payments were distributed at the
end of rainy season of 2012. Households have been included on the program under different targeting
mechanisms in the past, and some CWACs in the district were involved in a prior impact evaluation of the
SCT program.11
Project Timeline
Data collection for the evaluation started on October 17, 2014 and ran until March 12, 2015 (Figure 2).
The schedule was largely determined by enumeration activities in the three evaluation districts. Data
collection for the impact evaluation took place throughout enumeration, while data collection for the m-
tech process evaluation was mostly completed before and after enumeration. The evaluation team
collected data for the paper process evaluation as enumerators completed their workload. The
enumeration schedule in Serenje was more staggered, so enumerators finished at different times
throughout January and February. Stakeholder interviews were completed with beneficiaries, CWACs,
district staff, and provincial staff throughout the evaluation team’s time in these three districts. Interviews
with Social Welfare staff at headquarters (HQ) and with CPs were largely done in Lusaka after field
activities were completed.
10 Shiwa N’gandu was formerly a Constituency within Chinsali District.
11 The enumeration that took place in late 2014 in Serenje was not part of the 2014 scale-up of the SCT program, as Mungwi and Shiwa N’gandu were. It was aimed at enumerating areas that had been excluded from the program earlier because they were included in the control group for the SCT program impact evaluation. Because the prior impact evaluation did not affect targeting or enumeration and did not directly engage district staff, it did not present major challenges for this evaluation. Serenje was chosen as the paper process evaluation district because control CWACs were enumerated between December 2014 and February 2015, after enumeration for the 2014 scale-up was completed. This timeline allowed the evaluation team to conduct exit interviews and gather process data – such as photographs of CWAC Form 2s and SCT Form 1s – as the enumeration was taking place there.
Figure 2. Project Timeline
[Gantt chart with weekly columns from October 2014 through January 2015. Rows: Impact evaluation – enumerator training, enumeration, evaluation data collection; M-tech process evaluation – enumerator training, enumeration, evaluation data collection; Paper process evaluation – enumerator training,1 enumeration,2 evaluation data collection;3 Stakeholder analysis – provincial/district staff interviews, HQ staff interviews, CP interviews.]
Notes:
1 Enumerator training for Serenje enumerators took place earlier in 2014.
2 Enumeration proceeded slowly in Serenje due to delays in completing CWAC Form 2. Additionally, enumeration was suspended in January because of the presidential by-election and resumed in February.
3 The evaluation team returned to Serenje in the third week of February to conclude field work. Enumeration in one CWAC had not yet concluded at this time.
The remainder of the document presents the methods and results of the impact evaluation, the process
evaluations, and the stakeholder analysis, followed by a discussion and conclusion.
Ethical Approval
The evaluation’s research protocol was approved by the ERES Converge Institutional Review Board (IRB)
in Lusaka, Zambia. Additional approval letters to interview and accompany government
enumerators, as well as to access SCT data, were provided by MCDMCH.
IMPACT EVALUATION
The impact evaluation focused on two primary questions when comparing paper and m-tech
enumeration:
1. Which enumeration method is less prone to error?
2. Which enumeration method is faster?
Sources of Error in Enumeration
Eligibility and enrollment in the SCT program is based on the data collected during the enumeration
process. Improvements in the accuracy of the data collected enable the program to better reach the most
vulnerable households.
There are three types of error that can occur during enumeration:
1. Recording error: Occurs at the time of the interview when the enumerator records an incorrect
value after misinterpreting the respondent or from making a simple mistake in writing or typing.
2. Process error: Occurs when the enumerator fills out the form incorrectly by neglecting to record
a response, by failing to adhere to skip patterns, or by recording illogical or inconsistent values.
3. Data entry error: Occurs when the data entry clerk incorrectly transfers values from a paper
survey to an electronic database and results from either illegible handwriting on the paper or
typing error.
KEY FINDINGS
1. M-tech enumeration resulted in a statistically significant reduction in the number of survey questions omitted – or not asked aloud to the respondent – during the interview.
2. M-tech enumeration produced no significant difference in error rates compared to paper enumeration, as measured by differences between answers heard on recordings of the interviews and answers captured in the paper data or in the m-tech database. No significant difference was detected when the analysis was restricted to key eligibility questions.
3. Paper enumeration was more vulnerable to missing values and incorrect skip patterns. M-tech, on the other hand, resulted in more out-of-range values, such as impossible quantities of livestock owned, and illogical values, such as households with no head of household or household members with illogical ages. M-tech weaknesses in this area largely resulted from a design flaw in the application that required each household member to be registered separately.
4. M-tech interviews took slightly longer than paper interviews. This may be because the most time-consuming parts of the paper form can be completed before or after the interview with paper, or because some m-tech features caused delays when network coverage was poor. The method of enumeration did not affect the number of household visits an enumerator could complete in a day.
M-tech enumeration is often assumed to be less vulnerable to error. Electronic forms can be designed to
prevent process errors by requiring responses, pre-programming skip patterns, and restricting responses
to a range of acceptable answers. Additionally, m-tech enumeration typically does not require the
enumerator to encode the response (e.g. writing “0” for “no”, “1” for “yes”), as is often needed for paper
surveys. Finally, since there is no separate data entry, data entry error is eliminated.
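The safeguards described above can be sketched as a generic field specification. This is a hypothetical model for illustration, not the actual CommCare configuration used in the pilot; the specific constraints (a livestock cap, the skip condition) are assumptions.

```python
# Illustrative sketch of the safeguards electronic forms can encode:
# required responses, pre-programmed skip patterns, and restricted
# answer ranges. A generic model, not the pilot's actual CommCare
# configuration; the example constraints below are assumptions.

def validate_response(field, value, answers):
    """field: dict describing one question's constraints.
       answers: responses collected so far (used for skip logic)."""
    # Skip pattern: the question only applies if its condition holds
    relevant = field.get("relevant", lambda a: True)
    if not relevant(answers):
        return value is None        # must be left blank when skipped
    if field.get("required") and value is None:
        return False                # required response missing
    lo, hi = field.get("range", (None, None))
    if lo is not None and not lo <= value <= hi:
        return False                # out-of-range value
    return True

# Example: a livestock count capped at 500, asked only when the
# household reported owning livestock (both values are assumptions).
goats = {"required": True, "range": (0, 500),
         "relevant": lambda a: a.get("has_livestock") == "yes"}
```

A declarative specification like this is what lets an electronic form reject an impossible quantity at the moment of entry, rather than at a later quality check.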
However, errors can still occur with m-tech enumeration. M-tech may be more vulnerable to recording
error at the time of the interview, since it may be easier to select the wrong answer or to mistype a
response than it is to write an incorrect response on paper. This is especially true if enumerators are not
well-versed in the use of mobile technology.
Outcomes of Interest
Primary outcomes:
Data quality: Two measures of enumeration error were captured:
Recording errors: The number of questions answered that were different between what
was said at the time of the interview and what was recorded either in the m-tech
application or on paper.
Process errors: The number of missing questions, incorrect skip patterns, and illogical
values in the m-tech and paper databases.
Enumeration time: Two measures of enumeration time were captured:12
Interview time: The time needed to conduct each interview.
Daily caseload: The number of interviews an enumerator completed in a single day.
Secondary outcomes:
Data entry error: The number of errors introduced at the time of data entry of the paper forms.
12 Another dimension of time is the duration of the full enumeration process – this is examined in the process evaluation.
Study Design
The impact evaluation used a cross-over design to measure differences in performance and timing with
m-tech and paper enumeration. Each enumerator used both m-tech and paper during enumeration,
enabling each enumerator’s performance with one method to be compared with his or her own
performance with the other. This approach provided a more precise measurement of the underlying difference in speed
and accuracy between the two methods, while mitigating any confounding effects of inherent enumerator
skill and other characteristics, most of which are intangible and difficult to measure.
Enumerators’ caseloads were divided into two groups of households – one of which would be enumerated
with paper and the other with m-tech.13 To account for natural improvements in speed and accuracy that
occur over time, enumerators were randomly assigned the method that they used first.14
To ensure compliance with these assignments, enumerators assigned to start with paper surveys were
not provided tablets until they had completed their paper caseload. Similarly, those assigned to start with
m-tech were not provided with the list of paper households until they had completed m-tech
enumeration. The household lists that were pre-populated on the tablet only included those households
assigned to be enumerated with m-tech, and the paper lists only included households assigned to be
enumerated with paper. Enumerators assigned to m-tech for their second group of households were given
a brief refresher training by the evaluation team on how to use the tablet and application after completing
their paper caseload.
Sample Size
The impact evaluation was designed to detect a difference in error rates of four questions between paper
and m-tech surveys. The a priori power calculation parameters were:
Mean number of errors per electronic survey: 1 error
Standard deviation of an enumerator’s error rate: 4 errors
Minimum detectable effect size: an average difference of 4 errors between methods
90% power; 5% significance
13 District staff assigned two or three CWACs to each enumerator. Then, based on geography and logistics, they decided in which order CWACs should be enumerated. For enumerators with two CWACs, households in the first CWAC were assigned to Round 1 and households in the second CWAC were assigned to Round 2. For enumerators with three CWACs, the households in the middle CWAC were arbitrarily divided into two groups, half of which would be enumerated in Round 1 and the other half in Round 2. In one case, none of the households in one of an enumerator’s two CWACs qualified for enumeration, so the households in the remaining CWAC were randomly divided between Round 1 and Round 2.
14 On the last day of training, enumerators participated in a public lottery to determine which method they would use first. Pieces of paper that said “m-tech first then paper” and “paper first then m-tech” were placed in a bag. Since there was an odd number of enumerators, a coin was flipped to determine which group would have an extra member. Each enumerator then chose a piece of paper from the bag to determine the order of their enumeration. The DSWO chose on behalf of enumerators who were not present at the time of the lottery.
Power calculations estimated that the impact evaluation should include 20 enumerators and use data for
a minimum of 20 paper surveys and 20 m-tech surveys per enumerator. Since the assumptions that
informed these calculations were based on estimates and because including additional surveys in the
sample would not pose additional burden or risk to participants, the evaluation team aimed to include 40
paper and 40 m-tech surveys for each enumerator in the impact evaluation district.
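For context, the way parameters like these combine can be illustrated with a simplified two-sample normal approximation. This is a generic sketch using the stated inputs (SD of 4 errors, minimum detectable difference of 4 errors, 90% power, 5% significance), not the study's own calculation, which would also need to reflect the clustered, repeated-measures design.

```python
# Simplified sample-size sketch: normal approximation for comparing
# two means. This ignores the clustered, repeated-measures structure
# of the actual design; it only shows how the stated parameters
# (SD, minimum detectable effect, power, significance) combine.
from statistics import NormalDist

def n_per_group(sd, mde, power=0.90, alpha=0.05):
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # about 1.96 for 5% significance
    z_beta = z(power)            # about 1.28 for 90% power
    return 2 * ((z_alpha + z_beta) * sd / mde) ** 2

print(n_per_group(sd=4, mde=4))  # roughly 21 observations per method
```

With the stated parameters this naive formula lands near 21 observations per method, the same order of magnitude as the 20 surveys per method per enumerator called for; accounting for enumerator-level clustering inflates the required sample, which this sketch omits.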
Power was a limitation in this evaluation, and ultimately the target sample size was not achieved. When
designing the evaluation, the evaluation team was told that the caseload for each enumerator would be
more than 150 households. However, when training began, it was revealed that actual enumerator
caseloads ranged from 33 to 206 households. As a result, the sample was reduced from 40 to 10
recordings per method per enumerator.15 According to the a priori assumptions, the evaluation was
designed to detect a minimum difference of four errors between the two methods. Back power
calculations using the evaluation data indicate that the minimum detectable difference with the sample
included in the evaluation is 5.4 errors.
This issue is discussed further in the limitations section. Back power calculations are presented in Annex
A.
Data Collection
The evaluation team was present for the entirety of the supervisor and enumerator training in Shiwa
N’gandu. During the enumerator training, the evaluation team introduced themselves and the evaluation
process to the enumerators. Enumerators were read an informed consent statement and invited to participate; all
enumerators were included.
Enumerators were provided with audio recorders and asked to record all interviews (this procedure is
described further in the next section). The evaluation team trained enumerators on the use of the
recorders and on the process that enumerators should use to ask permission from households to record
interviews. Otherwise, m-tech training proceeded as it would have in the absence of this evaluation.
Once the enumeration process began, the evaluation team was in close contact with enumerators,
meeting them in the field every two to five days. Evaluation staff also periodically joined enumerators on
their visits to potential beneficiary households.
15 The compressed evaluation schedule ruled out the possibility of adjusting the evaluation design. While this sample size is one fourth the size of the anticipated 40 recordings, it is still one half of the sample size that was suggested by a priori sample size calculations.
Sources of Data
Final Enumerated Datasets
The evaluation team used m-tech data that were downloaded from the CommCare application as they
were collected on the tablet at the time of the SCT Form 1 interview. Data from Shiwa N’gandu were
compared to data gathered on audio recordings of the interviews to calculate recording errors, and data
from all three pilot districts – Shiwa N’gandu, Mungwi, and Mwense – were used to examine process
errors. Dimagi also granted access to metadata, including timestamps and form edits. Additionally, the
evaluation team liaised with MCDMCH HQ officials responsible for the MIS to access data from Shiwa
N’gandu’s paper forms after they were entered into the system.
Interview Recordings
Enumerators in the impact evaluation district were provided with voice recorders and asked to audio
record each of their interviews, regardless of the enumeration method they were using. All household
respondents were asked their permission to record the interview and were assured that their agreement
neither positively nor negatively affected their eligibility for the SCT program.
A maximum of ten recordings per method per enumerator were transcribed by the evaluation team in
the language in which the interviews were conducted (Bemba).16 The team used these transcriptions to
fill out SCT Form 1 according to what was said in the interview. In addition to recording beneficiary
responses, the evaluation team noted which questions were not asked aloud on the recording.
Photographs of Paper Forms
To examine how data collected by the paper method evolved throughout the paper process, photographs
of the paper forms were taken at two points in time:
Immediately after the enumerators’ first household visit to capture data that were recorded at
the time of the interview.17
Immediately before the data were sent to Lusaka for data entry into the MIS system, after any
corrections were made based on back-checking and review by DSWOs and QA staff.
Data from these photographs were then double-entered into a database using EpiData (EpiData
Association, Odense, Denmark).
16 All recordings were used for the 14 enumerators with fewer than ten recordings for a given method.
17 The evaluation team met with enumerators as frequently as possible during enumeration to gather these photographs. This was essential to ensure that photographs captured the paper forms before any changes had been made. The evaluation team met enumerators every two to five days throughout enumeration.
Data Analysis
Definition and Measurement of Variables
Questions Omitted from the Interview
The number of omitted questions was estimated by counting the number of questions that were not heard
being asked in the recording.18 This could differ from the number of missing responses if
enumerators did not ask a question aloud but still recorded a response. For example, some household
characteristics such as type of roof could be observed without the enumerator having to ask the
respondent. Omission of questions becomes problematic when enumerators make assumptions about
what the response to a question will be without asking the respondent, such as which assets the
household owns.
Error Rates
Recording error
Recording error rates were estimated by counting the number of discrepancies between answers heard
on the recordings and either the data from the photographs of paper Form 1 (taken before they were
reviewed) or the data from the CommCare server. A question was coded as an error if the answer to the
question was 1) heard on the recording and 2) the answer heard was different from the answer noted on
the paper or in the m-tech record.19 The total number of errors was then divided by the total number of
questions heard on the recording.
The total number of household members a respondent reports in their household in the recording was
compared to the number of members noted on Form 1 in paper or m-tech.20 Each missing person was
coded as one error. As a conservative estimate, it was assumed that data collected for each captured
household member was accurate. As a sensitivity analysis, each missing household member was coded as
nine errors instead of one, since there are nine questions asked about each household member.
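The recording-error measure described above can be sketched as follows. The data structures are illustrative assumptions; the coding rule (count a question only if its answer was heard, and count it as an error only if the captured answer differs) follows the text, as does the sensitivity weighting of nine for each missing household member.

```python
# Illustrative sketch of the recording-error rate: a question counts
# only if its answer was heard on the recording, and it is an error
# only if the captured (paper or m-tech) answer differs. The data
# structures here are assumptions made for illustration.

def recording_error_rate(transcript, captured):
    """transcript: {question: answer heard, or None if not heard}
       captured:   {question: answer recorded on paper / m-tech}"""
    heard = {q: a for q, a in transcript.items() if a is not None}
    if not heard:
        return 0.0
    errors = sum(1 for q, a in heard.items() if captured.get(q) != a)
    return errors / len(heard)

def member_count_errors(n_heard, n_captured, weight=1):
    """Each household member missing from the record counts as one
       error; the sensitivity analysis instead uses weight=9, since
       nine questions are asked about each member."""
    return abs(n_heard - n_captured) * weight
```

For instance, if three answers were heard on a recording and one of them differs from the captured record, the recording error rate for that interview is one third.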
18 If a question was not heard in a recording, this does not necessarily mean that the question was not asked. The question could have been asked while the recorder was off, or it could have been obvious from context, but it provides a useful proxy.
19 The analysis was restricted to assets, livestock, length of time that a household has lived in that location, house materials (roof, floor, wall), type of energy used for lighting and cooking, main toilet facility, social programs from which the household benefits, and food security. The analysis did not consider string variables such as names or administrative variables, including NRC numbers (since these were not often asked aloud). Additionally, the amount of land owned and cultivated was omitted because respondents often report these figures in lima, but the survey records them in hectares, requiring enumerators to convert the figure into hectares. Since these conversions may be approximations made in the field, it would be difficult to calculate an exact error rate. Finally, food security questions were converted to binary variables since many recordings only captured a respondent saying “yes” to these questions with no further probing by the enumerator.
20 The measure was constructed in this way, rather than examining each individual data point, because household members might not have been noted on the paper or electronic versions of SCT Form 1 in the same order they were discussed in the recording.
Process Error
The numbers of missing values, incorrect skip patterns, illogical responses, and out-of-range values were
counted for each form in the MIS database and in the CommCare database.
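A minimal sketch of such process-error counting is shown below. The specific validation rules (the required fields, a livestock cap, an age range) are illustrative assumptions, not the program's actual checks; only the four error categories come from the text.

```python
# Hypothetical sketch of the process-error counts described above.
# The rules shown (required fields, a livestock cap, an age range,
# a required head of household) are illustrative assumptions, not
# the SCT program's actual validation rules.

def count_process_errors(form):
    errors = 0
    # Missing values among required fields (illustrative list)
    for field in ("head_name", "roof_type", "n_members"):
        if form.get(field) in (None, ""):
            errors += 1
    # Incorrect skip pattern: livestock count recorded despite a "no"
    if form.get("has_livestock") == "no" and form.get("goats"):
        errors += 1
    # Out-of-range value, e.g. an impossible quantity of livestock
    if form.get("goats", 0) > 500:
        errors += 1
    # Illogical values: no head of household, or impossible ages
    members = form.get("members", [])
    if members and not any(m.get("is_head") for m in members):
        errors += 1
    errors += sum(1 for m in members if not 0 <= m.get("age", 0) <= 120)
    return errors
```

Running a routine like this over every form in a database yields the per-form counts that the analysis compares between the paper (MIS) and m-tech (CommCare) records.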
Data entry error
Data from a subset of ten photographs per enumerator,21 taken after the QA check (the second photo time
point), were selected for data entry. All photos were double-entered. Data entry error
rates were estimated by counting the number of discrepancies between the data entered from the
photographs and the data in the MIS database.
Enumeration Speed
Durations of interviews were calculated from the interview recordings, and these estimates were then
compared across methods. Additionally, enumerators were asked to self-report how long interviews took
with each method. To provide additional context for these measurements of individual interview length,
we also compared the number of households each enumerator visited per day with each method.
Covariates
Enumerator characteristics, including sex, age, education level, and occupation were self-reported. These
were collected at the time of the enumerator exit interview. This method is described further in the
Process Evaluation section.
Statistical Methods
All outcomes were compared using a multilevel mixed model. A mixed model was chosen because it allows
for both random and fixed effects in a repeated-measures design and, unlike analysis of variance (ANOVA)
techniques, accounts for different numbers of measures for each enumerator and for each method.
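A model of this form can be sketched with simulated data. The tooling here is an assumption (the report's analyses were run in Stata 13; this sketch uses Python's statsmodels instead), and the data are simulated purely for illustration. The formula mirrors the fixed effects reported later in Table 2 – method, assignment order, and their interaction – with a random intercept per enumerator.

```python
# Sketch of a multilevel mixed model of the kind described above,
# fit with statsmodels (an assumption; the report used Stata 13).
# Fixed effects: method, assignment order, and their interaction;
# random intercept per enumerator. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for enum_id in range(19):                  # 19 enumerators
    skill = rng.normal(0, 2)               # enumerator random effect
    mtech_first = int(enum_id < 10)        # 10 assigned m-tech first
    for mtech in (0, 1):
        for _ in range(10):                # ~10 recordings per method
            errors = 16 + skill - 4 * mtech + rng.normal(0, 4)
            rows.append({"enumerator": enum_id, "mtech": mtech,
                         "mtech_first": mtech_first,
                         "errors": max(errors, 0.0)})
df = pd.DataFrame(rows)

# Random-intercept model, grouped by enumerator
model = smf.mixedlm("errors ~ mtech * mtech_first", df,
                    groups=df["enumerator"])
result = model.fit()
print(result.summary())
```

Grouping by enumerator is what lets the model separate an enumerator's inherent skill (the random intercept) from the effect of the method itself, which is the point of the cross-over design.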
The primary analysis used the data captured at the time of the interview for both methods – as they are
entered on the tablet and as they are written on the paper. However, the paper process allows for some
correction after data collection in both the QA check and when entering the data (though data entry may
itself introduce error). To account for this, a sensitivity analysis was conducted in which error rates
for paper were calculated by comparing data from recordings to data that would be available to the end
user in the MIS.
21 Up to ten photos were randomly selected per enumerator from the photos captured before corrections were made at the district. Three enumerators had fewer than ten photos captured at this stage, so the maximum number of photos available for those enumerators were captured. For ten enumerators, additional photos were entered on top of the randomly selected photos, because these matched to recordings. These additional photos were also included in the analysis.
All statistical analyses were conducted using Stata version 13 (StataCorp LP, College Station, Texas).
Statistical significance was set at p-value < 0.05.
Results
The 19 enumerators in Shiwa N’gandu collectively enumerated 471 households with m-tech and 477
households with paper.22 Nine enumerators were randomly assigned to use paper first and ten were
assigned to use m-tech first. These enumerators were balanced across gender, age, education level,
occupation, prior use of a smartphone or tablet, and prior data collection experience. A more detailed
description of enumerator characteristics and balance checks is included in Annex B: Enumerator
Characteristics.
Questions Omitted from the Survey
We analyzed 239 recordings – 98 for interviews enumerated with paper and 141 for interviews
enumerated with m-tech. Of the 19 enumerators, 13 (68.4%) had at least one recording for both
methods, while four (21.1%) had recordings only for m-tech. Two enumerators (10.5%)
had no recordings for either method.
On average, 16.3 (SD = 11.2) of the survey's 160 questions were omitted when paper was used,
compared to an average of 12.6 (SD = 10.0) questions with m-tech (Table 1). The omission of
questions that produce information used for the PMT calculation was rare and occurred at similar
frequencies between methods.
Table 1. Omission Rates with Paper and M-Tech Enumeration Methods
Type of Question Omission     Paper                 M-tech
                              N    Mean (SD)        N    Mean (SD)
Total questions omitted       98   16.31 (11.16)    141  12.57 (10.01)
PMT questions omitted         98   0.55 (1.00)      141  0.56 (1.38)
The multilevel mixed model showed that enumerators omitted statistically significantly fewer questions
when working with m-tech than when working with paper (p-value < 0.01) (Table 2). There was no
statistically significant difference between the number of PMT-relevant questions skipped with paper and
m-tech enumeration.
22 These estimates are based on the number of records in the CommCare and MIS databases.
Table 2. Mixed Model Results of Effect of Enumeration Method on Omitted Questions
                                       Total Omitted Questions             Omitted PMT Questions
                                       Coeff   95% CI           P-value    Coeff   95% CI          P-value
M-tech first                           -8.54   [-15.04, -2.04]  0.01       -0.65   [-1.45, 0.15]   0.11
Use of m-tech                          -7.02   [-10.11, -3.93]  <0.01      -0.20   [-0.59, 0.18]   0.30
Interaction of assignment and method    8.34   [3.22, 13.47]    <0.01       0.41   [-0.23, 1.04]   0.21
Constant                               20.63   [16.57, 24.69]   <0.01       0.94   [0.44, 1.44]    <0.01
Obs                                    N = 239                              N = 239
Since the m-tech application requires enumerators to ask questions in a specific order, it follows that
they would ask more questions aloud. The paper form, by contrast, allows the enumerator to skip between
questions on the form and to group related questions together. For example, an enumerator may ask
whether the household owns any livestock and, when the respondent says, "Yes, we have three chickens,"
fill in every response in the livestock section at once. With m-tech, on the other hand, the enumerator
would have to recall the response while swiping through the individual questions.
Recording Error Rates
Enumerators made recording errors (errors in accurately capturing the beneficiary’s response on the
tablet or paper) at similar rates with paper and m-tech enumeration (Table 3). Paper produced a recording
error for an average of 11% (SD = 15%) of the questions where the answer could be heard on the recording
compared to 7% (SD = 10%) of questions with m-tech. (Differences in the number of questions heard on
the recordings are addressed in the previous section.)
Table 3. Error Rates with Paper and M-Tech Enumeration Methods
Type of Recording Error                       Paper               M-tech
                                              N    Mean (SD)      N    Mean (SD)
Error rate                                    98   0.11 (0.15)    141  0.07 (0.10)
Difference in number of household members1    98   -0.48 (1.18)   141  -0.30 (1.81)
Error with PMT questions                      98   0.08 (0.18)    133  0.07 (0.12)
Notes: 1Difference was calculated as the number of household members on the recording minus the number for which there were data on paper or on m-tech.
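A minimal sketch of the per-interview error rate used here, assuming answers are keyed by question identifiers (the names and data below are ours, for illustration only): of the questions whose answer is audible on the recording, the error rate is the share whose captured value (on paper or on the tablet) disagrees with what the respondent said.

```python
def recording_error_rate(recording_answers, captured_answers):
    """recording_answers: question -> answer heard on the audio (None if
    inaudible or omitted); captured_answers: question -> value recorded
    on paper or on the tablet."""
    heard = {q: a for q, a in recording_answers.items() if a is not None}
    if not heard:
        return None  # no audible answers to compare against
    errors = sum(1 for q, a in heard.items() if captured_answers.get(q) != a)
    return errors / len(heard)

rec = {"livestock": "3 chickens", "roof": "thatch", "radio": None}
cap = {"livestock": "3 chickens", "roof": "iron"}
print(recording_error_rate(rec, cap))  # 1 error out of 2 audible answers -> 0.5
```

Averaging this rate over an enumerator's interviews gives the per-enumerator figures that feed the mixed model.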
According to the mixed model which adjusted for which method was used first, the overall error rate was
3.0 percentage points lower with m-tech compared to paper, a difference that was not statistically
significant (p-value = 0.11) (Table 4).
Table 4. Mixed Model Results of Effect of Enumeration Method on Error Rates
                                       Total Error Rate                   Difference in # HH Members         PMT Error Rate
                                       Coeff   95% CI         P-value     Coeff   95% CI          P-value    Coeff   95% CI         P-value
M-tech first                           -0.01   [-0.07, 0.04]  0.63        -0.00   [-0.72, 0.72]   1.00        0.04   [-0.03, 0.10]  0.27
Use of m-tech                          -0.03   [-0.07, 0.01]  0.11         0.04   [-0.50, 0.57]   0.89       -0.01   [-0.06, 0.04]  0.77
Interaction of assignment and method    0.00   [-0.06, 0.07]  0.95         0.22   [-0.65, 1.09]   0.62       -0.02   [-0.10, 0.06]  0.62
Constant                                0.11   [0.08, 0.14]   <0.01       -0.47   [-0.87, -0.07]  0.02        0.07   [0.04, 0.11]   <0.01
Obs                                    N = 239                            N = 239                            N = 231
Errors in enumeration are most pertinent if they affect whether a household qualifies for the SCT benefit.
Therefore, we compared the error rates among key eligibility questions.
The number of household members is used to confirm the household’s dependency ratio. The difference
in the number of household members was 0.5 (SD = 1.2) household members lower on the audio recording
than it was on the paper, while the difference was 0.3 (SD = 1.8) household members lower on the
recordings compared to m-tech. The mixed model failed to detect a statistically significant difference in
the number of household members captured according to the enumeration method.
Finally, data from ten questions on the form are used to calculate the PMT, one of the eligibility tests. The
average error rate among questions used to calculate the PMT was around 8% for both methods (Paper:
mean = 8%, SD = 18%; M-tech: mean = 7%, SD = 12%). The mixed model indicated no difference in the
number of household members that were left off or added to the data, nor did it find a difference in the
error rate among variables for the PMT.
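The eligibility-focused checks above can be sketched as follows; the PMT question names and the data are illustrative assumptions of ours, not the actual SCT Form 1 fields (the real form uses ten PMT questions):

```python
# Illustrative subset standing in for the ten PMT questions on the form.
PMT_QUESTIONS = {"roof_material", "toilet_type", "owns_radio"}

def pmt_error_rate(recording, captured):
    """Error rate restricted to the questions feeding the proxy means test,
    among those audible on the recording."""
    heard = {q: a for q, a in recording.items()
             if q in PMT_QUESTIONS and a is not None}
    if not heard:
        return None
    return sum(captured.get(q) != a for q, a in heard.items()) / len(heard)

def member_difference(recording_members, captured_members):
    """Members heard on the recording minus members in the captured data;
    negative means fewer members were heard than were recorded."""
    return len(recording_members) - len(captured_members)

rec = {"roof_material": "thatch", "toilet_type": "pit", "owns_radio": "no"}
cap = {"roof_material": "iron", "toilet_type": "pit", "owns_radio": "no"}
print(pmt_error_rate(rec, cap))  # 1 mismatch out of 3 heard
print(member_difference(["head", "spouse"], ["head", "spouse", "child"]))  # -1
```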
Sensitivity Analyses
Since the paper process allows for some correction after data collection, both in the QA check and
during data entry (though data entry may itself introduce error), we also compared what was said
during the interview to the data that would be available to an end user – that is, as it is in the MIS. For m-
tech, these were still the data as they were entered into the tablet. These results were similar to those
presented in the primary analysis (Table 5).
Table 5. Mixed Model Test of Effect of Enumeration Method on Error Rates Using MIS Data
                                       Total Error Rate                  Difference in # HH Members        PMT Error Rate
                                       Coeff   95% CI         P-value    Coeff   95% CI         P-value    Coeff   95% CI         P-value
M-tech first                            0.00   [-0.04, 0.05]  0.94       -0.22   [-0.90, 0.47]  0.54        0.02   [-0.04, 0.07]  0.58
Use of m-tech                          -0.01   [-0.04, 0.03]  0.76       -0.23   [-0.80, 0.35]  0.44        0.00   [-0.04, 0.04]  0.96
Interaction of assignment and method   -0.02   [-0.06, 0.03]  0.55        0.38   [-0.50, 1.26]  0.40       -0.00   [-0.07, 0.06]  0.96
Constant                                0.09   [0.06, 0.12]   <0.01      -0.11   [-0.51, 0.29]  0.60        0.06   [0.03, 0.10]   <0.01
Obs                                    N = 239                           N = 239                           N = 231
Since differences in the number of household members are such an important component of the interview,
we also redid the analysis counting each missing or extra household member as nine errors, since nine
questions are asked about each member, rather than as one error as in the primary analysis. While this
changed the direction of the coefficient, meaning that the estimated error rate with m-tech was higher
than that of paper [95% CI: -0.06, 0.07], the magnitude was essentially zero, and the difference was not
statistically significant (p-value = 0.90) (Table 6).
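One plausible formalisation of this penalty, as a hedged sketch (the report does not spell out the exact denominator, so the formula below is our assumption): each missing or extra member contributes nine errors before the rate is recomputed.

```python
# Nine roster questions are asked about each household member, so a whole
# missing or extra member is penalised as nine errors (our formalisation).
ROSTER_QUESTIONS_PER_MEMBER = 9

def penalized_error_rate(base_errors, questions_heard, member_diff):
    """base_errors: recording errors counted in the primary analysis;
    member_diff: members on recording minus members in the data."""
    errors = base_errors + abs(member_diff) * ROSTER_QUESTIONS_PER_MEMBER
    return errors / questions_heard

# One member discrepancy adds nine errors: (4 + 9) / 100 = 0.13
print(penalized_error_rate(base_errors=4, questions_heard=100, member_diff=1))
```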
Table 6. Mixed Model Results of Effect of Enumeration Method on Error Rates with Penalty for Missing
Members
                                       Total Error Rate with HH Member Penalty
                                       Coeff   95% CI         P-value
M-tech first                           -0.03   [-0.12, 0.07]  0.56
Use of m-tech                           0.00   [-0.06, 0.07]  0.90
Interaction of assignment and method   -0.02   [-0.13, 0.08]  0.65
Constant                                0.19   [0.13, 0.24]   <0.01
Obs                                    N = 239
Process Error Rates
M-tech and paper-based methods appeared to be vulnerable to different types of process errors. Data
collected with m-tech methods had fewer missing values and fewer incorrect skip patterns compared to
those collected on paper (Table 7). Of the 477 records in the MIS database for Shiwa N’gandu district,
6.7% (N = 31) had at least one missing value, and 11.7% (N = 54) had at least one incorrect skip pattern. In
contrast, of the 7,425 records from the three pilot districts in the CommCare database, 5.9% (N = 436) of
forms had at least one missing value and 8.7% (N = 649) had at least one incorrect skip pattern. This
difference is likely a result of the pre-programmed field requirements and automated skip patterns in the
m-tech application that did not exist on paper. The most significant implication for paper-based data was
that more households that were enumerated with paper were missing an NRC number for the head of
household, the main recipient, or the next of kin.
Table 7. Process Errors by Enumeration Method
Type of Process Error              Paper1             M-tech2
                                   N = 477            N = 7,425
                                   n (%)              n (%)
Missing values
  0                                445 (93.29)        6,989 (94.13)
  1-2                              29 (6.08)          394 (5.31)
  3-4                              3 (0.63)           0 (0.00)
  ≥ 5                              0 (0.00)           42 (0.57)
Skip patterns
  0                                421 (88.26)        6,776 (91.26)
  1-2                              44 (9.22)          646 (8.70)
  3-4                              10 (2.10)          3 (0.04)
  ≥ 5                              2 (0.42)           0 (0.00)
Out of range values
  0                                477 (100.00)       7,318 (98.56)
  1-2                              0 (0.00)           82 (1.10)
  3-4                              0 (0.00)           22 (0.30)
  ≥ 5                              0 (0.00)           3 (0.04)
Illogical values
  0                                452 (94.76)        5,854 (78.84)
  1-2                              18 (3.77)          1,539 (20.73)
  3-4                              5 (1.05)           23 (0.31)
  ≥ 5                              2 (0.42)           9 (0.12)
Head of household
  No head3                         0 (0.00)           1,393 (18.76)
  1 head                           477 (100.00)       5,912 (79.62)
  > 1 head3                        0 (0.00)           120 (1.62)
No family members4                 0 (0.00)           338 (4.55)
Missing NRC number4                14 (2.94)          68 (0.92)
Notes: 1As the data appeared in the MIS. 2Includes data from all three m-tech districts. 3Also included as a missing value above. 4Also included as an illogical value above.
However, m-tech data were more vulnerable to out-of-range and illogical values. Nearly all of these
errors were a consequence of separating the data collected on household characteristics and the data
collected on household members into two forms (challenges with the design of this section of the
application are discussed in Figure 3). Of the 7,425 CommCare households that were enumerated in the three pilot
districts, 338 (4.6%) were missing family members. This could be a result of enumerators linking
household members to the wrong household or forgetting to collect these data altogether. Furthermore,
there were 22 household members that did not link to any household, and 758 household members that
linked to 189 household records that contained member information but no data
on household characteristics. Finally, 1,513 households in the m-tech data (20.4%) either had no
household head (N=1393, 18.8%) or multiple household heads (N = 120, 1.62%). None of the records in
the MIS data had households with no family members, family members linking to households with no
household characteristics, or households where no member was listed as the household head.
Finally, the process errors for paper were calculated based on data in the MIS rather than data on the
paper forms. Missing values, incorrect skip patterns, and illogical values on the paper forms may have
been corrected by the data enterers as forms were being entered in the MIS, but since these corrections
were made away from the field, it is possible that these fixes may have resulted in incorrect information
being entered.
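The linking checks described above can be sketched as a merge between the two form types; the table and field names below are our own illustrative stand-ins for the CommCare data structure:

```python
import pandas as pd

# Hypothetical household-characteristics and household-member records,
# mimicking the two-form m-tech design discussed in the text.
households = pd.DataFrame({"hh_id": [1, 2, 3]})
members = pd.DataFrame({
    "hh_id":   [1, 1, 2, 2, 9],                # the member of hh 9 links nowhere
    "is_head": [True, False, True, True, True],
})

# An outer merge flags households with no members (left_only) and members
# with no household-characteristics record (right_only).
merged = households.merge(members, on="hh_id", how="outer", indicator=True)
no_members = merged.loc[merged["_merge"] == "left_only", "hh_id"]
orphan_members = merged.loc[merged["_merge"] == "right_only", "hh_id"]

# Households with zero or multiple members flagged as household head.
heads = members.groupby("hh_id")["is_head"].sum()
multi_head = heads[heads > 1].index.tolist()

print(list(no_members), list(orphan_members), multi_head)
```

Checks like these would catch the no-member, unlinked-member, and multiple-head cases described above before the data reach the eligibility calculation.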
Figure 3. Differences in Design of the Household Member Section on Paper Survey and M-tech Application

PAPER SURVEY: Household characteristics and household roster questions are embedded in the same form.

M-TECH APPLICATION: Open and fill the household characteristics form; save and exit; open and fill a Household Member Form; save and exit; repeat for each household member.

The overall m-tech process was designed to mirror the paper process, and all of the components of the paper
form are reflected in the m-tech application, but the separation of the household roster and household
characteristics sections of the form in the m-tech application was a critical difference. As a result,
enumerators needed to start a new form for each household member, which made the interviews longer, and
likely caused errors and inconsistencies, including households with no members, households with no
household head, and households with members but no household characteristics form.
Data Entry Errors
In the m-tech system, computerized uploading of data eliminates entry error.23 Our analysis found an
average of 10.3 differences (SD = 9.7) between the photos taken immediately before forms were sent to
Lusaka and the MIS data (Table 8). Some of these differences were changes made to existing values, which
we assume were data entry error, while others were variables that were blank on paper forms and were
filled in by CSO data enterers. On average, 6.9 (SD = 8.2) values were filled in or removed by CSO data
enterers, and 3.3 (SD = 3.2) data entry errors were made per form.
While we assume that any changes made to existing values are data entry error, it is also possible that
these changes were made intentionally by CSO officers. The data entry client has drop down menus for
geography and some codes that would prevent incorrect or illogical answers. Data enterers may also edit
illogical answers themselves. While these checks and changes would produce data that are more
uniformly coded, since these changes are often not verified in the field, it is not possible to determine if
they are accurate.
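The photo-to-MIS comparison described above can be sketched as follows; the field names and values are our own illustrative assumptions, not fields from the actual SCT Form 1:

```python
def classify_differences(photo, mis):
    """Compare a form as photographed with its MIS record. A value present
    in one source but blank in the other counts as a fill-in or removal;
    two differing non-blank values count as a change (a possible data
    entry error)."""
    filled_or_removed, changed = 0, 0
    for field in photo.keys() | mis.keys():
        p, m = photo.get(field), mis.get(field)
        if p == m:
            continue
        if p is None or m is None:
            filled_or_removed += 1  # blank on the form, filled by data enterer (or vice versa)
        else:
            changed += 1            # existing value altered during entry
    return filled_or_removed, changed

photo = {"village": "Example Village", "nrc": None, "age": 63}
mis   = {"village": "Example Village", "nrc": "filled in later", "age": 36}
print(classify_differences(photo, mis))  # (1, 1)
```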
Table 8. Error Rates in Data Entry
Page of Form              Filling in / removing values   Changing existing values   Total changes     P-value1
                          Mean (SD)                      Mean (SD)                  Mean (SD)
Page 1 variables          2.49 (0.77)                    2.11 (1.47)                4.60 (1.55)       <0.01
Page 2 and 3 variables2   2.62 (6.26)                    n/a                        2.62 (6.26)       <0.01
Page 4 variables          1.83 (3.76)                    1.21 (2.28)                3.04 (4.55)       <0.01
Total                     6.93 (8.15)                    3.33 (3.22)                10.26 (9.62)      <0.01
Notes: N = 214 forms. 1The p-value was calculated using Student's t-test of whether the mean total changes differed significantly from 0. 2The number of changes to existing values on pages 2 and 3 was not calculated, since the order in the MIS may differ from the order on the photo.
Differences in Enumeration Speed between M-Tech and Paper
We examined the length of interviews, as captured by the length of recordings, the number of households
enumerated per day, and the self-reported average interview length. All three of these analyses
indicate that meaningful differences in the duration of the interview are unlikely.
Interview Length
According to the interview recordings, enumerators took an average of 18.2 minutes (SD = 6.0) to
enumerate a household with paper and 22.2 minutes (SD = 8.7) to enumerate a household with m-tech.
23 During the pilot, the evaluation team observed one instance when all of one enumerator’s records failed to upload to the server and cannot be found in CommCare. Further analysis is needed to determine if any paper forms were lost during the paper process.
Table 9. Average Duration of Interviews with Paper and M-tech
Method    N     Mean (SD)
Paper     98    18.17 (6.04)
M-tech    137   22.16 (8.66)
Notes: Interview length in minutes, calculated by subtracting the interview start time on the recording from the interview end time on the recording.
According to the mixed model, adjusting for which method was used first, interviews conducted with m-
tech were an average of 2.8 minutes longer than interviews conducted with paper (Table 10). This
difference was statistically significant (p-value = 0.01). While this indicates that interviews conducted with
m-tech took slightly longer than interviews conducted with paper, the magnitude of the time difference
is not enough to affect enumerators’ overall work schedules. The number of households an enumerator
can complete in a day is largely determined by the distances between households, which will not be
affected by small time differences between the two methods.
Table 10. Mixed Model Results of Difference in Interview Length between Paper and M-tech
                                       Interview Length
                                       Coeff   95% CI          P-value
M-tech first                           3.4     [-1.9, 8.7]     0.2
Use of m-tech                          2.83    [0.69, 4.97]    0.01
Interaction of assignment and method   0.9     [-2.68, 4.48]   0.62
Constant                               17.58   [14.17, 20.99]  <0.01
Obs                                    N = 235
The slightly longer duration of m-tech interviews may be because some of the most time-consuming
parts of the paper form, such as filling in the geographic and enumerator information on page 1,
can be completed before or after the interview. In this case, our measurement may overestimate
the time difference between paper and m-tech enumeration, since fields of the paper form completed
before or after the interview are not captured by the length of the recording. The longer duration may
also result from the technical challenges and slow loading times experienced with the m-tech application,
which typically occur when the application is unable to sync and too many forms are stored on the device.
These challenges occurred frequently and are discussed more in the process evaluation section. Finally,
the evaluation team noted that some of the m-tech recordings did not include the household roster
section of the interview, either because enumerators neglected to collect this information or because
they turned the recorder off for this section of the interview. Since this problem was only noted in m-tech
interviews, it indicates that there may be systematic underestimation of the length of m-tech interviews.
This could mean that the true difference is greater than that which we estimated.
Enumeration in Shiwa N’gandu occurred over a period of 25 days. Of the 19 enumerators, 52% had no
previous experience using smartphones or tablets. Some enumerators enumerated for only five
days, and the shortest time an enumerator spent using a tablet was one day. Therefore, the
average time spent enumerating with the tablet during the evaluation may be higher than it would be
after the enumerator becomes more familiar with the tablet. It is possible, therefore, that the difference
in interview duration would dissipate once the enumerators had ample time to familiarize themselves
with the tablet.
Daily Caseloads
On average, enumerators in Shiwa N’gandu completed about eight household interviews per day with
paper and about seven per day with m-tech. We found no statistically significant difference in the
numbers of households visited per day between those enumerated with m-tech and those enumerated
with paper (data not shown). This is likely because distances between households are the main
constraint on the daily workload. Additionally, delays caused by software issues with the m-tech
application may have negated any of the interview time savings usually associated with digital
enumeration. This is discussed further in the process evaluation findings.
Enumerator Perceptions
We also asked enumerators to self-report how long the interviews took with each method. Despite
enumerators’ perceptions that m-tech enumeration is faster (70% of enumerators said m-tech was faster),
there was little difference in their self-reported time for each method. Enumerators who used paper at
some point24 during the enumeration process estimated that it took an average of 23 minutes (SD 11.3)
to enumerate with paper, while those who used m-tech25 estimated that it took an average of 22 minutes
(SD 11.9) to enumerate with m-tech. In Shiwa N’gandu, the district in which enumerators experienced
both methods, enumerators reported that m-tech took 1.0 (SD 2.1) minutes less than the duration that
they reported with paper (p-value = 0.66).
Limitations of Impact Evaluation
Sample size was a critical limitation faced in this study. The evaluation team determined that a sample of
20 m-tech and 20 paper households per enumerator would provide sufficient power to detect a
meaningful difference in error rates26 and was told that about 3,200 households, or approximately 168
per enumerator, would be enumerated in Shiwa N’gandu. However, when the evaluation team arrived in
the field, they learned that only 1,056 households qualified for enumeration in Shiwa N’gandu. As a result,
enumerators’ total caseloads varied between 33 and 206 households (mean = 109.4, SD = 41.2), with the
number of households in each enumerator’s caseload for a given method varying between 6 and 75
(mean = 27.8, SD = 16.5).
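As a hedged illustration of how such a target might be examined, the snippet below computes the power of a simple two-sample t-test with 20 observations per arm. This is a simplified analogue only: the effect size of 0.9 is a hypothetical value of ours, and the actual a priori calculation (which must account for interviews being clustered within enumerators, reducing effective power) is not reproduced in the report.

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sided, two-sample t-test at alpha = 0.05 with 20
# observations per method, for an assumed standardized effect size of 0.9.
# Illustrative only; the report's own power calculation is not shown.
power = TTestIndPower().solve_power(effect_size=0.9, nobs1=20, alpha=0.05)
print(round(power, 2))
```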
24 Included enumerators in Shiwa N’gandu and Serenje.
25 Included enumerators in Shiwa N’gandu and Mungwi.
26 A priori power calculations determined that 20 households per method per enumerator would be sufficient. However, because including additional households would not pose any additional burden to the participants, we initially planned to aim for 40 households per method per enumerator.
Compounding the sample size challenges were challenges with recordings. Many enumerators struggled
to comply with the study protocol, either because the household refused to be recorded or because they
forgot (or declined) to use the recorder. With such a condensed enumeration schedule, there was little
room for correction, as can usually happen at the beginning of field operations. As a result, the number of
recordings available with each method varied between two and 14 (mean = 8, SD = 3.4)27 for each
enumerator, and five of the 19 enumerators (26.3%) in the district did not have any recordings for one or
both methods. The small number and variation in the number of recordings captured with each method
for each enumerator reduces the precision of our estimate of error rate with each method and the overall
power of the study.
Additionally, though the study team emphasized that the purpose of the recordings was not to evaluate
enumerator performance, enumerators were aware that they were being recorded and may have
modified their behavior as a result or only recorded interviews when they thought they were doing a good
job. Since it is most likely that enumerators would attempt to improve their enumeration practices while
using the recorder, we can assume that any Hawthorne effects28 would result in an underestimation of
error rates. On the other hand, enumerators may have been slower and more deliberate in their
enumeration when being recorded, which could cause us to overestimate the amount of time needed for
an interview. We would not expect either of these effects to vary systematically between paper and m-
tech.
Another limitation is related to the photographs used to calculate error rates. While every effort was
made to photograph each enumerator’s paper forms as soon as possible after enumeration and again
immediately before the forms were sent to Lusaka, logistical constraints meant that there were some
forms that the evaluation team was not able to capture immediately after enumeration. This reduced
the overall sample of forms to be drawn from to calculate changes between enumeration and data entry.
While the evaluation team does not believe that there would be any correlation between the forms that
were not photographed in the field and the number of changes made at the district, there is a possibility
that excluding these forms introduced some bias.
A further limitation of the impact evaluation was tied to the study setting. Shiwa N’gandu district is among
the newest districts in Zambia, separated from Chinsali in 2012. The District Social Welfare Office and
other government institutions were still being established, which may have impacted enumeration. For
example, MCDMCH recommends that districts employ officers from other ministries as enumerators
rather than teachers. However, because of the small government presence in Shiwa N’gandu, the DSWOs
there continue to rely on many teachers to serve as enumerators. This may have affected the external
validity of this component of the evaluation.
27 Because of initial attempts to meet the original sample size, more than 10 recordings were transcribed per enumerator method for some enumerators.
28 A Hawthorne effect is a modification of one’s behavior in response to the knowledge that one is being observed.
Finally, all data collected by the study team from recordings and from photographs were treated as the
true values when comparing with data from the MIS and CommCare databases. While all data were
double-entered and reconciled, error could still occur.
Conclusions of Impact Evaluation
Overall, our analysis found that m-tech enumeration resulted in a statistically significantly lower number
of questions being omitted at the time of the interview. This suggests that enumerators are better at
adhering to the questions being asked since they are programmed to appear in a specific order.
Additionally, preprogrammed required fields and skip patterns resulted in fewer missing responses.
M-tech also eliminated the need for data entry, which introduced significant error in the paper-based
process. However, m-tech enumeration was as vulnerable to recording errors (discrepancies between the
information on recordings of interviews and the data noted on paper forms or in the CommCare database)
as paper enumeration was. This was the case across SCT Form 1 questions and when restricting the
analysis to key eligibility questions.
The results underscore the errors introduced by a design flaw in the application, which separated the
entry of household characteristics from the entry of household member data. This likely resulted in
failures to collect one of these components or to link household members to the correct household,
which will likely lead to inaccurate targeting, since the household roster and key demographic
information are closely tied to eligibility.
These results suggest that features of electronic data collection can reduce or eliminate certain types of
error that are common with paper-based systems. However, electronic data collection alone, without the
use of effective programming solutions to prevent errors and inconsistencies, is not enough to ensure
accurate data.
PROCESS EVALUATION
Description of Paper and M-tech Enumeration & Registration Process
At the request of MCDMCH and CPs, the software developer designed the m-tech system to mirror the
paper system in most respects. Figure 4 outlines both processes as they were designed.
KEY FINDINGS
1. As currently designed, both the m-tech and paper-based processes are vulnerable to error and delays.
2. The paper enumeration process was more labor intensive than the m-tech process, particularly for District Social Welfare Officers and quality assurance teams.
3. Both the m-tech and paper processes are vulnerable to errors in which households are incorrectly included in and excluded from enumeration.
4. Most enumerators reported that the tablet was easy to use and that they preferred it to paper. The main disadvantages cited were technical challenges such as mobile network coverage and charging.
5. Two components of the m-tech process introduced error and caused significant delays:
   - Network and electricity issues.
   - Uploading data to the MIS, which was delayed by technical challenges, communication challenges, and an unclear definition of roles and responsibilities.
6. Enumerators used a wide range of phrases to translate questions into local languages, and many phrases they used were inaccurate, ambiguous, or leading. Both m-tech and paper enumeration were vulnerable to this challenge, and to other deviations from enumeration protocols.
7. Fixed and variable cost differences of m-tech versus paper enumeration are negligible relative to the overall costs of enumeration.
Figure 4. Design of Paper and M-Tech Processes

PAPER PROCESS: Training (enumerators: enumeration protocols, SCT Form 1 components, mechanics of paper Form 1; CWACs and ACC: identification, CWAC Form 2) → Identification (CWAC Form 2): completion of CWAC Form 2 on paper, entry by DSWO in a Microsoft Excel template, automatic calculation of dependency ratio → Enumeration (SCT Form 1): household interviews with paper SCT Form 1, arrival of forms at the District Office at the end of enumeration → Quality Check: review of forms for missing data and errors, error corrections, preparation of forms to send to Lusaka → Transfer to MIS: data entry into the MIS by CSO staff → SCT Program Enrollment: potential eligibility list generated, community approval of eligibility lists, enrollment of households and initiation of SCT payments.

M-TECH PROCESS: Training (enumerators: enumeration protocols, SCT Form 1 components, use of tablet, mechanics of m-tech Form 1; CWACs and ACC: identification, CWAC Form 2) → Identification (CWAC Form 2): completion of CWAC Form 2 on paper, entry by enumerators on tablet, automatic calculation of dependency ratio → Enumeration & Quality Check: household interviews with electronic SCT Form 1, syncing of tablet, some built-in quality checks, review of forms for missing data and errors on the QA app, error corrections → Transfer to MIS: upload of data to MIS → SCT Program Enrollment: potential eligibility list generated, community approval of eligibility lists, enrollment of households and initiation of SCT payments.

(In the original figure, blue text highlights the differences between paper and m-tech at each step of the process.)

There were four primary differences between the paper and m-tech processes:
1. Method of CWAC Form 2 Entry: In the paper system, data from CWAC Form 2 are entered by
district officials into an Excel template, which calculates the dependency ratio. Enumerators are
then given printouts of these Excel spreadsheets and enumerate the households on the list with
dependency ratios greater than three or with no household members who are able to work. In
the m-tech process, each enumerator enters the CWAC Form 2 data from the CWACs they have
been assigned into the CommCare application, which then automatically generates an
enumeration list on the tablet. Only households that qualify for enumeration are displayed on the
tablet.29
2. Data quality checks: Many data quality checks are automatically embedded into the m-tech
enumeration process due to pre-programming of the electronic form.
3. Discrete versus continuous sequencing: In the paper process, enumeration, quality checks, and
data entry occur in relatively discrete steps. DSWOs review forms only when they have a critical
mass of them, and data entry for a district does not begin until all of its forms have arrived in Lusaka. With m-tech, the QA
application facilitates real-time review and feedback that can occur in parallel with field activities.
The database is also automatically updated each time the tablets sync to the server.
4. Transfer to the MIS: The m-tech process eliminates the need for data entry into the MIS. As the
system is currently designed, several steps are still required to transfer data from the CommCare
database to the MIS, including downloading the data from CommCare, manually transforming the
data so that they are in a format that the MIS can read, and uploading the data to the MIS.
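The qualification rule in difference 1 can be sketched as follows. This is a hedged illustration: the report states only that households qualify if the dependency ratio exceeds three or if no member is able to work, so the assumption that the ratio is dependents per fit-to-work member is ours.

```python
def qualifies(n_members, n_fit_to_work):
    """Qualification rule from CWAC Form 2, as described in the text:
    dependency ratio greater than three, or no member able to work.
    Assumes (our assumption) ratio = dependents per fit-to-work member."""
    if n_fit_to_work == 0:
        return True  # no household member is able to work
    dependents = n_members - n_fit_to_work
    return dependents / n_fit_to_work > 3

print(qualifies(n_members=5, n_fit_to_work=1))  # 4 dependents per worker -> True
print(qualifies(n_members=4, n_fit_to_work=2))  # ratio 1.0 -> False
```

Under the paper process this calculation lives in the Excel template; under m-tech, CommCare applies it automatically and displays only qualifying households.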
The findings of the process evaluations are organized around this process with a focus on comparing the
two methods as they occurred for each step, as well as understanding the challenges that emerged.
Evaluation Design
The process evaluations used field observations and interviews to collect quantitative and qualitative
data. The m-tech process evaluation used information from Mungwi and Shiwa N’gandu districts, and the
paper process evaluation used information from Serenje and Shiwa N’gandu districts.
Additionally, the photographs of the original CWAC Form 2s completed by CWACs were gathered and
compared to information entered into either CommCare or the Microsoft Excel template to measure any
errors introduced at this stage. Information from these lists was also used to determine if the household
should have qualified for enumeration. Households that were enumerated with SCT Form 1s were
reviewed and compared to the households that should have qualified based on CWAC Form 2 to
determine if any households were included or omitted from the enumeration erroneously.
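The inclusion/exclusion check described above amounts to a set comparison; the household identifiers below are illustrative placeholders of ours:

```python
# Households that should have qualified per CWAC Form 2 vs. households
# actually enumerated per SCT Form 1 (illustrative IDs).
should_qualify = {"HH-001", "HH-002", "HH-003"}
enumerated     = {"HH-002", "HH-003", "HH-004"}

wrongly_excluded = should_qualify - enumerated  # qualified but never visited
wrongly_included = enumerated - should_qualify  # visited but did not qualify
print(sorted(wrongly_excluded), sorted(wrongly_included))
```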
Finally, the audio recordings of interviews with potential beneficiaries were analyzed to measure
variations in the local language translations used during the interviews, as well as problematic interview
practices.
29 In Shiwa N’gandu, the m-tech CWAC Form 2 system was used for entry of both paper and m-tech CWACs. The evaluation team then produced a hard copy list of the households that each enumerator needed to enumerate with paper, while the m-tech application displayed only qualifying households that should be enumerated with m-tech.
Data Collection
Data collection activities in Mungwi and Serenje Districts were designed to minimize influence on the
enumeration process. The evaluation team was mainly present before and after enumeration, and little
to no data were collected in these districts during the enumeration process.
Mungwi District
The evaluation team was present for the enumerator training on m-tech to observe what was
covered in the training, how training was conducted, and to complete the Mobile Challenges
Survey with enumerators (described below).
The evaluation team visited Mungwi District twice during enumeration and accompanied district
and provincial officers on field visits with enumerators.
At the end of enumeration, the evaluation team returned to Mungwi to conduct exit interviews
with enumerators, conduct a focus group discussion with CWAC members, and photograph
forms.
Serenje District
No training took place in Serenje immediately prior to the start of the enumeration since
enumerators had been trained on enumeration with the paper Form 1 in July 2014.
The evaluation team visited Serenje three times to interview district officials, accompany
officials on a monitoring visit to beneficiary households, and photograph forms.
During the last visit to Serenje, the team conducted exit interviews with enumerators and a
focus group discussion with CWAC members. The team also took photographs of completed SCT
Form 1s from Serenje in Lusaka, as they were being delivered to MCDMCH.
Shiwa N’gandu District30
The evaluation team was in Shiwa N’gandu for the full duration of m-tech training and
enumeration.
The evaluation team accompanied enumerators on visits to beneficiary households,
conducted interviews with all enumerators and Social Welfare Department district officials,
and conducted focus group discussions with three CWACs.
30 Data from Shiwa N’gandu are also included in the process evaluation. Though some aspects of the enumeration process were affected by the impact evaluation, many aspects were not. Additionally, the evaluation team’s more extensive presence in Shiwa N’gandu during the enumeration offered important insights for the process evaluation. Finally, the recordings of interviews used for the impact evaluation were also used to conduct the transcript analysis.
Data Sources
Mobile Experience Survey
The Mobile Experience survey was administered to enumerators in Shiwa N’gandu and Mungwi to assess
their facility with the tablet and familiarity with the CommCare Enumeration Application. The survey was
part of the suite of forms in the Supervisor Application and was initially designed as a tool for supervisors
to assess their team members. The survey consisted of 26 skills or tasks that an enumerator should be able to execute on the tablet and in CommCare. Each question was coded as “1” if the enumerator was able to execute the task and “0” if not. The first seven
questions of the survey assessed basic skills with the tablets such as connecting to mobile data and
adjusting the volume. This truncated assessment was administered before the start of training in Shiwa
N’gandu. The remaining questions focused on tasks that were specific to the enumeration application,
such as navigating from the case list to a specific case, moving between questions in the survey, editing a
form, and saving a form to finish later. The full Mobile Experience survey was administered by the
evaluation team to all enumerators at the end of training and at the end of the evaluation in both Mungwi
and Shiwa N’gandu. Results across time were compared to assess how familiarity with the tablet and
application changed over the course of enumeration.
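As a rough illustration of the scoring method described above, the sketch below codes each survey item as 0 or 1, takes the mean per enumerator, and summarizes a round with a mean and standard deviation. The item responses are invented for the example; only the 0/1 coding and the mean/SD summaries follow the report.

```python
from statistics import mean, stdev

def enumerator_score(item_codes):
    """Mean of the 0/1 item codes for one enumerator."""
    return mean(item_codes)

# Hypothetical responses: each inner list is one enumerator's 0/1 codes
# for the survey items (the real survey had 26 items).
responses = [[1, 1, 0, 1], [1, 1, 1, 1], [0, 1, 1, 1]]
scores = [enumerator_score(r) for r in responses]

# Round-level summary, as reported in the text (e.g. "0.64 (SD 0.27)").
print(round(mean(scores), 2), round(stdev(scores), 2))
```

Comparing these summaries across the pre-training, end-of-training, and end-of-enumeration rounds gives the trajectories discussed below.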
Enumerator Exit Interviews
In all three districts, exit interviews were conducted with enumerators after enumeration was complete.
Enumerators were asked questions about their background and prior experience with the SCT program,
data collection activities, experience with both the mobile technology and paper surveys (where
applicable), and field logistics. Thoughts on the supervisory and quality assurance systems were also
collected. Finally, enumerators in Mungwi and Shiwa N’gandu were asked for their thoughts on the m-
tech training, as well as their opinions of and preferences between the m-tech and paper systems.
Mobile Challenges Survey
The Mobile Challenges survey was completed with all enumerators in Shiwa N’gandu and Mungwi to
assess the frequency and magnitude of technological challenges encountered with the tablets during
enumeration. The evaluation team completed this survey at least once with each enumerator.31
Interviews with Potential Beneficiaries
A small number of potential beneficiaries in Shiwa N’gandu were interviewed to learn their impressions
of m-tech and paper enumeration and of the general enumeration process. At the end of the enumeration
visit, evaluation staff asked the potential beneficiary to participate in a short interview. The brief
qualitative questionnaire administered to the beneficiaries covered topics including their impression of
the enumerator, of the enumeration method, and of questions that were asked on the survey. It also
asked about their knowledge of the SCT program and their expectations for the outcome of the
enumeration visit. Enumerators and CWAC members were not present for this interview.
31 The evaluation team had planned to conduct this survey over the phone on a weekly basis, but because of poor network in the enumeration areas, some enumerators were only reached at the end of the enumeration process.
CWAC Focus Group Discussions
Five focus group discussions with CWAC members were completed. Three discussions were conducted in
Shiwa N’gandu, one in Mungwi, and one in Serenje. The objective was to learn more about the CWAC’s
role in the enumeration process, members’ perceptions of the enumeration process, and their thoughts
on paper and m-tech enumeration methods. The evaluation team purposefully included CWACs that were
neither in the immediate vicinity of the District Social Welfare Office nor extremely remote. The DSWOs
introduced the evaluation team to the CWAC leadership and helped schedule the discussions but were
not present for the discussions themselves.
CWAC Form 2
Data from original CWAC Form 2s were compared to Form 2 data stored in computers after entry to
measure any data entry errors introduced at this stage. In all districts, photographs of the original CWAC
Form 2s were double-entered by the study team using EpiData. In Serenje, these data were then
compared to data from photocopies of the original Excel entry of the CWAC Form 2. In Shiwa N’gandu
and Mungwi, the data from the original CWAC Form 2s were compared to CWAC Form 2 data that were
downloaded from CommCare. Two versions of CommCare data were downloaded: one that reflected the
data immediately after they were entered and another version that incorporated some changes and
corrections made by enumerators after the initial entry and reflected the final dataset.
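The field-by-field comparison underlying this check can be sketched as follows: for each household, count the demographic fields on which the re-entered original Form 2 disagrees with the recorded (Excel or CommCare) version. The field names below are hypothetical stand-ins, not the actual Form 2 variable names.

```python
# Hypothetical demographic fields compared between the two entries.
FIELDS = ["children_under_19", "fit_for_work_adults", "elderly_members"]

def count_entry_errors(original, recorded):
    """Number of fields whose values disagree between the double-entered
    original Form 2 and the recorded (Excel/CommCare) version."""
    return sum(1 for f in FIELDS if original.get(f) != recorded.get(f))

original = {"children_under_19": 4, "fit_for_work_adults": 2, "elderly_members": 1}
recorded = {"children_under_19": 4, "fit_for_work_adults": 1, "elderly_members": 1}
print(count_entry_errors(original, recorded))  # 1 field mis-entered
```

Tallying this count per household produces a distribution like the one reported in Table 11.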
SCT Form 1
The evaluation team collected SCT Form 1 data from the MIS for the Shiwa N’gandu households that were
enumerated with paper, from CommCare for Mungwi and Shiwa N’gandu households that were
enumerated with m-tech, and photographs of completed SCT Form 1s from Serenje.32 The households in
these datasets were matched to data from CWAC Form 2 to determine if any households were
erroneously included or excluded from the enumeration exercise.
In Shiwa N’gandu, photos of SCT Form 1 taken immediately after enumeration and immediately before
forms were sent to Lusaka for data entry were compared to each other to estimate the number of changes
and corrections made by district staff and quality assurance personnel during the process. See the impact
evaluation section for more details.
Transcripts of Recordings of Potential Beneficiary Interviews
Transcripts of recordings of interviews with potential beneficiaries, made as part of the impact evaluation
in Shiwa N’gandu, were also used to analyze variations in translation of questions in Bemba, enumerators’
approach to household roster questions, and deviations from enumeration protocols. Bemba is the
predominant local language in Muchinga province, and all interviews in the sample were conducted in
Bemba.
32 At the time of the analysis, MIS data were not available for the households enumerated in Serenje in December and January. Photographs of SCT Form 1 were used instead to determine if a household was in fact enumerated.
Data Bundles Receipt and Usage
Information on when data bundles were received on the tablets and how enumerators used them was
gathered from the short message service (SMS) records on the tablets and the built-in Android
application that tracks data usage.33
Cost Data
The evaluation team gathered enumeration cost data from districts,34 WFP, and UNICEF. These data –
combined with information from local staff, stakeholder mapping activities, and quantitative findings –
were used to estimate the cost implications of m-tech versus paper systems.
Results
Process evaluation results for the paper and m-tech systems are presented for each step in the data
collection process (enumerator training, identification, enumeration, quality check, and transfer to the
MIS). We then present a cost comparison of the two systems.
33 These data were only gathered for the tablets from Shiwa N’gandu district, as the tablets from Mungwi district had not been returned to Lusaka at the time of the completion of this report.
34 Cost data were available from Serenje and Mungwi districts only.
Step 1: Enumerator Training
Many enumerators experienced long delays between paper enumeration training and the beginning of
enumeration, though this delay was due in part to preparations for the m-tech pilot. Paper survey training
for all enumerators occurred in July 2014 in Lusaka.35 Depending on the district, enumeration began
between 2 and 15 weeks after training.36 Many enumerators benefitted from the fact that this was not their
first round of enumeration. While the evaluation team could not examine the degree to which information
from these trainings was retained or lost over time, long lapses between training and the start of
enumeration risk enumerators forgetting some of the information presented.
M-tech training, on the other hand, occurred immediately prior to the start of enumeration in the pilot
districts. The Senior Social Welfare Officer (SSWO) for the Management Information System, WFP, and
Dimagi led a two-day training for supervisors, followed by a three-day training for enumerators. The
supervisor training introduced supervisors to the tablet and the Supervisor Application, and the
enumerator training focused on tablet operations, the solar charger, use of the CommCare Enumeration
Application, and electronic entry of CWAC Form 2.
The results from the Mobile Experience survey revealed that most enumerators had some familiarity with
tablets prior to the start of enumeration (Figure 5). The average score on the abbreviated Mobile
Experience survey, administered before training in Shiwa N’gandu, was 0.64 (SD 0.27), meaning that
enumerators could, on average, complete 64% of the general tablet tasks. Some skills were more familiar
to enumerators than others. Most enumerators could already check the cellular connection and battery
level before the start of training, but they were less familiar with checking the time and airtime,
answering phone calls, and changing the volume. This may be because the icons for checking the
cellular connection and battery are more prominent on the screen than the others.
35 The majority of enumerators in Mungwi and Shiwa N’gandu participated in this training, while only some of the Serenje enumerators attended.
36 The enumeration that began in Serenje in December is not considered part of this round.
[Process diagram: Enumerator Training → Identification (CWAC Form 2) → Enumeration (SCT Form 1) → Quality Check → Transfer to MIS → SCT Program Enrollment]
Figure 5. Average Scores on General Tablet Skills Questions before the Start of Training in Shiwa N’gandu
The m-tech training effectively transmitted skills with the tablet and the CommCare applications, and
these skills were retained throughout the enumeration process (Figure 6). Data from the Mobile
Experience surveys show that the average score on general tablet questions increased from 94% (SD =
13%) at the end of training to 96% (SD = 7%) at the end of enumeration. Likewise, scores on the
CommCare application-specific questions were high both immediately after training, with an average of
96% (SD = 6%), and at the end of enumeration, with an average of 97% (SD = 5%). Some enumerators
continued to struggle with certain CommCare-specific skills after training and enumeration. Specific
skills for which the share of enumerators able to demonstrate the skill was below the overall
“CommCare Skills” mean are noted in Figure 6.
Figure 6. Average Scores after Training and at the End of Enumeration in Mungwi and Shiwa N’gandu

                                          At the End of      At the End of
                                          Training (n=22)    Enumeration (n=28)
General Skills Mean                       94%                96%
CommCare Skills Mean                      96%                97%
Check Number of Unsent Forms              82%                96%
Selects Correct Beneficiary from List     100%               89%
Can Find a Specific Detail in the Case    73%                93%
Can Exit Midway Through Form              91%                86%
Despite the effectiveness of the training in strengthening enumerators’ skills, 67% of enumerators
reported that they thought the training was too short, and several enumerators reported that they would
have benefitted from a greater focus on providing practical experience during the training. One
enumerator said, “They should have spent more time on CommCare because enumerators were still
consulting each other in the field indicating that they were not clear.” Another noted, “Any time there is
an introduction to a new gadget, more time should be spent so everyone is comfortable with it. Things
were not clearly explained - more time was spent on entering CWAC Form 2 than on the training of the
tablet.” Specifically, enumerators reported challenges using the application to add new family members.
Navigating between the “Register a New Household” and “Register a New Household Member” was a
significant challenge and source of error.37 Enumerators suggested that a longer training with a greater
practical focus could help mitigate such issues. One enumerator said, “Practical topics need more time.
There was need to do a pilot to see our weaknesses unlike the way it was.”
37 The impact evaluation found that many illogical and out-of-range values could be traced to confusion in navigating between these two forms. For example, 18.76% households in the CommCare data did not have a household head, and 1.62% had more than one household head. These data are presented in the Impact Evaluation Results section.
Step 2: Identification (CWAC Form 2)
Both the paper and m-tech systems were vulnerable to data entry and identification errors at the CWAC
Form 2 stage.
Data Entry Errors
The process for entering data from CWAC Form 2 differed between the paper and m-tech systems.38 In
the paper system, district officials entered data into a Microsoft Excel template that automatically
calculated the dependency ratio. The spreadsheet for each CWAC was then printed out and given to the
enumerators responsible for that CWAC. All households were included on the list, regardless of their
dependency ratios, and it was up to the enumerator to enumerate only those that had a dependency ratio
greater than three or no adult members fit for work. In the m-tech system, each enumerator entered the
data for his or her CWACs from the paper form onto the tablet as part of the m-tech training. The tablet
calculated the dependency ratio and generated an onscreen enumeration list of only the households that
qualified.
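The qualification logic described above can be sketched as a short check: a household qualifies if it has no fit-for-work adults, or if its dependency ratio exceeds three. The dependency-ratio formula used here (dependents per fit-for-work adult) is an assumption for illustration; the official calculation is defined in the program’s targeting guidelines, not in this report.

```python
def qualifies_for_enumeration(household_size, fit_for_work_adults):
    """Illustrative eligibility check. A household qualifies if it has no
    fit-for-work adults, or if its dependency ratio is greater than three.
    The ratio formula (dependents per fit-for-work adult) is an assumption."""
    if fit_for_work_adults == 0:
        return True  # no adults fit for work: automatic qualification
    ratio = (household_size - fit_for_work_adults) / fit_for_work_adults
    return ratio > 3

print(qualifies_for_enumeration(9, 2))  # 3.5 dependents per adult -> True
print(qualifies_for_enumeration(8, 2))  # ratio exactly 3 -> False
print(qualifies_for_enumeration(3, 0))  # no fit-for-work adults -> True
```

In the paper system this rule was applied by the enumerator reading the printed list; in the m-tech system the tablet applied it automatically, which is why entry errors in the inputs directly changed which households appeared on the enumeration list.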
Table 11 summarizes the frequency of errors in the entry of household demographic information on CWAC
Form 2 by district.39 In Mungwi and Shiwa N’gandu, where the m-tech system was used, at least one error
was observed in 11.2% (N=735) of household records. In Serenje, where the paper entry system was used,
at least one error was observed in 11.0% (N=151) of households. While all of these errors were mistakes
in the numbers of household members transcribed from the paper Form 2s into the tablet or the Excel
template, relatively few of these numerical mistakes – just 0.07% (N=1) of households in Serenje and
2.1% (N=135) in Mungwi and Shiwa N’gandu – affected whether or not a household qualified for
enumeration.
38 In Shiwa N’gandu, though both paper and m-tech enumeration systems were used, all CWAC Form 2s were entered in CommCare. Lists of households designated for enumeration with paper as part of the impact evaluation were then generated from the CommCare data.
39 The six-month residency question is left out of this table since it is present on the paper and CommCare versions of Form 2, but not on the Excel version of Form 2 used in Serenje.
[Process diagram: Enumerator Training → Identification (CWAC Form 2) → Enumeration (SCT Form 1) → Quality Check → Transfer to MIS → SCT Program Enrollment]
Table 11. Errors in the Entry of Household Demographic Information on CWAC Form 2

Number of        Mungwi (M-Tech)    Shiwa N'gandu       Serenje (Paper)    Total
entry errors     N=5,158            (M-Tech) N=1,426    N=1,384            N=7,968
                 N        %         N        %          N        %         N        %
6                0        -         0        -          3        0.22      3        0.04
5                0        -         0        -          1        0.07      1        0.01
4                8        0.16      0        -          0        -         8        0.10
3                53       1.03      5        0.35       12       0.87      70       0.88
2                126      2.44      18       1.26       16       1.16      160      2.01
1                456      8.84      69       4.84       119      8.60      644      8.08
0                4,515    87.53     1,334    93.55      1,233    89.09     7,082    88.88

Forms with errors that resulted in a change in inclusion status
Number of forms  106      2.06      29       2.03       1        0.07      136      1.71

Notes: Records missing the paper, Excel, or CommCare version of Form 2 (N=1,081) are excluded from this table.
The paper system for CWAC Form 2 entry was vulnerable to errors because responsibility for data entry
was concentrated on a few individuals with no system in place for back checking or error prevention.
While responsibility for data entry was shared among more individuals in the m-tech system, the m-tech
system was also vulnerable to entry errors. Of the Mungwi and Shiwa N’gandu enumerators, 45% (N=18)
had no prior experience with tablets or smartphones, which may have made them less comfortable
entering CWAC Form 2. One enumerator highlighted, “I was slow [entering CWAC Form 2s] because I
wasn’t used to the tablet.” Another enumerator noted, “There was a mix up in wards. Wrong wards
were entered in the CWAC forms 2s I was entering. This brought confusion in my data.” Additionally, the
order of the questions on the tablet differed from the order in which they appeared on the paper
form.40 While a review of the data indicates that this may have led to errors in only 27 household
records, it still introduced unnecessary confusion into the data entry process.
Identification Errors
Two types of identification errors were possible:
1. Inclusion error: A household is included in the enumeration exercise, though the CWAC Form 2
information about that household indicates they should not have been enumerated.41
40 On the paper form, the first household composition question asks for the number of children under 19 in the household, but on the m-tech version of the form, the first question asks for the number of household members who are fit-for-work adults. The m-tech form was designed this way to reduce the amount of data that needed to be entered, since households without any adults who are able to work automatically qualify, rendering entry of the remaining questions irrelevant. However, as a result, it was easy for enumerators to accidentally enter the number of children in the house in the tablet field for the number of fit-for-work individuals. When asked about the process of entering CWAC Form 2 on the tablets, one enumerator said, “The way it appeared on the tablet was different than the way it appeared on the form. That was confusing.”
41 These errors were only calculated for households that had a Form 2. Households that appeared only in enumeration data were not included in this analysis since we were not able to verify the Form 2 information for them.
2. Exclusion error: A household is excluded from the enumeration exercise, though the CWAC Form
2 information about that household indicates that they should have been enumerated.
Comparison of CWAC Form 2 data to records of households that were actually enumerated, either from
CommCare, MIS, or photographs of SCT Form 1 taken in the field, reveals that a significant number of
inclusion and exclusion errors occurred in all three districts, though exclusion errors were more common.
The presence of so many unexplained inclusions and exclusions underscores the need for a more robust
system of supervision and accountability.
The Form 2 analysis relied on four data sources, which can be divided into three categories. These are
listed in Table 12 and described in detail below.
Table 12. Data Sources for Inclusion and Exclusion Error Analysis

                              Mungwi         Shiwa N'gandu       Serenje
Original Form 2 Data          Paper Form 2   Paper Form 2        Paper Form 2
Recorded CWAC Form 2 Data     CommCare       CommCare            Excel Form 2
Enumeration Data              CommCare       CommCare and MIS    Photos of Form 1
1. Original CWAC Form 2 data: Photographs were taken of the original paper Form 2s that CWACs
completed and submitted to the district staff in all three districts.42
2. Recorded CWAC Form 2 data: These data consisted of the Form 2 data after they had been
entered into the paper or the m-tech systems. For the paper method (Serenje), these data were
collected from the Microsoft Excel databases that contained the entered data.43 For the m-tech
systems (Mungwi and Shiwa N’gandu), these data were downloaded from CommCare.
3. Enumeration data: These are the databases that were used to confirm whether or not a
household had been enumerated. For Shiwa N’gandu, this consisted of both the MIS for
households enumerated with paper and CommCare for households enumerated with m-tech.
Data from Mungwi households appeared exclusively in CommCare. Since households enumerated
in Serenje were not yet entered into the MIS at the time of this analysis, photographs of SCT Form
1 were used for this district.
A matching procedure was then employed to first identify matches between original CWAC Form 2 data
and recorded CWAC Form 2 data. For those households that had both original and recorded Form 2 data,
42 In three CWACs in Shiwa N’gandu, original CWAC Form 2s had been lost, so versions of Form 2 that had been entered in Excel before the enumeration were entered into CommCare instead. In these cases, photographs of the Excel versions were used instead of photographs of the original CWAC Form 2s.
43 In Serenje, the electronic versions of the Excel sheets were not available from district officers due to a technical problem, so the evaluation team made photocopies of the Excel printouts and entered these into EpiData. Excel printouts were not available for three CWACs in Serenje.
matches were identified with enumerated households. Figure 7 displays this procedure graphically, and it
is discussed in more detail in Annex C. Households that did not have matches were treated as follows:
Households used to calculate inclusion and exclusion errors
Appears in original and recorded CWAC Form 2 data but not in enumeration data: These
households were excluded from the enumeration; data on the original Form 2 were used to
determine if they were excluded correctly or incorrectly.
Households not used to calculate inclusion or exclusion errors44
Appears in original but not recorded Form 2 data: This could occur when a household was
never entered into Excel or CommCare, or if it was entered but the names were mis-entered.
Appears in recorded but not original Form 2 data: This could occur if a household’s
information was mis-entered.
Appears in recorded but not original Form 2 data and enumeration data: This could occur if
a household’s CWAC Form 2 information was mis-entered, and then the household was
enumerated.
Appears in enumeration data but not in either Form 2 database: This could occur in the
paper system if a household that was not on the list was enumerated, or if the household
name differed between CWAC Form 2 and SCT Form 1. Because of the structure of the m-tech
database, this is not possible with m-tech.
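Once households are matched across the datasets, the treatment rules above reduce to a simple classification. The sketch below illustrates those rules; it is not the evaluation team’s actual matching code, and the boolean inputs stand in for the results of the name-based matching across databases.

```python
def classify(in_original, in_recorded, in_enumeration, qualified):
    """Classify one household record following the rules listed above.
    `qualified` is whether the original Form 2 data indicate the household
    should have been enumerated. Illustrative sketch only."""
    if in_original and in_recorded:
        if in_enumeration and not qualified:
            return "inclusion error"
        if not in_enumeration and qualified:
            return "exclusion error"
        return "correct"
    # Households missing from either Form 2 database lack the information
    # needed to classify them conclusively, so they are left out of the counts.
    return "not used in analysis"

print(classify(True, True, True, False))   # inclusion error
print(classify(True, True, False, True))   # exclusion error
print(classify(True, False, True, True))   # not used in analysis
```

Counting these labels per district yields the tallies reported in Table 13.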
Figure 7. Matching CWAC Form 2 and Enumeration Data
Table 13 shows the number of inclusion and exclusion errors by district. Overall, exclusion errors were
more common than inclusion errors, occurring in 13.5% (N=1,222) of records, compared to 2.4% (N=221)
of records for inclusion errors. Exclusion errors were most common in Serenje, where they occurred in
44 These households were missing critical pieces of information that would have been needed to conclusively determine that they had been erroneously included or excluded.
[Figure 7: Venn diagram showing the overlap of households on the original CWAC Form 2, households on the recorded CWAC Form 2, and households in the enumeration data, with intersections for households on both Form 2 versions and for those also on SCT Form 1.]
21.1% (N=389) of cases, compared to 13.2% (N=726) and 6.4% (N=107) of cases in Mungwi and Shiwa
N’gandu, respectively. Inclusion errors were also more common in Serenje, occurring in 3.8% (N=71) of
cases compared to Mungwi and Shiwa N’gandu, where inclusion errors occurred in 2% (N=108) and 2.5%
(N=42) of cases, respectively.
Table 13. Inclusion and Exclusion Errors

                                      According to MIS / CommCare / SCT Form 1 data
                              Were enumerated                 Were not enumerated             Totals
According to                  Mungwi    Shiwa     Serenje     Mungwi    Shiwa     Serenje     Mungwi   Shiwa    Serenje
Form 2 data                             N'gandu                         N'gandu                        N'gandu
Should have been              3,554     791       567         726       107       389         4,280    898      956
enumerated                    (64.5%)   (47%)     (30.7%)     (13.2%)   (6.4%)    (21.1%)
Should not have been          108       42        71          655       452       357         763      494      428
enumerated                    (2%)      (2.5%)    (3.8%)      (11.9%)   (26.8%)   (19.3%)
Households not part of        -         51        18          470       241       446         470      292      464
the analysis                            (3%)      (1%)        (8.5%)    (14.3%)   (24.1%)
Totals                        3,662     884       656         1,851     800       1,192       5,513    1,684    1,848

Notes: Inclusion errors are households that should not have been enumerated according to Form 2 data but were enumerated; exclusion errors are households that should have been enumerated but were not. Percentages are shares of total household records in that district. Households excluded from the analysis could not be matched between the paper and recorded Form 2 (Excel or CommCare) databases (N=1,027), could only be found on paper Form 1 or in the MIS but not on any Form 2 (N=50), were found by enumerators but not enumerated, with an explanation noted on the form (N=116), or should have been enumerated with m-tech by an enumerator from whom no m-tech Form 1s were found in CommCare (N=33).
There are several possible explanations for inclusion and exclusion errors:
Data entry error: Errors in the entry of Form 2 data, resulting in an incorrect eligibility
determination.
Explained exclusion: An enumerator was unable to conduct the interview for a legitimate
reason – the household could not be located, a suitable respondent could not be identified, or
the respondent refused – but this was not recorded on the form according to enumeration
protocol.
Discretionary error: A district official decided to include or exclude a household at his or her
discretion.
Form lost: A household was enumerated but the paper SCT Form 1 was never entered into the
MIS.
Simple error: An enumerator, either intentionally or unintentionally, included or excluded a
household incorrectly.
Table 14 examines the share of erroneous inclusions and exclusions that were the result of data entry
error. Data entry error caused all 108 of the inclusion errors in Mungwi45, since the m-tech system only
allowed enumerators to complete Form 1 for households that had qualified. Data entry error was also the
cause of 64.3% (N=27) of the inclusion errors in Shiwa N’gandu.46 However, data entry error does not
explain any of the 71 inclusion errors in Serenje, where the paper system was used. Furthermore, data
entry error explains far fewer exclusion errors, accounting for less than 1% of exclusion errors in Serenje,
less than 15% of exclusion errors in Shiwa N’gandu, and less than 10% of exclusion errors in Mungwi.
Table 14. Explanations for Inclusion and Exclusion Errors

                                      Inclusion Errors (N=221)          Exclusion Errors (N=1,222)
                                      Mungwi    Shiwa     Serenje       Mungwi    Shiwa     Serenje
                                                N'gandu                           N'gandu
Resulting from data entry error       74        25        -             52        13        1
in demographics questions             (68.5%)   (59.5%)                 (7.2%)    (12.1%)   (0.3%)
Resulting from data entry error       34        2         -             -         1         -
in six-month residency question47     (31.5%)   (4.8%)                            (0.9%)
Not resulting from data entry         -         15        71            674       93        388
error                                           (35.7%)   (100%)        (92.8%)   (86.9%)   (99.7%)
Total                                 108       42        71            726       107       389
The evaluation team examined cases of exclusion errors in Shiwa N’gandu households enumerated with
paper to determine if any SCT Form 1s were lost. The team searched among the photos of SCT Form 1 for
each household that had qualified for enumeration but could not be found in the MIS. The team found
photos of SCT Form 1s for two of these households, indicating that those two forms were lost at some
point before or during the data entry process.
The presence of so many unexplained inclusions and exclusions in districts working with both the paper
and m-tech systems underscores the need for more robust checks on enumerator caseload completion in
both systems. Such checks are very important, as targeting errors can undermine the credibility of the
SCT program if community members perceive that vulnerable households are being excluded while less
vulnerable households are included.
45 31.5% (N=34) of these inclusion errors were the result of an error in the residency question. In these cases, the six-month residency box was left blank on the paper CWAC Form 2, indicating that a household had not been in residence in that home for more than six months, but enumerators ticked the box on the tablet, resulting in an erroneous inclusion. The other 68.5% of inclusion errors in Mungwi were the result of data entry errors in the household demographics questions on CWAC Form 2.
46 The other 35.7% (N=15) of inclusion errors in Shiwa N’gandu were all made by two enumerators working with paper enumeration: one was responsible for 14 of these errors and the other for 1. The enumerator responsible for 14 may have been using the original CWAC Form 2 instead of the enumeration list provided by the study team.
47 The six-month residency question was not included on the Excel version of Form 2 used in Serenje, so it is not possible to know how many households may have been erroneously included or excluded there based on an incorrect interpretation of this question.
Step 3: Enumeration (SCT Form 1)
Errors in the enumeration of SCT Form 1 are explored in the Impact Evaluation section. This section focuses on
challenges that enumerators faced during enumeration, perceptions of the m-tech and paper systems,
and translation variations and deviations from enumeration protocol documented in the audio recordings
of interviews.
Technical Challenges of M-Tech versus Paper
A design issue with the “Register a New Household Member” form was cited by enumerators in exit
interviews as particularly cumbersome and appeared to introduce errors into the m-tech data. SCT Form
1 collects information on household characteristics, as well as a household roster of each member of the
household. On the paper form, this household roster is embedded alongside the general household
characteristics form. In contrast, the tablet separated these two sections. In order to enter information
about each household member into CommCare, the application required that enumerators begin a new
form and link it to the correct household from the complete roster of all households in their caseload.
This made the enumeration process feel more cumbersome and may also have increased the length of
the interview, as enumerators waited for CommCare forms to reload with each household member they
entered. Additionally, this design likely led to errors, as it was easy for enumerators to accidentally assign
individuals to the incorrect household when selecting from their complete roster.
In addition to challenges with the application, over two thirds of the enumerators (N=27; 67.5%) reported
experiencing technical challenges that delayed their work. Of 36 enumerators who were asked specifically
about network challenges, only six (16.7%) reported that they never experienced network challenges,
while 12 (33.3%) reported experiencing them every day. Another eight (22.2%) reported experiencing
them at least once a week.
Additionally, many enumerators noted difficulties using the solar chargers and said that having extra
batteries would have been helpful. The solar chargers required charging during the day, which meant
that an enumerator would have to sit in one place for several hours while the battery charged. If the
tablet battery drained completely during the middle of an interview, enumerators would have to leave
and return to these households once the tablet was recharged. Additionally, because of a mismatch
between the voltage production of the solar charger and the requirements of the tablet, the chargers
would not charge the tablet if the battery had fully run out. Finally, some enumerators found that their
solar chargers were never able to charge from the sun.
Enumerator Perceptions of M-Tech versus Paper
Of the 40 enumerators interviewed who had used the tablet, a large majority reported that they found
the application somewhat or very easy to use (N=37, 92.5%) and that they would prefer using the tablet
over paper in the future (N = 33; 82.5%) (Figure 8). M-tech advantages that enumerators mentioned
included:
- The perception that fewer mistakes could be made with it
- The GPS function to prevent people from cheating
- The ease of carrying the tablet
- The modernity of the tablet and the feeling it provided that their work was advanced
The disadvantages that enumerators cited were largely technical challenges. These included:
- Network challenges leading to long syncing times
- Failure to receive data
- The cumbersome nature of adding new family members
- Challenges around editing the forms
- Long loading times as more un-synced forms were saved to the tablet
- Power and charging challenges
While the root causes of some of these challenges – including network and power – are outside of implementers’ control, there are steps that can be taken to mitigate them, discussed in more detail in Figure 9.
Figure 8. Enumerator Perceptions of Paper and M-tech
Despite the technical challenges, most enumerators cited m-tech as being easier to use, preferred by beneficiaries, and less error-prone than paper. Additionally, 70% (N = 28) of enumerators thought m-tech surveys were faster than paper surveys. The enumerators who thought m-tech was slower than paper often related this to long loading times, technical challenges, and the separation of the household roster and household characteristics forms, which required them to start a new form for each household member (for more details on this design issue, see Figure 3).
[Bar chart: percentage of enumerators choosing each method on five measures – method preferred, easier, faster, beneficiaries prefer, and fewer mistakes. M-tech was chosen by 69-85% of enumerators on every measure and paper by 13-31%, with a small share (3%) answering “don’t know.”]
A minority of enumerators preferred paper surveys over electronic surveys. Reported advantages of using
paper over m-tech included the ease of editing and the pre-existing familiarity that enumerators had with
paper surveys. The main disadvantages mentioned were the weight of carrying surveys, the risk of papers
getting wet or destroyed, and the risk of making more errors.
Figure 9. Commonly Cited Challenges of M-tech Application and Potential Solutions

Household members section of the application. As described in Figure 3, the household members section of the m-tech form was separated from the household characteristics section, making the application much more cumbersome than the paper form for enumerators. This challenge can be addressed with reprogramming.

Poor network connection. Poor network coverage in many areas where potential beneficiaries stay made it difficult for many enumerators to sync their devices. While issues with the network are outside of implementers’ control, one way to mitigate this challenge would be to provide internet hotspots in district offices, so that there is at least one place in each district where enumerators can sync.

Slow loading times. When tablets were unable to sync and many forms were stored within the application, the program became slow to load. Internet hotspots in district offices can help mitigate this. Additionally, the developer can explore programming solutions that would allow more forms to be stored on the device without slowing it down.

Power and charging. Access to power was a challenge for some enumerators, and the solar chargers provided were not a good solution: they needed to sit in the sun for long periods, which was not conducive to enumerators’ work flow, and some of them were faulty. In the future, external batteries, which can be charged at night from the generators and car batteries often found in rural settings, could help address this problem.

Community Perceptions of M-Tech versus Paper

All interviews with potential beneficiaries were conducted in Shiwa N’gandu, where the study team was accompanying enumerators to the field. Of these 48 interviews, 30 were from households that were enumerated with the tablet, while 18 were from households that were enumerated with paper.

When asked how they felt about the method of enumeration that was used, all beneficiaries responded positively, primarily highlighting that the act of recording their responses – whether on paper or on the tablet – was what mattered most. One beneficiary whose interview had been done with the tablet said, “I felt good knowing that everything I said was being entered in the tablet. I believe that the information will not be lost or get distorted.” Another beneficiary whose interview had been done on paper noted, “I could see her read the questions and write the answers.”
Interviews with potential beneficiaries also covered a range of other topics related to the SCT Enumeration
process, including their knowledge of the SCT program, and expectations about the outcomes of the
enumeration. These findings are summarized in Annex D.
Enumerators confirmed that potential beneficiaries were generally positive about m-tech enumeration.
Some enumerators noted that beneficiaries thought the tablet would do a better job of delivering their
information to the government. As one enumerator said, “[Potential beneficiaries] were willing and
receptive because they were hopeful of a quick response.” They reported that some felt that features like
the GPS and the ability to take photographs were positive.
However, enumerators also highlighted some discontent among potential beneficiaries. Four
enumerators noted that some beneficiaries saw the tablet as Satanism. Others noted that some
beneficiaries were afraid of the tablet and were wary of having their photograph taken.
In general, CWAC members saw the tablet as a positive development. They noted that GPS capabilities
and photo features were valuable additions and helpful in identifying the right people. They also noted
that it made the interviews faster than when they were done with paper. One member from Mungwi
noted that beneficiaries were under the impression that they were certain to qualify for the program once
the information was in the tablet. “They believe that the computer cannot eliminate them from the list,
thus blaming it all on the CWAC members [if they do not qualify].” Additional findings from the focus
group discussions are discussed in Annex E.
Receipt of Data Bundles and Data Usage
Analysis of SMS phone logs indicated that data were sent to the tablets on October 31 and on November
6. Of the 14 enumerator tablets with information on when data bundles were sent, 92.9% (n=13) had an SMS record of receiving data bundles on at least one of these dates.
As the summary statistics in Table 15 and histograms in Figure 10 and Figure 11 indicate, enumerators’
usage of data for both the CommCare application and other applications varied widely.
Table 15. Data Used by Enumerator Tablets

                         MB of Data                         % of Total
                         Mean (SD)      Min      Max        Mean (SD)
CommCare                 46.8 (46.7)    2.0      158.0      30.4% (28.6%)
All Other Applications   122.7 (99.0)   7.6      373.8      69.6% (28.6%)
Total                    169.5 (95.4)   50.6     423.3
Note: n = 19 enumerators.
Among the 19 enumerator tablets, the average amount of data used during the enumeration was 169.5
megabytes (MB) (SD = 95.4), with total usage ranging from 50.6 MB to 423.3 MB. Enumerators used an
average of 46.8 MB (SD = 46.7) of data on the CommCare application, with some enumerators using more
than 100 MB of data on the application. On average, the 75 MB that was provided at the start of
enumeration was sufficient, but 26.3% of enumerators used more than 75 MB, and 42.1% used 25 MB or
less.48 Since data needs can vary widely between enumerators depending on caseloads, it may be more
effective to develop guidelines for data bundle allocation based on caseloads and distribute and manage
data bundles at a local level, where officials can easily note variations in caseloads.
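A caseload-based guideline of the kind suggested above could be sketched as follows. This is an illustrative rule only: the function name, per-household megabyte cost, fixed overhead, and safety margin are hypothetical placeholders that would need to be calibrated against actual pilot usage data.

```python
# Illustrative sketch: allocate data bundles per enumerator based on caseload
# rather than a flat amount. All parameter values below are hypothetical
# placeholders, not figures from the pilot.

def recommend_bundle_mb(caseload_households, mb_per_household=0.75,
                        overhead_mb=20.0, margin=1.5):
    """Return a suggested data allocation in MB for one enumerator.

    caseload_households: number of households assigned to the enumerator
    mb_per_household:    assumed sync cost per household (placeholder)
    overhead_mb:         assumed fixed cost for app updates and OS traffic
    margin:              multiplier to buffer against failed re-syncs
    """
    return round((caseload_households * mb_per_household + overhead_mb) * margin, 1)

# A district officer could apply the rule to each enumerator's caseload:
caseloads = {"enum_01": 40, "enum_02": 120}
allocations = {e: recommend_bundle_mb(n) for e, n in caseloads.items()}
```

With these placeholder parameters, a 40-household caseload yields a 75 MB bundle while a 120-household caseload yields 165 MB, illustrating how allocations would scale with caseload instead of being fixed.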
Figure 10. Data Used by CommCare Application
Figure 11. Data Used by All Other Applications
Supervisor Application Usage
Usage of the features in the Supervisor Application was inconsistent across Shiwa N’gandu, Mungwi, and
Mwense. In-depth interviews and the evaluation team’s observations indicated that technical and logistical
challenges were a significant barrier to the use of these features. One supervisor noted that she could not
use the Supervisor Application because tablets were not syncing. Additionally, supervisors reported that
48 Information on data bundle usage was only available from Shiwa N’gandu. While the variation in data bundle needs in Shiwa N’gandu indicates that more tailored data allocations are necessary, the absolute amount of data used in Shiwa N’gandu during the pilot should not be used to infer future data needs, since caseloads in Shiwa N’gandu were small and only half of the households in Shiwa N’gandu were enumerated with m-tech during the pilot.
[Figure 10 is a histogram of the number of enumerators (n = 19) by MB of data used by the CommCare application, in 25 MB bins up to 175 MB; Figure 11 is the corresponding histogram for data used by all other applications.]
the enumerator work tracking feature of the application would be more useful if it listed the number of
households completed, rather than the number of forms submitted.49
As Figure 12 indicates, the Supervisor Application forms were used most commonly in Mungwi, where the
Performance Feedback form was completed for nearly every enumerator in the district, and the Visit
Observation form for nearly half of enumerators in the district. Mungwi supervisors were also the most
frequent users of the form for reporting a technical issue. In Shiwa N’gandu district, the Visit Observation
and Performance Feedback forms were never used, and the form for reporting technical issues was used
twice.50
Figure 12. Use of the Supervisor Application
[Bar chart: number of Visit Observation, Report a Technical Issue, and Performance Feedback forms completed by each supervisor. Note: In Mungwi, two supervisors completed supervisor forms, while only one supervisor completed them in Mwense and in Shiwa N’gandu respectively.]

In-depth interviews with the supervisors in Shiwa N’gandu District revealed that technical challenges were the primary barrier to their use of these forms. The two supervisors in Shiwa N’gandu were unable to successfully sync their Supervisor Applications with the server throughout the enumeration, which meant that they were unable to see how many forms enumerators had submitted, the key feature of the Performance Feedback form.51 Additionally, long distances, lack of transportation, the condensed timeline of enumeration, and the fact that both supervisors in Shiwa N’gandu were also enumerating prevented them from completing any enumerator observation visits during the enumeration.

Translation Variation and Deviations from Enumeration Protocol

Analysis of the recordings of interviews with potential beneficiaries in Shiwa N’gandu revealed that enumerators used a wide range of phrases to translate survey questions into the local language, and that many of the phrases were inaccurate, unclear, or leading.

The transcripts were analyzed using a three-step process:
1. The team completed an analysis form for each recording transcript. This form noted the Bemba translation of each question, the overall approach to the household roster questions, and any deviations from enumeration protocol detected. The deviations from enumeration protocol that were tracked and analyzed are listed and defined in Table 16.
2. The various translations of survey questions were reviewed, and a codebook with a discrete code for each translation of each question was developed.
3. Translations were grouped under the same code if their phrasing and word use were similar. The team then assigned each code to one of three categories: correct, incorrect, and reference (Table 17).

49 Each new household that was registered, as well as each new household member, counted as a new form, even though they might all belong to one household.
50 Application usage information for Mwense district is included for reference, but since the evaluation did not carry out any data collection in Mwense, we cannot provide further interpretation of these figures.
51 The evaluation team was not able to determine what the specific issue was with the Supervisor Application features in Shiwa N’gandu, but did confirm that the feature was not functioning. It appears that this was a technical problem with the application or with these particular supervisor tablets, not an issue with the cellular network: there was 3G coverage in the area around the district office, and mobile data features on other devices were functioning throughout the enumeration period.
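The coding scheme described in steps 2 and 3 amounts to a two-level lookup: each observed phrase maps to a code, and each code maps to a category. The two Bemba phrases below are drawn from the examples in Table 18, but the dictionaries themselves are a toy illustration, not the study's actual codebook:

```python
# Toy sketch of the translation codebook: phrase -> code -> category.
# The phrases and code letters below mirror examples from Table 18; the
# real codebook covered every observed translation of every question.

CODEBOOK = {
    "banyina naba wishi epo baba": "A",   # Are the mother and father around/alive?
    "abafyashi benu epo baba": "B",       # Are your parents around/alive?
    "abafyashi baliyonaika bonse": "C",   # Are both parents destroyed?
}
CODE_CATEGORY = {"A": "correct", "B": "correct", "C": "incorrect"}

def categorize(translation):
    """Map one transcribed phrase to its category; unmatched phrases are 'uncoded'."""
    code = CODEBOOK.get(translation.strip().lower())
    return CODE_CATEGORY.get(code, "uncoded")
```

Grouping similar phrasings under one code (step 3) would in practice require analyst judgment; the exact-match lookup here is only the final categorization step.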
Table 16. Deviations from Enumeration Protocol
Interview Not Conducted at Respondent’s Home1 – It can be inferred from the recording that the interview was not conducted at the respondent’s home. Example: n/a.
Respondents Gathered Together in One Location – It can be inferred from the recording that respondents were gathered together in one place. Example: A CWAC member was heard telling the respondents to all come prepared with their NRC numbers as they meet the enumerator.
CWAC Member Present for the Interview – It can be inferred that a CWAC member was present during the interview. Example: n/a.
CWAC Member Influencing Responses – A CWAC member was participating in the interview and influencing the respondent’s answers to questions. Example: A CWAC member was heard suggesting birthdates for household members.
Question Ambiguously Phrased – The phrasing in Bemba makes the meaning of the question ambiguous. Example (question 3.12): “In the last one month did you ever have food?”
Question Skipped – The enumerator did not ask the question. Example: n/a.
Inadequate/No Probing – The enumerator failed to probe or failed to probe adequately. Example (question 3.06): The enumerator asks if the household has a toilet, but does not probe on what type of toilet.
Question Asked in a Leading Way – The enumerator phrased the question in a leading way. Example (question 3.03): “Your wall is made of bricks?”
Unusual Response – The respondent gave an unusual response, implying that they did not fully understand the question. Example (question 3.12): The respondent answers “We only eat cassava meal.”
Judgmental Comment or Phrasing – The enumerator’s phrasing of the question or reaction to the respondent’s answer was judgmental. Example: The respondent says they do not have a bed and the enumerator responds, “Then, where do you sleep?”
Wrongly Asked – The enumerator asked a question unrelated to the actual question in the survey. Example (question 3.13): The enumerator asks, “If you don’t have food what do you do?”
Incorrect Translation – The enumerator’s phrasing of the question was incorrect. Example (question 3.13): The enumerator asks, “In the past four weeks, did you ever stay hungry because you didn’t have money?”
Incomplete Translation – The translation of the question was incomplete. Example (question 3.14): The enumerator asks, “Have you ever or anyone in your household slept hungry?”

Notes: 1MCDMCH protocols require that interviews be conducted at the respondent’s home so that the enumerator can verify information about the household’s living conditions. One of the most common reasons an interview was not conducted at a respondent’s home was that respondents were gathered together in a central location; however, this is not the only possible alternate location.
Table 17. Criteria for Translation Categories

Correct: Translations that had the exact same meaning as the English used in the question on SCT Form 1.
Incorrect: Translations that had a different meaning than the English used in the question on SCT Form 1. This includes translations that use grammatically incorrect Bemba and translations that are incomplete.
Reference: Translations that enumerators used when their phrasing of a question referred to information that the respondent had disclosed earlier in the interview, or to something the enumerator had observed.

Examples of translations that fall into each of these three categories are outlined in Table 18.

Table 18. Examples of Translation Coding and Categorization System

2.05 What is the orphanhood status of {name}?
Correct – Code A: “Banyina naba wishi epo baba” (Are the mother and father around/alive?); “Nga aba fyashi bakwe epo baba?” (What about the parents, are they around/alive?)
Correct – Code B: “Abafyashi benu epo baba?” (Are your parents around/alive?)
Incorrect – Code C: “Abafyashi baliyonaika bonse” (Are both parents destroyed?); “Uyu umwana ba nyina balifwa nagu ba wishi” (Are the mother or father of this child dead?)
Incorrect – Code D: “Nimwe mwamufyalilila te mwana wanshiwa” (Are you the one that gave birth to her? She’s not an orphan?)

2.07 What is the highest grade of school attained?
Correct – Code A: “Apelele mu grade shani isukulu” (In which grade did you end?); “Munshiku shamasambililo mwapelele mwisa?” (When you were schooling, what grade did you end in?)
Correct – Code B: “Asambilila mu grade shani?” (What grade is he/she doing?)
Incorrect – Code C: “Apwilile lilali?” (When did you complete school?); “Mwalekele mu shani isukulu?” (When did you stop schooling?)
Incorrect – Code D: “Mwalileko kusukulu” (Have you ever been to school?)
Reference – Code E: “Mwalipwishishe yonse form 2?” (Did you complete all of form 2?); “Nga [ishina]?” (What about [name]?)
3.13 In the past 4 weeks, was there ever no food of any kind to eat in your house because of lack of resources to get food?
Correct – Code A: “Mumilungu ine iyapita kunuma uku, kuti mwaibukisha ukuti tamwakwetepo ichakulya nangu chimo pang’anda kumulandu nokubulilwa? Chachitike imiku inga?” (In the last past month do you remember a time when you had no food in your household because of lack of resources? How many times did that happen?)
Correct – Code B: “Nga mumilungu ya pita inee, kwalipo ilyo mwaikelefye ukwabula ukulya umulandu wakutila at imisangile yafilyo yayansha?” (In the last one month that has just passed, is there a time you stayed hungry without eating because it was hard to find means of finding food?)
Incorrect – Code C: “Mwalitala amwiekalapo ne nsala umulandu wakuti ifilyo fyachepa” (Have you ever stayed hungry because there was not enough food?); “Mumilungu 4 iyapita, mwalikelepo ukwabula ukulya ichakulya?” (In the last 4 weeks did you ever stay without eating any food?)
Incorrect – Code D: “Inga mu mweshi uyu wapiti mwalitala amwikalapo iyakutila at bamo balya bambi bakana ukulya pa mulundu waku cepelwa kwa ifyakulya? Nangu imiku yinga muli uyu umweshi wapita?” (In the last one month that has just passed, was there ever a time when others ate and others could not eat because there was not enough food? How many times did that happen in the last one month?)
Table 19 shows the average number of unique translations used by enumerators for selected survey
questions, as well as the total number of unique translations observed for each question. A complete
version of this table can be found in Annex F. Overall, enumerators used a wide range of question
translations, with 12 of 47 household characteristics questions (25.5%) translated with more than 10
different phrases. Internally, enumerators were more consistent, using an average of 1.5 to 4.25
different translations for each question over the course of the enumeration. Comparisons between paper and m-tech do not reveal any systematic differences in enumerators’ consistency between the two methods.
Table 19. Number of Unique Translations Used by Each Enumerator

Question                                        Paper Mean (SD)   M-tech Mean (SD)   P-value   Distinct Translations
3.01 Material of Roof                           3.50 (1.29)       2.64 (1.50)        0.31      15
3.02 Material of Floor                          3.25 (1.50)       3.00 (1.65)        0.79      13
3.03 Material of Walls                          2.25 (0.50)       3.22 (1.79)        0.16      11
3.04 Energy for Light                           2.50 (0.58)       2.23 (1.17)        0.55      8
3.05 Energy for Cooking                         2.75 (0.96)       3.31 (1.44)        0.40      9
3.06 Main Toilet Facility                       2.50 (1.73)       2.46 (1.33)        0.97      11
3.08a Land Owned                                3.75 (2.36)       3.69 (1.44)        0.97      20
3.08b Land Cultivated                           3.75 (2.36)       3.00 (1.08)        0.58      25
3.09 Length of Time Lived in Current Location   3.25 (1.26)       2.69 (0.85)        0.45      12
3.12 Meals Per Day                              2.75 (0.50)       2.00 (1.15)        0.09      7
3.13 No Food, Lack of Resources                 4.25 (1.50)       2.77 (1.36)        0.14      13
3.14 No Food, Sleep Hungry                      3.25 (0.50)       3.77 (1.54)        0.31      19
3.15 No Food, All Day                           2.33 (1.53)       2.92 (1.00)        0.58      12
Notes: Paper n = 99; m-tech n = 140. P-values are based on a two-sample comparison of means using a Student’s t-test with unequal variance.
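The unequal-variance comparison used for Table 19 is Welch's two-sample t-test. A minimal sketch of the statistic follows; the sample values are hypothetical stand-ins for per-enumerator counts of distinct translations, not data from the study:

```python
# Minimal sketch of Welch's (unequal-variance) two-sample t-statistic, as used
# for the Table 19 comparisons. Sample values below are hypothetical.
import math

def welch_t(sample_a, sample_b):
    """Return (t, df): Welch's t-statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical counts of distinct translations per enumerator:
paper = [4, 3, 5, 5]
mtech = [3, 2, 3, 4, 2, 3]
t, df = welch_t(paper, mtech)
# The two-sided p-value is then obtained from the t distribution with df
# degrees of freedom (e.g., scipy.stats.t.sf(abs(t), df) * 2).
```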
Table 20 shows the share of paper and m-tech surveys in which an inaccurate, unclear, or leading
translation was used for a selected subset of questions. Shares of incorrect translations for the complete
survey can be found in Annex F. Enumerators struggled to consistently translate many of the questions
on the survey, using an inaccurate or problematic translation more than 50% of the time on six questions,
and more than 20% of the time on an additional eight. Inaccurate, leading, and unclear translations are a
persistent problem when enumerators are working with both the paper and m-tech systems. Comparison of the number of incorrect translations with each method shows that translation mistakes occur with similar frequency under the two systems.52
52 One potential source of difficulty with consistent translation is the fact that some questions are not phrased as questions in English. This may have made it more difficult for enumerators to translate the questions directly into the local language.
Table 20. Share of Recorded Interviews with Inaccurate, Unclear, or Leading Translation for Select Questions

Question                          Paper N   Paper %   M-tech N   M-tech %   P-value
2.04 Date of Birth                7         7.1       10         7.1        0.98
2.05 Orphanhood Status            34        34.3      36         25.7       0.15
2.06 Attended School              13        13.1      22         15.7       0.58
2.07 Grade Attained               6         6.1       13         9.3        0.36
2.08 Fit for Work                 3         3.0       3          2.1        0.67
2.09 Disability                   31        31.3      27         19.3       0.03
2.10 Chronic Illness              45        45.5      23         16.4       <0.01
3.01 Material of Roof             43        55.1      39         42.4       0.10
3.02 Material of Floor            70        84.3      82         78.8       0.34
3.03 Material of Walls            14        19.4      21         23.3       0.55
3.04 Energy for Light             5         5.1       7          5.2        0.97
3.05 Energy for Cooking           40        42.1      57         44.2       0.76
3.06 Main Toilet Facility         74        85.1      94         79.0       0.27
3.08a Land Owned                  50        54.3      72         55.4       0.88
3.08b Land Cultivated             26        34.7      40         37.0       0.74
3.09 Length of Time Lived         2         2.1       0          0.0        0.09
3.12 Meals Per Day                9         9.2       18         13.4       0.32
3.13 No Food, Lack of Resources   35        42.2      47         42.0       0.98
3.14 No Food, Sleep Hungry        45        52.9      59         52.2       0.92
3.15 No Food, All Day             33        55.0      57         62.0       0.39
Note: P-values were calculated using Pearson’s chi-squared test.
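The Pearson chi-squared test behind these p-values can be sketched for a 2x2 table. The example below uses the Chronic Illness counts reported in Table 20 (45 of 99 paper interviews and 23 of 140 m-tech interviews with a problematic translation); the function itself is an illustrative implementation, not the study's analysis code:

```python
# Sketch of the Pearson chi-squared statistic (1 degree of freedom, no
# continuity correction) for a 2x2 contingency table, as used for Table 20.

def chi2_2x2(a_yes, a_no, b_yes, b_no):
    """Pearson chi-squared statistic for counts [[a_yes, a_no], [b_yes, b_no]]."""
    n = a_yes + a_no + b_yes + b_no
    num = n * (a_yes * b_no - a_no * b_yes) ** 2
    den = (a_yes + a_no) * (b_yes + b_no) * (a_yes + b_yes) * (a_no + b_no)
    return num / den

# 2.10 Chronic Illness: 45/99 paper vs. 23/140 m-tech interviews flagged.
chronic_illness = chi2_2x2(45, 99 - 45, 23, 140 - 23)
# A statistic of roughly 24 with 1 df corresponds to p < 0.01, consistent
# with the value reported in Table 20.
```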
Finally, analysis of deviations from enumeration protocols indicates that both paper and m-tech
enumeration are vulnerable to these types of mistakes. Table 21 summarizes the comparison of the
frequencies of various protocol violations. There is a statistically significant difference (p-value = 0.02)
between the number of questions skipped between paper and m-tech, consistent with the findings of the
impact evaluation. Additionally, there is a statistically significant difference (p-value = 0.03) between the
average number of leading translations enumerators used per interview with paper enumeration, 4.41
(SD = 2.5), and the number they used with m-tech enumeration, 3.72 (SD = 2.41). There were no other
significant differences in rates of protocol violations between the two methods.
Table 21. Deviations from Enumeration Protocol
Paper n=99 M-tech n=140
Share of Interviews with Practice
N % N % P-value
Interview Not Conducted at Respondent’s Home 34 34.34 41 29.29 0.41
Respondents Gathered Together in One Location 32 32.32 37 26.43 0.32
CWAC Member Present for the Interview 24 24.24 44 31.43 0.23
CWAC Member Influencing Respondents 6 6.06 16 11.43 0.16
Number of Times Practice Occurs Per Interview
Mean SD Mean SD P-value
Question Ambiguously Phrased 0.16 0.65 0.04 0.22 0.07
Question Skipped 4.48 3.09 5.56 4.18 0.02
Question Not Audible 0.02 0.14 0.01 0.12 0.73
Inadequate/No Probing 0.53 0.79 0.58 0.87 0.62
Question Asked in a Leading Way 4.41 2.5 3.72 2.41 0.03
Unusual Response 0.02 0.14 0.01 0.12 0.73
Judgmental Comment or Phrasing 0.12 0.44 0.10 0.44 0.71
Wrongly Asked 0.01 0.10 0.02 0.15 0.48
Incorrect Translation 0.51 0.79 0.67 0.92 0.14
Incomplete Translation 0.56 0.85 0.75 1.03 0.11
Notes: P-values in the top section of the table are based on Pearson’s chi-squared test; p-values in the bottom section are based on a t-test comparison of means.
Step 4: Quality Check
Under the paper system, district review and sign-off were labor intensive. According to enumeration
protocol, district staff and a privately contracted QA team were meant to review submitted paper forms
for errors, make corrections to codes, and request enumerators to return to households to fill in any
missing data. While the evaluation team did observe the QA team reviewing forms in the district office,
and making some field visits, they did not observe any cases of enumerators returning to the field to
correct information.
To measure the intensity of the corrections made after interviews, we compared photo images of the
forms immediately after enumeration to photo images of the forms immediately before they were sent
to Lusaka. This analysis shows that, on average, a total of 8.1 changes (SD = 11.7) were made per form. Of these changes, an average of 6.3 (SD = 11.0) were the addition or removal of values from the form and an average of 1.8 (SD = 2.1) were changes to existing values. The majority of the changes (mean = 4.1,
SD = 3.4) were made on page 1, which is consistent with the evaluation team’s observations that many
enumerators did not know geographic codes and had to consult supervisors or return to the district office
to fill these in.
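The change counting behind Table 22 can be sketched as a comparison of two snapshots of a form's fields, distinguishing values that were filled in or removed from values that were altered. The field names below are hypothetical:

```python
# Sketch of the change-counting logic behind Table 22: compare a form's field
# values photographed after enumeration with those photographed before dispatch
# to Lusaka. Field names are hypothetical; None represents a blank box.

def count_changes(before, after):
    """before/after: dicts mapping field name -> value.
    Returns (added_or_removed, changed)."""
    added_or_removed = changed = 0
    for field in set(before) | set(after):
        old, new = before.get(field), after.get(field)
        if old == new:
            continue
        if old is None or new is None:
            added_or_removed += 1   # value filled in or removed during review
        else:
            changed += 1            # existing value altered during review
    return added_or_removed, changed
```

Applied per page and averaged across forms, these two counts correspond to the "Filling in / removing values" and "Changing existing values" columns of Table 22.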
Table 22. Changes to Forms between Enumeration and End of District Review, By Page
Page of Form Filling in / removing values Changing existing values Total Changes
Mean (SD) Mean (SD) Mean (SD)
Page 1 variables 3.37 (3.52) 0.91 (0.93) 4.08 (3.42)
Page 2 and 3 variables 1.75 (7.60) 0.58 (1.52) 2.33 (8.09)
Page 4 variables 1.39 (4.66) 0.33 (0.78) 1.73 (4.80)
Total 6.31 (11.01) 1.82 (2.08) 8.13 (11.65)
Notes: N = 201 forms
The m-tech system was designed to allow enumerators to return to previously completed forms and make
modifications to their work. Reviewing the CommCare data shows that fewer corrections were made with m-tech than with paper. On average, only 0.78 questions (SD = 6.06) on the household characteristics portion of the survey were modified per household over the course of enumeration. This figure includes
corrections enumerators made as well as instances when an enumerator exited a form before finishing
and returned to it at a later date. In addition to these modifications enumerators made themselves, an
application was also designed that allowed QA teams to review forms as enumerators uploaded them to
the system and make or suggest corrections. However, in the pilot, the applications designed for this
function were hampered by network challenges.
Step 5: Transfer to MIS
According to Zambia’s CSO, data entry for all of the districts using paper enumeration took an average of
two months.53 Paper SCT Form 1s were transported to Lusaka by district staff or district drivers. CSO then enters the paper SCT Form 1s into a structured query language (SQL) database with built-in checks and skip patterns that help mitigate errors. Rather than typing values, data entry staff use drop-down menus to select the answer as it is recorded on paper. Even so, there were still a significant number of discrepancies between the data on the forms as they arrived at CSO and what was entered into the MIS, suggesting that despite these built-in quality checks, data entry still introduces error into the paper process.
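The kind of built-in check described here can be illustrated with database constraints that reject values outside the allowed code list, much as a drop-down menu does. This is a toy sketch using SQLite, not the CSO system; the table and column names are hypothetical:

```python
# Illustrative sketch (not the CSO system): database CHECK constraints reject
# invalid entries at data-entry time, analogous to drop-down menus.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE form1_entry (
        household_id   TEXT NOT NULL,
        roof_material  TEXT CHECK (roof_material IN ('grass', 'iron_sheets', 'tiles', 'other')),
        meals_per_day  INTEGER CHECK (meals_per_day BETWEEN 0 AND 10)
    )
""")
conn.execute("INSERT INTO form1_entry VALUES ('HH001', 'grass', 2)")  # accepted

try:
    conn.execute("INSERT INTO form1_entry VALUES ('HH002', 'plastic', 2)")
except sqlite3.IntegrityError:
    pass  # value outside the allowed code list is rejected by the database
```

Such constraints catch values outside the code list, but, as the observed discrepancies show, they cannot catch a valid-but-wrong code selected by a data entry clerk.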
The m-tech system, when functioning as intended, should eliminate the need for separate data entry
altogether due to automatic syncing with the end database. As a result, m-tech has the potential to
reduce the time required to produce the eligibility list by over two months (Figure 13).
Figure 13. Expected Timeline of Paper and M-tech Enumeration by Design
However, in the pilot, extreme delays were experienced in loading the data into the MIS: 3.5 months after enumeration, the data still had not been analyzed in the MIS.54 Since the MIS is not cloud-based, a process of downloading the data from the CommCare database and uploading them into the MIS was required. Communication challenges and poorly defined roles and responsibilities among the contractors responsible for the MIS, the contractors responsible for the CommCare application, government, and CPs resulted in extensive back and forth over this process and introduced further delays (Figure 14 & Figure 15).
53 The time to complete data entry varies based on the workload at CSO. However, according to managers at CSO, data enterers enter an average of 40-50 forms a day, and an exceptionally fast enterer can enter up to 90 forms a day.
54 As of the date of the drafting of this report (March 15, 2015).
[Figure 13: by design, identification (2-4 weeks) and enumeration (2-4 weeks) are common to both systems; under paper, QA and transfer to the MIS then take roughly two months before an eligibility list is produced, while under m-tech the eligibility list can follow enumeration within 3-5 days.]
Figure 14. Actual Timeline of Paper and M-tech Enumeration
Additionally, in order to be uploaded to the MIS, the data needed to be formatted in a specific way, with variables named and ordered according to specific conventions. The data also needed to have consistent attributes, such as each household having a household head and no missing rows. Since the data were not automatically exported from CommCare in this format, they needed to be manually cleaned, reorganized, and manipulated in order to be imported into the MIS.
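The consistency requirements described here could be checked automatically before import. This is a sketch of such a validation pass under assumed requirements; the column names, the `relationship == "head"` convention, and the function itself are hypothetical, not the MIS's actual specification:

```python
# Sketch of pre-import validation for a CommCare export: required columns in a
# fixed order, no missing values, and exactly one head per household.
# Column names and the "head" convention are hypothetical.

REQUIRED_COLUMNS = ["household_id", "member_id", "name", "relationship"]

def validate_for_mis(rows):
    """rows: list of dicts, one per household member. Returns a list of problems."""
    problems = []
    heads = {}
    for i, row in enumerate(rows):
        if list(row) != REQUIRED_COLUMNS:          # column names and order
            problems.append(f"row {i}: columns missing or out of order")
            continue
        if any(v in (None, "") for v in row.values()):
            problems.append(f"row {i}: missing value")
        hh = row["household_id"]
        heads[hh] = heads.get(hh, 0) + (row["relationship"] == "head")
    problems += [f"household {hh}: {n} heads" for hh, n in heads.items() if n != 1]
    return problems

rows = [
    {"household_id": "H1", "member_id": "1", "name": "Mary", "relationship": "head"},
    {"household_id": "H1", "member_id": "2", "name": "Ben", "relationship": "child"},
]
problems = validate_for_mis(rows)  # empty list: this toy export would pass
```

Running checks like these at export time would surface formatting problems before the manual upload step, rather than during the back and forth described above.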
[Figure 14 (timeline chart): actual durations of QA, identification, enumeration, and transfer to the MIS for Serenje1 and Mungwi (labeled values: 3 months; 4 weeks, 3 weeks, 4 weeks, 3-5 days; 2 weeks, 5 weeks); the MIS upload was marked "still waiting" at the time of drafting, with the eligibility list still pending.]
1 Shiwa N’gandu’s data entry timeline was used, since the delayed start of enumeration in Serenje meant that data were not ready for entry when this report was being drafted.
Figure 15. Enumeration and Data Entry Dates
Serenje
Enumeration: first week of December 2014 – first week of January 2015; last week of January 2015 – first week of February 2015.1
QA: Intermittent.2
MIS: Data not yet entered at the time of drafting of this report.3
Mungwi
Enumeration: third week of October 2014 – third week of November 2014.
QA: Embedded in enumeration.
MIS: first week of December 2014 – third week of March 2015.
Shiwa N’gandu
Enumeration: first week of November 2014 – last week of November 2014.
QA: last week of November 2014.
MIS: first week of December 2014 – last week of December 2014.
1 Enumeration in Serenje was interrupted by the Presidential election. It had still not been completed in one CWAC in Serenje when this report was being drafted.
2 Because enumeration was interrupted by elections, QA activities were intermittent there.
3 Data entry had not yet started for Serenje at the time this report was being drafted, because the CWACs enumerated in December 2014 were part of the control group for the impact evaluation of the SCT program, not Round II of enumeration for 2014.
Costs
The cost model considered the investment and operational cost of scale-up with both the paper and the
m-tech systems, assuming that 30 districts are added to the program in the first year, and the remaining
23 are added in the second year. After the scale-up, the model assumes that one-third of districts are
enumerated again each year. The model indicates that cost differences should not be a determining factor
when deciding between m-tech and paper systems.
The fixed cost of investing in scale-up with m-tech was estimated at ZMW 1,855,262 (Table 23); there are no investment costs for paper scale-up. While this is a significant investment, it is small relative to the overall cost of the 2016-2017 scale-up, which we estimate will come to approximately ZMW 20 million (see Annex G) with either the paper or the m-tech system.
Table 23. Fixed Costs of M-Tech Scale-Up

Item                                    Unit Cost (ZMW)   Units   Total Cost (ZMW)
Technical Equipment
  Tablets                                         1,417     622            881,546
  Covers                                            114     622             70,611
  Screen Protectors                                  66     622             40,827
  External battery packs                            100     275             27,500
  Internet modem                                    420     103             43,260
  Locked Cabinet                                  2,500      10             25,000
  Power Strips                                       40     309             12,360
  Air freight & shipping                             87     622             54,157
Technical Services
  Software Provider                             600,000       1            600,000
  Training for HQ and provincial staff          100,000       1            100,000
Total                                                                    1,855,262
Additionally, the cost model indicates that the running costs of m-tech are slightly lower than the running
costs of paper. The per-district difference in variable costs between m-tech and paper is ZMW 26,832
(Table 24). Major variable cost elements, such as DSA for enumerators, supervisors, and drivers; fuel
expenses; and compensation for CWAC members are identical between m-tech and paper processes.
Table 24. Differences in Variable Costs of 2016-2017 SCT Scale-Up per District

Paper (Item: Cost in ZMW)
  Provincial staff costs: 0
  District Staff (acting as supervisors) costs: 18,900
  Printing SCT Form 1: 19,777
  Enumerator training: 58,266
  Quality Assurance: 7,000
  Pencils: 20
  Clear-bags: 220
  Forms transportation to LSK: 4,000
  Data Entry: 16,481
  Total: 124,664

M-tech (Item: Cost in ZMW)
  Provincial staff time: 4,000
  District Staff (acting as supervisors) costs: 7,875
  Enumerator training: 77,688
  Internet Bundle (District Office): 1,200
  Internet Bundle (enumerators): 6,000
  Tech maintenance: 1,069
  Total: 97,832

Notes: No provincial staff time is used with paper enumeration.
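The cost comparison implies a simple payback calculation, sketched below. The per-district saving and fixed investment come from Tables 23 and 24, and the district schedule (30 districts in year 1, 23 in year 2, one-third of all 53 re-enumerated annually thereafter) comes from the cost-model description; the assumption that re-enumeration begins only in year 3 is ours, not the report's.

```python
FIXED_COST_MTECH = 1_855_262            # ZMW, fixed m-tech investment (Table 23)
SAVING_PER_DISTRICT = 124_664 - 97_832  # ZMW per district (Table 24): 26,832

def cumulative_savings(years=4):
    """Cumulative variable-cost savings of m-tech over paper, year by year.

    Assumes 30 districts enumerated in year 1, 23 more in year 2, and
    one-third of all 53 districts re-enumerated each year from year 3
    onward (this last timing assumption is ours).
    """
    schedule = [30, 23] + [53 // 3] * max(0, years - 2)
    total, totals_by_year = 0, []
    for districts in schedule[:years]:
        total += districts * SAVING_PER_DISTRICT
        totals_by_year.append(total)
    return totals_by_year

# Under these assumptions the variable-cost savings alone recoup the fixed
# investment during year 3; the payback point is sensitive to when
# re-enumeration begins and to which cost elements are counted as savings.
savings = cumulative_savings()
```

The calculation illustrates why the report treats cost as a secondary consideration: the per-district difference is real but small, so the scale-up decision turns on accuracy and timeliness rather than on the payback year.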
Limitations of the Process Evaluation
As with the impact evaluation, one limitation of the process evaluation was tied to the study setting. Serenje district was chosen for the process evaluation because the timing of enumeration there allowed the evaluation team to interview enumerators immediately after the conclusion of enumeration. However, delays in enumeration in Serenje proved extensive and were tied to the fact that a complete round of targeting did not take place there. This made it difficult to directly compare the Serenje process to the Mungwi process.
Data collection activities were designed to minimize the risk of the evaluation affecting the enumeration
process in Mungwi and Serenje. However, in Shiwa N’gandu, where the impact evaluation was taking
place, it is likely that the evaluation team’s presence and evaluation activities had greater influence on
the enumeration process. The requirement that enumerators finish all households in one method before
moving onto the second method may have encouraged enumerators to be more thorough. This could
have led to fewer exclusion errors in Shiwa N’gandu than might have otherwise been experienced there.
Additionally, lists of households to be enumerated with paper provided by the evaluation team only
included households that qualified for enumeration, rather than listing all households and their
dependency ratios, as was done in other districts. This may have made it easier for enumerators in Shiwa
N’gandu to enumerate the correct households, resulting in fewer inclusion errors there than might have
been experienced under status quo conditions.
A final limitation was the scarcity of cost data available for constructing projections of the costs of scale-up with paper and m-tech. The evaluation team was not able to obtain all of the costing information requested from various stakeholders and had to rely on estimates and assumptions for certain inputs.
Conclusions of Process Evaluation
The process evaluation findings indicated that while use of m-tech eliminates some challenges of the paper enumeration system, it can also introduce new challenges which should be addressed before scale-up.
Both systems were vulnerable to errors and delays. Some of the vulnerabilities of the m-tech system are
unique to the pilot and can be easily addressed with one-time fixes. These include the errors and
inconsistencies caused by the separation of the household characteristics and household members
sections of the forms and the long delay in uploading data to the MIS. Other vulnerabilities of the m-tech
system are likely to remain challenges in a steady state, though they can be mitigated with technical and
logistical solutions. These include exclusion errors at the CWAC Form 2 stage, incorrect translation of
survey questions, errors caused by enumerators typing incorrect values into the tablet, and poor network
and power supply, which cause tablets to run slowly and delay enumeration.
The main challenges that enumerators experienced that were unique to the m-tech system were related
to design flaws in the application, lack of network, and ineffective charging mechanisms. Without the
ability to consistently sync the tablets to a server, data remained on the tablets, resulting in long loading
times. Additionally, the GPS function proved particularly challenging, since the form could not be
considered “complete” without a GPS waypoint. While some of these challenges are related to structural
issues with Zambian infrastructure and are outside of implementers’ control, steps can be taken to
mitigate their effects. These are outlined in Figure 9.
The lack of integration of the CommCare database with the MIS system, coupled with poor definition of
roles and responsibilities and communication challenges between stakeholders has resulted in significant
delays and has undermined a system that was viewed as a solution to get beneficiaries enrolled in the SCT
program more quickly. Aligning the two databases so that information can be seamlessly uploaded from
the m-tech database to the MIS will eliminate delays and the risk of error that comes with manual
manipulation of data.
Finally, the start-up costs of m-tech enumeration are small relative to the overall costs of the program
and could be recouped within two years. It is unlikely that cost will be a limiting factor in the decision to
scale up.
STAKEHOLDER ANALYSIS
Analysis Questions
The primary questions for this activity were:
What role does each stakeholder play in the SCT program, in general, and in the enumeration
process, specifically?
How might each stakeholder’s role and responsibilities evolve if m-tech enumeration is
implemented at scale?
What perceptions do various stakeholders have of m-tech enumeration?
Methods
The stakeholder analysis took place in all three evaluation districts as well as with relevant stakeholders
in Lusaka.
Guided interviews were conducted with district officials, provincial officials, national MCDMCH officials,
representatives of CPs, and representatives of private contractors involved in the SCT.55
55 All interviews with government officials were conducted in private. Some interviews with CPs were conducted in private, while others were conducted in groups of two or three team members from the same organization.
KEY FINDINGS
1. Stakeholders throughout the government and cooperating partners are enthusiastic about the SCT
program and transitioning to the m-tech system.
2. Government stakeholders may underestimate the effort and changes required to effectively
transition to an m-tech system.
3. Role delineation between the government and cooperating partners can become unclear, causing
delays and weakening MCDMCH ownership.
4. Officials across government and cooperating partners have differing priorities for the future uses
of m-tech.
All Social Welfare staff at the district and provincial offices where the evaluation team was working, and at HQ, were invited to participate. The evaluation team was able to meet with nearly all members of the department at each level.56
Representatives from MCDMCH, WFP, and UNICEF identified key informants at each of the CPs and private
contractors involved in the SCT enumeration process. Representatives from American Institute for
Research (AIR), CSO, United Kingdom Department for International Development (DFID), Dimagi, the
Embassy of Finland, the International Labor Organization (ILO), Irish Aid, Platform for Social Protection,
UNICEF, and WFP were invited to participate in interviews. The evaluation team was able to meet with
representatives from all of the targeted institutions with the exception of the representative from the
Platform for Social Protection, who was unavailable due to scheduling conflicts.
Stakeholder context
Two primary stakeholder groups engage with the SCT program (Table 25):
1. Government: The national Social Welfare Department of MCDMCH implements the program.
Day-to-day management is provided by provincial and district Social Welfare Officers.
2. Cooperating Partners: A diverse group of CPs provide financial, technical and policy support for
the Social Welfare Department. CPs also enlist outside contractors to directly assist MCDMCH in
technical aspects of program execution.
Table 25. SCT Program Stakeholders
Government Cooperating Partners
MCDMCH Social Welfare
Department at HQ
Provincial Social Welfare
Offices
District Social Welfare Offices
Financial: DFID, Embassy of
Finland, Irish Aid
Technical: WFP, UNICEF
Policy: ILO, Platform for
Social Protection
MCDMCH
The SCT program is administered by MCDMCH officials in the Social Welfare Department at the national,
provincial and district levels. The organizational structure of the department is displayed in Figure 16, and individual roles are discussed below.
56 The evaluation team was not able to conduct an interview with the SSWO for Central Province, as she left her post in the middle of the study period.
Figure 16. Organizational Structure of SCT Unit
MCDMCH HQ
Director of the Social Welfare Department. Provides policy and strategic guidance to staff
members on planning, implementation, and monitoring.
Chief Social Welfare Officer for Non-Statutory Services. Provides monitoring and oversight on
administrative and programmatic issues.
Principal – Social Cash Transfer. Manages the SCT Unit, with key responsibilities including
management of day-to-day operations, design of policies, provision of oversight of disbursement
of funds to districts, and organization of capacity building at HQ, provincial, and district level.
Senior Social Welfare Officer – Management Information System. Oversees all aspects of data
collection and management in the SCT program.
Training CWACs and enumerators.
Monitoring data entry.
Cleaning of the data and running of the PMT to determine eligibility.
Updating of MIS based on finalized community validation lists.
Generating of reports and statistics from the MIS to assist with monitoring and evaluation.
Senior Social Welfare Officer – Training. Organizes trainings at national, district, and constituency
level.
Senior Social Welfare Officer – Disability. Ensures that people with disabilities have information
on the SCT program, and facilitates engagement with civil society organizations for people with
disabilities.
Senior Social Welfare Officer - Management & Implementation and Social Welfare Officer –
Management & Implementation. Oversees finances, budget formulation, and procurement for
the SCT unit.
Senior Social Welfare Officer - Policy Monitoring and Evaluation and Social Welfare Officer -
Policy Monitoring and Evaluation. Assists with designing policy, monitoring the program to
ensure that implementation is in accordance with policy, and overseeing contractors hired to
provide evaluation services.
Provincial Offices
Provincial Social Welfare Officer (PSWO) and the supporting SSWOs. Provides oversight, quality
control, and policy guidance for activities in the districts.
Most provincial offices are staffed with two SSWOs, one of whom has responsibility for
non-statutory services like the SCT.
The province will need someone full-time to help with the management of the application
that will be used for m-tech.
District Offices
DSWO, Assistant Social Welfare Officer (ASWO) and Assistant Program Officer (APO).57 Execute
the SCT program in their districts. The district staff work collaboratively to execute the key tasks
of the program.
Mobilizing and training CWACs.
Leading enumeration.
Conducting community validation of households that have qualified for the program.
Organizing accounts for payments to beneficiaries.
57 DSWOs and Assistant Social Welfare Officers are members of the government, while APOs are hired on annual contracts. In our observations, all three work collaboratively to administer the SCT program.
Cooperating Partners
The SCT program benefits from an engaged group of CPs who provide three types of assistance:
1. Financial assistance
2. Technical assistance
3. Policy guidance
The ecosystem of partners and contractors who support the Social Welfare department is summarized
below and illustrated in Figure 17. More details on the current role and likely future roles of each partner
are discussed in Annex H.
Financial Assistance
DFID, the Embassy of Finland, and Irish Aid provide financial support to the SCT program. All three
send some funds directly to MCDMCH, but most funds go to UNICEF earmarked for technical
assistance for the SCT program.
DFID, as the lead donor, also provides direct technical assistance and policy guidance.
Irish Aid supports policy and civil society by funding the Platform for Social Protection.
Technical Assistance
UNICEF coordinates and provides most of the technical support for the SCT program, and also provides policy guidance. The Social Protection staff at UNICEF engage directly with MCDMCH on a regular basis and have procured the support of four contractors:
Palm Associates conducts training and QA activities.
CSO enters data from paper enumeration.
Oxford Policy Management provides policy support for the design of the targeting
mechanism and training on the targeting mechanism and enumeration.
The American Institute for Research and Futures Group designed the MIS and provides
ongoing assistance with operations.
WFP assists with the procurement process for an electronic payment system and was also
engaged by UNICEF to provide other assistance including execution of the m-tech pilot.
WFP contracted Dimagi, a software developer, to design the m-tech application.
Policy Assistance
The Platform for Social Protection58 is a Zambian civil society organization that advocates for social
protection issues in Zambia.
The International Labor Organization provides demand driven support to government on a variety
of social protection issues in Zambia.
58 The evaluation team was not able to meet with representatives of the Platform for Social Protection.
Figure 17. Map of SCT Cooperating Partners
Key Findings
Finding 1: Positive stakeholder feedback on SCT and transition to m-tech
Stakeholder interviews with both groups revealed widespread enthusiasm for the SCT program and scale-up of m-tech enumeration.
MCDMCH
In-depth interviews with MCDMCH officials revealed widespread support for scale-up with an m-tech
enumeration system. Of the nine officers interviewed at HQ, all expressed their belief that the m-tech
system should be used in the scale-up. The advantages noted for m-tech centered on efficiency,
information management, and improvements in accuracy. One officer noted that adopting m-tech would
“improve our efficiency at every level.” Another officer commented “It’s a good idea to scale with m-tech,
because it provides for easy transmission of information.”
The evaluation team spoke with 14 district and provincial staff and also found the vast majority of them
to be supportive of the idea of a scale-up with m-tech. One district officer noted that the m-tech pilot
“[had] added value because people are saying social welfare is advanced.” Another district official noted
advantages of the m-tech enumeration in speed and accuracy. The m-tech was “very fast” and was “able
to edit there and then, [since] you can’t proceed minus a question.” Some district officials who had
participated in the pilot were warier of potential challenges with m-tech, such as lack of airtime, poor
network, and issues with the programming of the application, but most concluded that if these could be
addressed, they would support a scale-up.
Cooperating Partners
All CPs interviewed noted that the decision to scale up with m-tech rested with the government, and that the partners would follow the government’s decision. At the same time, all of the partners expressed their interest in the m-tech pilot and their eagerness to support the government in scaling up with m-tech, if it makes that decision.
Finding 2: Government may underestimate the effort and changes required
to transition to m-tech
Many government officials underestimate the changes that will need to take place in order to establish
and operate the new system.
Of the 23 MCDMCH officials and staff interviewed, the majority seemed to underestimate the impact that
scale-up with m-tech would have on their roles and activities. When asked, many cited efficiency gains as
the main effect on their role. “[It] won’t change the way we are doing business, apart from making us
more efficient,” noted one officer. Others felt that there would not be a change in their activities because
“much of the work is changing in terms of mode, in terms of how [you] collect this information.”
Officials in certain positions did feel that a scale-up with m-tech would impact them. One official noted
that scaling up with m-tech would create “a lot more work” because of the nature of that official’s
position. Others foresaw positive effects, noting that the m-tech pilot had already allowed them to
develop new skills.
While some officials may not see their roles affected by the transition, others will find that their work
portfolio changes significantly. Social Welfare Department officials can be divided into three categories
based on how a switch to m-tech enumeration would affect their roles:
1. Significant role change: HQ officials responsible for data management, training, finances, and
procurement would be responsible for integrating an m-tech enumeration system into their
current work portfolio. Provincial and district staff would be responsible for implementing the
new m-tech system.
2. Addition of one or two new tasks: These officials include HQ staff who oversee program
operations and monitoring and evaluation. They would need to acquire access to new data in real
time and would need to acquire the skills to independently access and analyze these data.
3. Minimally affected: These officials are all within the Social Welfare Department leadership. In the
event of an m-tech scale-up, they would likely benefit from better access to data, but would not
see their activities affected.
Figure 18 revisits the organizational structure of the SCT program, highlighting these three groups. Specific
changes to each individual’s roles, activities, and skills are detailed in Table 26.
Figure 18. Effect of Transition on Roles and Responsibilities within the SCT Unit
Table 26. Changes to Roles and Responsibilities in Social Welfare Department

A. SIGNIFICANTLY AFFECTED

Sr. Social Welfare Officer for the Management Information System
  Current role & responsibilities: Monitor data entry. Clean data and run the PMT. Update the MIS based on community validation lists. Generate statistics for M&E. Only MCDMCH official conversant in SQL and able to make modifications to the MIS.
  New activities or responsibilities with m-tech: Serve as focal point for MCDMCH on m-tech enumeration. Liaise with the software provider on application design. Ensure that the data collected with mobile devices feed smoothly into the MIS. Assist with trainings.
  New skills or staff required: A deputy to provide assistance.

Sr. Social Welfare Officer & Social Welfare Officer of Management & Implementation
  Current role & responsibilities: Responsible for finances, budget formulation, and procurement for the SCT unit.
  New activities or responsibilities with m-tech: Assist or liaise with CPs to procure hardware. Develop and implement a system to manage equipment. Develop and manage procedures in case of damage, loss, or theft of tablets.
  New skills or staff required: None.

Sr. Social Welfare Officer of Training
  Current role & responsibilities: Organize trainings at national, district, and constituency level.
  New activities or responsibilities with m-tech: Organize trainings for HQ staff, provincial officials, district staff, and enumerators.
  New skills or staff required: Proficiency in all aspects of the new m-tech system.

Provincial Social Welfare Officer and Sr. Social Welfare Officer
  Current role & responsibilities: Provide oversight, quality control, and policy guidance for activities in the districts.
  New activities or responsibilities with m-tech: Monitor data collection activities and data quality. Manage the supply of tablets, housed at the province and shared among districts. Participate in and lead trainings.
  New skills or staff required: One SSWO per province. Proficiency in the application, basic troubleshooting, and navigating the database.

District Social Welfare Officer, Asst. Social Welfare Officer, Asst. Program Officer
  Current role & responsibilities: Execute all SCT program components including CWAC mobilization and training, enumeration, community validation, and payments.
  New activities or responsibilities with m-tech: Supervise enumeration with m-tech and assist with troubleshooting. Focus on logistical management of enumeration. Participate in and lead trainings.
  New skills or staff required: Proficiency in the application, basic troubleshooting, and navigating the database.
Table 26 (continued). Changes to Roles and Responsibilities in Social Welfare Department

B. ONE OR TWO NEW TASKS

Principal – Social Cash Transfer
  Current role & responsibilities: Manage day-to-day operations. Design policies. Provide oversight of disbursement of funds to districts. Organize capacity building.
  New activities or responsibilities with m-tech: Experience increased organization in staff capacity building activities. Oversee a new system.
  New skills or staff required: Full proficiency in mobile data collection and the database.

Sr. Social Welfare Officer & Social Welfare Officer, Policy Monitoring and Evaluation
  Current role & responsibilities: Assist with policy design and monitoring the program. Oversee contractors hired to provide evaluation services.
  New activities or responsibilities with m-tech: Handle expanded monitoring activities with new data.
  New skills or staff required: Proficiency in the mobile database.

Senior Social Welfare Officer – Policy for Disability
  Current role & responsibilities: Ensure persons with disabilities are accessing the SCT program.
  New activities or responsibilities with m-tech: Handle expanded monitoring activities with new data.
  New skills or staff required: Proficiency in the mobile database.

C. MINIMALLY AFFECTED

Director of the Social Welfare Department
  Current role & responsibilities: Provide policy and strategic guidance on planning, implementation, and monitoring.
  New activities or responsibilities with m-tech: None.
  New skills or staff required: None.

Chief Social Welfare Officer for Non-Statutory Services
  Current role & responsibilities: Provide monitoring and oversight on administrative and program issues.
  New activities or responsibilities with m-tech: None.
  New skills or staff required: None.
Finding 3: Role delineation between the government and cooperating
partners can sometimes become unclear, causing delays and weakening
MCDMCH ownership
The high number of partners and contractors increases the risk that key responsibilities and tasks fall
through the cracks. Additionally, continued vigilance is needed to ensure that MCDMCH retains full
ownership of all systems and processes.
Clear Definition of Roles and Responsibilities
Many stakeholders noted the high level of coordination between CPs and government on the SCT, citing
weekly meetings that took place throughout the 2014 SCT scale-up. As one CP official noted, “The nature
of the relationship we have with the Ministry and the other partners is a very open one.” Partners
reported that this coordination facilitated each partner’s contribution in areas where they have a
comparative advantage, and that the diversity among the cooperating partners allows for a lot of flexibility
in responding to MCDMCH’s needs.
However, with so many partners and contractors involved in assisting MCDMCH in different aspects of
the SCT, accountability can sometimes be a challenge. As one stakeholder put it, “We should have clear
roles and responsibilities for who should do what.” The failure to clearly define roles and responsibilities
can sometimes cause challenges for accountability and allow critical components to fall through the gaps between partner and MCDMCH activities. One example of this occurred during the m-tech pilot, when
responsibility for post-enumeration data processing was not clearly defined. The result was an
unnecessary delay in inputting m-tech data into the MIS and running the PMT.
Continued emphasis on MCDMCH Ownership
All CPs expressed their desire to focus their support on areas identified as government priorities and to
follow the government’s lead in all decisions. They also emphasized the importance of government
ownership of systems. However, with such deep engagement from cooperating partners and contractors,
it can be difficult to ensure that the government is fully owning and operating the systems that are put in
place. The operation of the MIS is one example of this problem. Nearly four years after the MIS was first
commissioned, it is still primarily managed by an in-house consultant and one full-time MCDMCH officer.
An m-tech data collection system would be vulnerable to a similar outcome, so MCDMCH and CPs should
be vigilant. Contractors should engage officials throughout MCDMCH at an early stage to ensure full
MCDMCH ownership and operation of the system. Even if software developers are procured by CPs, they
should work directly with MCDMCH to design applications and should be accountable to MCDMCH for
final products.
Finding 4: Officials across levels of government and cooperating partners
have differing priorities for the future uses of m-tech
As mentioned earlier, all of the stakeholders interviewed expressed their support of scaling up with an m-
tech enumeration system. However, when asked about their priorities for other uses of mobile
technology in the SCT program, stakeholders voiced a wide range of perspectives.
Cooperating partner priorities for m-tech:
Development of a mobile solution for the SCT grievance mechanism59
Development of a mobile solution for continuous registration of beneficiaries in the SCT
Linkages of social protection to other programs
MCDMCH HQ priorities for m-tech:
Easier monitoring of activities in the field
Communication with districts
More timely and complete data for use in monitoring and evaluation activities
Development of an electronic payment system
Decentralization of MIS
Provincial priorities for m-tech:
Access to data about beneficiaries and potential beneficiaries in the districts they manage
Opportunity to provide oversight for data collection activities
Decentralization of the MIS
While mobile technology is well suited to address all of these issues, developing and rolling out a program that tackles them all at once would be impractical and potentially counter-productive.
Attempting to roll out multiple functions within one application at the same time could overwhelm new
users, and it could be challenging for a contracted software developer to determine which demands to
focus on when building software systems. MCDMCH and the CPs should align their priorities on these
questions before procuring additional software development services.
59 The grievance mechanism is the channel for potential beneficiaries, current beneficiaries, or community members in general to voice concern about inclusion and exclusion errors in the program, problems with transfer payments, and other issues.
Discussion & Conclusions from Stakeholder Analysis
The findings of our stakeholder analysis indicate that, while there is widespread support for the SCT
program’s transition to an m-tech enumeration system, many in government underestimate the
complexity of the transition and the degree to which it will affect roles and responsibilities within the
department. The Social Welfare Department staff can rely on the support of a coordinated group of CPs
to assist in the transition, but continued vigilance is needed to ensure that roles and responsibilities are
clearly defined, government is in full ownership of all programs, and government and CPs align priorities.
DISCUSSION AND RECOMMENDATIONS
In the pilot, the m-tech system did not materially outperform the incumbent paper-based system. M-tech enumeration successfully reduced the number of omitted questions. However, no statistically significant difference in recording error rates was detected between paper and m-tech (where recording errors are defined as inconsistencies between information captured on audio recordings of interviews and information noted on paper forms or in the m-tech data); more time was required to generate final eligibility lists for households enumerated with m-tech than for households enumerated with paper; and no differences were found in the rates of inclusion and exclusion errors at the CWAC Form 2 stage. While m-tech enumeration is slightly less expensive, the cost savings are small relative to overall enumeration expenses. Though the m-tech system failed to outperform the paper system in the pilot, we believe that, when functioning well, an m-tech system would be faster and more accurate than the existing paper system.
Three critical challenges hindered the performance of the m-tech system in the pilot: 1) design flaws in
the application, 2) logistical challenges with network and power, and 3) a poor interface between the m-
tech database and MCDMCH’s existing MIS. If these critical challenges can be addressed, we believe that
the m-tech system would be able to demonstrate its superiority to the incumbent paper system.
In addition to the critical challenges, which need to be addressed in order for m-tech to realize its
potential, further actions are needed to prepare for a scale-up with m-tech and to lay the foundations of
a robust data collection system.
Fortunately, these challenges are surmountable, and we believe that MCDMCH and CPs have the capacity
to address the three critical challenges identified by the evaluation. Experience with Rounds 1 and 2 of
paper enumeration in 2014 has shown that MCDMCH and CPs are able to learn, adjust, and make
significant improvements between the first and second executions of an activity. As a result, Round 2 of
paper enumeration had fewer errors, ran more smoothly, and took much less time than Round 1.
Three recommendations, listed below, outline the steps for addressing these challenges. MCDMCH and
CPs also have the capacity to take the additional actions necessary to scale the m-tech system. We outline
below five additional recommendations, which will facilitate the implementation of a robust, scaled m-
tech system. All eight of these action areas are discussed in detail in the Implementation Guide.
Critical Actions
1. Application design: Addressing the issues with the design and programming of the application
will facilitate more consistent and accurate data collection. The separation of the household
roster and household characteristics sections of the form in the m-tech application was the root
cause of many of the errors and inconsistencies in the m-tech data. Reprogramming the
application so that the two sections are embedded in the same form, and addressing several other small
programming issues, can eliminate many of these errors.
2. Network and electricity safeguards: Providing district offices and enumerators with equipment
to connect reliably to the internet on fully charged devices will optimize the benefits of m-tech.
While the root causes of the network and power challenges faced in this program are systemic
problems with infrastructure in Zambia, implementers can take steps to mitigate these, including
providing hot spots for district offices and using external batteries, which are more reliable and
better suited to enumerators’ workflow than solar chargers.
3. Data management: Aligning the m-tech system with the Management Information System (MIS)
and enabling remote access to the data will streamline data analysis and use. The poor interface
between the CommCare database and the MIS was a cause of major delays in the pilot, and
exposed the data to unnecessary error risk, as they had to be manipulated by hand before being
uploaded. The m-tech system and MIS should be aligned so that data can seamlessly move from
one system to another without the need for manual manipulation or uploading.
Steps to Scale
4. Roles and responsibilities: Clear roles and responsibilities will facilitate manageable workloads,
accountability, and effective implementation. The SCT program is fortunate to have the support
of a coordinated group of CPs and subcontractors, but with so many organizations involved,
continued vigilance is needed to ensure that roles and responsibilities are clearly divided.
5. Training: Training modules with m-tech components targeted at each level of the SCT personnel
will be necessary for smooth implementation. Under the m-tech system, training in enumeration
and use of the technology can be combined to improve efficiency.
6. Equipment management: Equipment procurement and maintenance protocols will enhance
accountability and reduce risk of leakage. Procedures for tracking equipment and dealing with
adverse events will be necessary in a system where so much equipment is being handled by staff
across multiple provinces.
7. Scheduling and duration of enumeration: Carefully planned enumeration schedules will
encourage enumerators to complete their caseloads quickly and accurately. Standardizing
scheduling procedures across provinces and districts will ensure that the system is fair and
consistent.
8. Supervision and quality assurance: Using m-tech to strengthen supervision and QA mechanisms
for managing caseloads, ensuring correct interview practices, and reviewing data quality will
improve accuracy and fairness of the SCT program. Effective monitoring and QA systems can also
help reinforce the distinct roles and responsibilities at each level of the system.
Overall, the promise of m-tech remains, but it will not be realized automatically. Significant adjustments to the system are required to successfully implement and scale the m-tech system.
REFERENCES
American Institutes for Research. 2013. Zambia’s Child Grant Program: 24-Month Impact Report. Washington, DC.
Caeyers, Bet, Neil Chalmers, and Joachim De Weerdt. 2010. A Comparison of CAPI and PAPI Through a Randomized Field Experiment. SSRN Scholarly Paper ID 1756224. Rochester, NY: Social Science Research Network. http://papers.ssrn.com/abstract=1756224.
Dale, Oystein, and Kaare Birger Hagen. 2007. “Despite Technical Problems Personal Digital Assistants Outperform Pen and Paper When Collecting Patient Diary Data.” Journal of Clinical Epidemiology 60 (1): 8–17. doi:10.1016/j.jclinepi.2006.04.005.
Government of Liberia, Ministry of Gender and Development, European Union, and UNICEF. 2012. Transformative Transfers: Evidence from Liberia’s Social Cash Transfer Programme. http://www.unicef.org/liberia/Transformative_Transfers_LiberiaCashTransferProgramme.pdf.
Miller, Candace, Maxton Tsoka, and Kathryn Reichert. 2008. Impact Evaluation Report: External Evaluation of the Mchinji Social Cash Transfer Pilot. Center for International Health and Development, Boston University. http://www.unicef.org/malawi/MLW_resources_impactsocialcashtrf.pdf.
Njuguna, Henry N, Deborah L Caselton, Geoffrey O Arunga, Gideon O Emukule, Dennis K Kinyanjui, Rosalia M Kalani, Carl Kinkade, Phillip M Muthoka, Mark A Katz, and Joshua A Mott. 2014. “A Comparison of Smartphones to Paper-Based Questionnaires for Routine Influenza Sentinel Surveillance, Kenya, 2011–2012.” BMC Medical Informatics and Decision Making 14 (1).
Samson, Michael. 2009. “Social Cash Transfers and Pro-Poor Growth.” In Promoting Pro-Poor Growth: Social Protection. Organisation for Economic Co-operation and Development.
Seebregts, Christopher J., Merrick Zwarenstein, Catherine Mathews, Lara Fairall, Alan J. Flisher, Clive Seebregts, Wanjiru Mukoma, and Knut-Inge Klepp. 2009. “Handheld Computers for Survey and Trial Data Collection in Resource-Poor Settings: Development and Evaluation of PDACT, a PalmTM Pilot Interviewing System.” International Journal of Medical Informatics 78 (11): 721–31.
Thriemer, Kamala, Benedikt Ley, Shaali M. Ame, Mahesh K. Puri, Ramadhan Hashim, Na Yoon Chang, Luluwa A. Salim, et al. 2012. “Replacing Paper Data Collection Forms with Electronic Data Entry in the Field: Findings from a Study of Community-Acquired Bloodstream Infections in Pemba, Zanzibar.” BMC Research Notes 5: 113.
World Bank. 2014. “World Development Indicators - Zambia.” June 28. http://data.worldbank.org/country/zambia.
ANNEX A. POWER CALCULATIONS
The impact evaluation was designed to detect an average difference of four errors per survey between
paper and m-tech surveys.
To estimate the sample size needed to achieve the desired minimum detectable effect, the evaluation
team simulated a cluster randomized controlled trial design in which the enumerators are the clusters
and each household is an observation with one of two treatments – paper or m-tech enumeration. While
this analysis strategy is not the ideal approach given the study’s matched-pair design, it is a
useful heuristic for estimating power, as it accounts for intra-cluster correlation (ICC) – the correlation
among error rates within a single enumerator’s surveys – and allows for adjustment of both the number of
enumerators and the number of households per enumerator.
The a priori parameters for the simulation were:
Mean number of errors per electronic survey: 1 error
Standard deviation of an enumerator’s error rate: 4 errors (for paper and m-tech)
Minimum detectable effect size: a difference of 4 errors
90% power; 5% significance; 0.8 ICC
With these a priori assumptions, the evaluation team projected that a sample size of 20 enumerators,
each with 20 paper and 20 m-tech households would be needed to achieve a minimum detectable effect
of no more than 4 (Table A-1). (The actual minimum detectable effect projected with these parameters
was 3.7). The evaluation team decided to double the sample of households to 40 per enumerator per
method, since the parameters used for the projection were based on the evaluation team’s estimates,
and including additional observations did not increase the burden or risks for study participants.
Table A-1. A Priori Power Projection Calculation
Parameter Assumptions
Minimum detectable effect 3.7
Enumerators 20
Households (per enumerator per method) 20
Standard deviation paper & m-tech (estimated) 4
Note: 90% power, 5% significance, 0.8 ICC.
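The projection in Table A-1 can be cross-checked with a standard closed-form approximation for two-arm cluster designs, in which the between-enumerator variance is the ICC share of the total variance and each enumerator’s 20 households per method are treated as one cluster per arm. This is a minimal sketch under those assumptions, not the evaluation team’s actual simulation code, but with the stated parameters it recovers a minimum detectable effect of about 3.7:

```python
from math import sqrt
from statistics import NormalDist

def mde_cluster(sigma, icc, enumerators, households, power=0.90, alpha=0.05):
    """Closed-form minimum detectable effect for a two-arm cluster design
    with `enumerators` clusters per arm of `households` observations each."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # variance of a cluster (enumerator) mean: between-cluster + within-cluster/m
    cluster_var = icc * sigma ** 2 + (1 - icc) * sigma ** 2 / households
    return z * sqrt(2 * cluster_var / enumerators)

mde = mde_cluster(sigma=4, icc=0.8, enumerators=20, households=20)
# -> approximately 3.7 errors, in line with Table A-1
```

With ICC this high, the between-enumerator term dominates, which is why doubling households per enumerator (as the team did) changes the minimum detectable effect only slightly.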
Sample Size Challenges
The sample size attained for this project was considerably smaller than originally planned. Nine of the
fourteen enumerators with recordings for both methods had fewer than 10 recordings per enumeration
method; because of this, the evaluation team revised the sampling plan and chose as many recordings as
possible for enumerators with fewer than 10 recordings and a sample of 10 recordings for enumerators
with more than 10 recordings.
There were two primary reasons for the discrepancy between the sample size called for in the study design
and the sample size achieved in the project.
1. In advance of the study, the evaluation team was told to expect approximately 3,200
households to be enumerated in Shiwa N’gandu, but when field work began it was discovered
that the number of households requiring enumeration in Shiwa N’gandu was actually about one-
third the size of that projection, with the total caseload amounting to just 1,056 households. On
average, this comes to about 55 households per enumerator.
2. In addition to the small caseload in Shiwa N’gandu, the evaluation team encountered
unexpected challenges with the recordings. Many enumerators found that potential beneficiary
households refused to have the interview recorded, and other enumerators neglected to turn
the recorders on before beginning their interviews.
Post Hoc Power Calculations
The standard deviation of total errors observed in the study sample was 5.51 for paper and 3.67 for m-tech. Table A-2 shows the minimum detectable effect that could be achieved with our originally prescribed
sample and with the sample we were able to achieve, given these standard deviations.
Table A-2. Power Calculation with Observed Standard Deviation
Parameter                                  Intended Sample Size    Actual Sample Size
Minimum detectable effect                  5.2                     5.39
Enumerators                                20                      19
Households (per enumerator per method)     40                      10
Observed standard deviation paper          5.51                    5.51
Observed standard deviation m-tech         3.67                    3.67
Note: 90% power, 5% significance, 0.8 ICC.
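Power projections of this kind can also be approximated by Monte Carlo simulation. The sketch below is a hypothetical reconstruction, not the evaluation team’s code: it draws enumerator-level effects carrying the ICC share of the variance, compares cluster means using a normal critical value, and is illustrative only, so it will not reproduce the report’s exact figures.

```python
import random
import statistics
from statistics import NormalDist

def simulate_power(effect, k=19, m=10, sd_paper=5.51, sd_mtech=3.67,
                   icc=0.8, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo power estimate for a two-arm cluster design with k
    enumerator-clusters of m households per arm, analyzed at the level of
    cluster means with a normal critical value (an approximation)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(reps):
        means = {"paper": [], "mtech": []}
        for arm, sd, shift in (("paper", sd_paper, 0.0),
                               ("mtech", sd_mtech, effect)):
            for _ in range(k):
                # enumerator effect carries the ICC share of the variance
                b = rng.gauss(0, sd * icc ** 0.5)
                households = [shift + b + rng.gauss(0, sd * (1 - icc) ** 0.5)
                              for _ in range(m)]
                means[arm].append(statistics.fmean(households))
        diff = statistics.fmean(means["mtech"]) - statistics.fmean(means["paper"])
        se = ((statistics.variance(means["paper"])
               + statistics.variance(means["mtech"])) / k) ** 0.5
        rejections += abs(diff / se) > z_crit
    return rejections / reps
```

Calling `simulate_power` over a grid of effect sizes and finding where the estimate crosses 90% gives a simulated minimum detectable effect.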
ANNEX B. ENUMERATOR CHARACTERISTICS
Table B-1 compares the Shiwa N’gandu enumerators who began with paper to those who began with m-
tech on a variety of observable characteristics. This balance check reveals no statistically significant
differences between the two groups.
Table B-1. Balance Check of Shiwa N’gandu Enumerators by Order of Enumeration Method
Characteristics                      Paper First (N = 9)    M-Tech First (N = 10)
                                     n (%)                  n (%)          P-value
Gender
Male 6 66.67 5 50 0.463
Female 3 33.33 5 50
Age
20 - 29 yrs 2 22.22 4 40 0.454
30 - 39 yrs 4 44.44 4 40
40 - 49 yrs 3 33.33 1 10
50 - 59 yrs 0 0 1 10
Highest Education
Completed primary school 1 11.11 0 0 0.399
Some post-secondary (not university) 1 11.11 1 10
Completed post-secondary (not university) 7 77.78 7 70
Completed university 0 0 2 20
Occupation
Community development / Social welfare official 0 0 4 40 0.108
Teacher 5 55.56 4 40
Other government employee 2 22.22 2 20
Other non-government employee 2 22.22 0 0
Prior Use of S-Phone or Tab
No 5 55.56 4 40 0.498
Yes 4 44.44 6 60
Prior data collection experience
No prior experience 3 33.33 2 20 0.509
Prior paper experience 3 33.33 6 60
Prior digital experience 3 33.33 2 20
Note: P-values were calculated using Pearson's chi-squared test
The impact evaluation included 19 enumerators in Shiwa N’gandu district, while the m-tech and paper
process evaluations in Mungwi and Serenje districts included 21 and 7 enumerators, respectively.
Enumerators in Shiwa N’gandu and Mungwi were primarily male and under the age of 40, while the
enumerators in Serenje were primarily female and more diverse in age. Across the districts,
enumerators were educated, with 70% of those included having completed post-secondary school (not
university). All seven enumerators in Serenje were community development / social welfare officials,
while enumerators in Shiwa N’gandu and Mungwi had other government and non-government jobs.
Around 57% of the enumerators had prior experience with a smartphone. None of these differences
across districts were statistically significant.
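The p-values in Table B-1 come from Pearson’s chi-squared test. For a 2×2 comparison such as the gender row, the statistic can be computed directly; the sketch below is illustrative (using the fact that a chi-squared variable with one degree of freedom is a squared standard normal) and recovers the reported p-value of 0.463:

```python
from math import sqrt
from statistics import NormalDist

def chi2_2x2(table):
    """Pearson chi-squared statistic and p-value for a 2x2 table.
    With one degree of freedom, chi-squared equals a squared standard
    normal, so the p-value follows from the normal CDF."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    stat = sum((obs - r * col / n) ** 2 / (r * col / n)
               for r, row in zip(row_totals, table)
               for col, obs in zip(col_totals, row))
    p_value = 2 * (1 - NormalDist().cdf(sqrt(stat)))
    return stat, p_value

# Gender row of Table B-1: 6 of 9 paper-first and 5 of 10 m-tech-first
# enumerators were male
stat, p = chi2_2x2([(6, 5), (3, 5)])  # p is approximately 0.463
```

Rows with more than two categories (age, education, occupation) need the chi-squared CDF for higher degrees of freedom, which is available in statistical packages such as scipy.stats.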
ANNEX C. ADDITIONAL DETAILS ON MATCHING ACTIVITY AND
ADDITIONAL INCLUSION ERROR ANALYSIS FOR SERENJE
Additional Details on Matching Activity
Table C-1 summarizes the matching activity. The first step of the matching activity was to match
households on the original paper CWAC Form 2 to households in the recorded CWAC Form 2 data (either
in CommCare or Excel). Since data were entered directly from original paper CWAC Form 2 into Excel or
CommCare, these records should have matched perfectly. However, there were many cases of records in
the original CWAC Form 2 that could not be matched to the recorded CWAC Form 2, and of households
in the recorded CWAC Form 2 that could not be matched to households on the original CWAC Form 2.
In Mungwi and Shiwa N’gandu, 202 households (2.8%) on original CWAC Form 2 had no match in
CommCare CWAC Form 2, and 360 households (5%) in CommCare CWAC Form 2 had no match in original
CWAC Form 2. These households were not included in the exclusion and inclusion analysis, since there
was no way to determine if they were left out of the CommCare database, or if their information was mis-
entered, making matching impossible. If these mismatches were due to data entry error alone, we
would expect to find roughly the same numbers of unmatched households in each database, but we find
more unmatched records in CommCare than on the original paper forms. One possible explanation for
this finding is that some enumerators were starting a new Form 2 record after making a mistake, rather
than correcting the old record, which would result in the extra Form 2 records observed.
In Serenje, a total of 324 (18%) households on the original Form 2 could not be matched to Excel records or
Form 1s, and 118 (6%) could be found on the original Form 2s and on Form 1 but could not be found on the
Excel Form 2. As in Mungwi and Shiwa N’gandu, these records were not included in the inclusion and
exclusion analysis. Excel versions of CWAC Form 2 were not available for three CWACs, which accounts
for many of the unmatched records on the original Form 2. Without Excel spreadsheets for these CWACs, it is
not possible to subject these records to the same inclusion test used for Mungwi and Shiwa N’gandu.
However, we report on their presumed inclusion status by comparing the original CWAC Form 2 directly
to SCT Form 1s in Annex D.
The final step of the matching process was to pair CWAC Form 2 records to SCT Form 1 records, to
determine if households had been enumerated. Every household with a Form 1 record should be found
on Form 2, but there were several instances when this was not the case. In Shiwa N’gandu, 19 households
(1.1%) that could be found in CommCare, but not on original CWAC Form 2s were found in the MIS, and
32 households (1.9%) appeared in the MIS but could not be found in the original or CommCare versions
of CWAC Form 2. In Serenje, there were 18 households (1%) which were found in SCT Form 1 photos but
could not be located in the original CWAC Form 2s or Excel CWAC Form 2s. These records were all excluded
from the analysis.
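The matching categories summarized in Table C-1 amount to set operations over household identifiers across the data sources. The sketch below is a simplified illustration with hypothetical integer IDs; the actual matching keyed on household details recorded on the forms:

```python
def classify_records(original, commcare, mis):
    """Classify household IDs by the combination of data sources that
    contain them, mirroring the Mungwi / Shiwa N'gandu categories of
    Table C-1. The IDs here are hypothetical stand-ins for the household
    identifiers used in the actual matching."""
    original, commcare, mis = set(original), set(commcare), set(mis)
    return {
        "original_and_commcare": (original & commcare) - mis,
        "original_commcare_and_mis": original & commcare & mis,
        "only_original": original - commcare - mis,
        "commcare_no_original": (commcare - original) - mis,
        "commcare_and_mis_only": (commcare & mis) - original,
        "only_mis": mis - original - commcare,
    }

# Toy example: household 1 appears only on the paper form, household 5
# only in CommCare, and household 6 only in the MIS
groups = classify_records([1, 2, 3, 4], [2, 3, 4, 5], [3, 6])
```

Counting the size of each group, per district, yields a summary with the same structure as Table C-1.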
Table C-1. Summary of Matching
                                           Mungwi      Shiwa N’gandu   Serenje     Total
                                           N      %    N      %        N      %    N      %
Mungwi and Shiwa N’gandu
In Original Form 2 and CommCare Form 2 5,158 93.6 1,017 60.4 - - 6175 68.3
In Original Form 2, CommCare, and MIS - - 409 24.3 - - 409 4.5
Only in Original Form 2 103 2 99 5.9 - - 202 2.2
CommCare Form 2 No Original Form 2 252 4.6 108 6.4 - - 360 4.0
In CommCare and MIS (not Original Form 2) - - 19 1.1 - - 19 0.2
Only in MIS - - 32 1.9 - - 32 0.4
Serenje
Original Form 2, Excel Form 2, and Form 1 - - - - 638 35 638 7.1
Original Form 2 and Excel Form 2 - - - - 746 40.4 746 8.3
Original Form 2 and Form 1 - - - - 118 6 118 1.3
Excel Form 2 and Form 1 - - - - 1 0.1 1 -
Original Form 2 Only - - - - 324 18 324 3.6
Excel Form 2 Only - - - - 3 0.2 3 -
Only Form 1 - - - - 18 0.97 18 0.2
Total 5,513 100 1,684 100 1,848 100 9,045 100
Notes: Cases included in the analysis are bolded. Percentages are the share of total households in that district. The percentage of the total is the share of all households in the sample.
Additional Inclusion and Exclusion Error Analysis for Serenje
Table C-2 examines inclusion and exclusion errors for households in Serenje that could be found only on
Paper CWAC Form 2s or only on Excel Form 2s. The inclusion error rate among these households was
similar to the rate among the other Serenje households, at 3.5% (N=16). However, the exclusion error
rate among these households is higher, at 51.9% (N=241).
Table C-2. Inclusion and Exclusion Errors for Households with Only One Version of Form 2, Serenje
N %
Inclusion Error (No Excel Form 2 Available) 16 3.5
Exclusion Error (No Excel Form 2 Available) 241 51.9
Exclusion Error (No Original Form 2 Available) 3 0.7
Correct Inclusion (No Excel Form 2 available) 102 22
Correct Exclusion (No Excel Form 2 available) 1 0.22
Correct Inclusion (No Original Form 2 available) 83 17.9
SCT Form 1 Only 18 3.9
Total 464 100
ANNEX D. ADDITIONAL FINDINGS FROM QUALITATIVE INTERVIEWS
WITH POTENTIAL BENEFICIARIES
In addition to questions related to the method of enumeration used, interviews with potential
beneficiaries also covered general questions about their knowledge and impressions of the enumeration
process and of the SCT program. These interviews revealed that while potential beneficiaries’ knowledge
of the process was limited, their impressions were generally positive (Table D-1).
Table D-1. Potential Beneficiary Interview Findings
N (%)
Person who told beneficiary about enumeration
Was not told about enumeration 26 (56.52)
CWAC member 17 (36.96)
Other community leader 2 (4.35)
Family member 1 (2.17)
Reason for enumeration
Don't know why 15 (31.91)
Know why - no reason given 4 (8.51)
Assessment for government assistance 15 (31.91)
Find out problems/living conditions 7 (14.89)
Verification of problems/living conditions 2 (4.26)
Register households 2 (4.26)
Check on status of children 1 (2.13)
Other 1 (2.13)
Feeling about enumeration questions
Good 40 (86.96)
Mixed 3 (6.52)
Nervous 1 (2.17)
Neutral 2 (4.35)
Knew of SCT program
No 26 (57.78)
Yes 19 (42.22)
Thoughts on how HHs qualify
Don't know 23 (52.27)
Choose disabled / poor / elderly 15 (34.09)
CWAC registers those in need 5 (11.36)
CWAC registers, District approves 1 (2.27)
Fewer than half (43.5%) of the beneficiaries were notified of the enumeration interview in advance. Those who
had been notified were mainly told by the CWAC member. Beneficiaries reported numerous reasons for
enumeration. Fifteen of the beneficiaries (31.9%) did not know why they were being enumerated.
Another 15 (31.9%) reported that it was to assess their living conditions and tied it to government
assistance, while seven others (14.9%) just mentioned that it was to assess their living situation. Notably,
two beneficiaries (4.3%) thought it was to register households for the program.
The majority of respondents (N = 40, 87.0%) felt positively about the questions that were being asked.
Many noted that they felt they had demonstrated that their household was in need of assistance. Others
had mixed feelings, highlighting discomfort with specific questions. One woman said, “I had some
problems being asked some personal questions. I was asked if I am staying with my husband. I felt bad
because we are on separation.” Another noted that, “The one about children gave me a bad feeling in
that it reminded me of my two children who are disabled.” One beneficiary reported, “The method of
getting personal information is worrying. We were worried; we thought they were going to get our land.”
In general, respondents were not familiar with the SCT program: 26 (57.8%) potential
beneficiaries said they did not know what the SCT program was. Additionally, 23 (52.3%) reported that
they did not know how households qualified. As one beneficiary said, “I don’t know how [households
qualify]. In this village, they include people who are not poor, leaving out those who are.” Another
reported that “I have no idea other than thinking that they are just randomly chosen from those that are
registered.” Others reported that households that were poor or had disabled or elderly people were
selected. Five (11.4%) beneficiaries said that CWAC members register the households; only one person
(2.3%) correctly said the CWAC registers households that then go through an approval process by the
district. Though respondents who knew something about the program generally knew that it was targeted
at poor households, many appeared to believe that the process was subjective. When asked how
households qualify, one respondent said, “They look at how the person looks and the way they live.”
Another beneficiary said, “I wonder whether it's the machine that excludes people from getting registered
or if it is the CWAC and government officials.”
None of the beneficiaries were negative about the SCT program or about the way in which households
qualified. One respondent noted, “It is a program of helping poor people. Those who have been receiving
help have now improved. At least now they can afford to get food.” Another said, “The government is
doing a good thing of helping the poor. That help improves people’s lives.” It was clear that some
beneficiaries thought the interview meant that they would start receiving money. When asked if he had
been aware that the interview was going to occur, one respondent said, “Yes, I was told by a CWAC
member. I was expecting to find assistance in terms of money to help me buy fertilizer.” Another
beneficiary said that he was expecting to start getting paid by the government as a next step. Other
respondents, however, understood that the interview was not a guarantee. One respondent noted, “I am
waiting for the response just as the CWAC told me. He told me that when you are registered, it does not
mean you have automatically qualified to be on the program.”
ANNEX E. CWAC FOCUS GROUP DISCUSSIONS
In addition to inquiring about the role of the CWACs in the enumeration process and their perceptions of
using mobile technology in enumeration, the evaluation team used meetings with the CWACs to ask
broader questions about their thoughts of the overall SCT program and how they viewed their roles.
Each focus group viewed the overarching role of the CWAC as being the eyes of the government: “We act
as their eyes and we also look at those people that really lack in their living conditions and are identified
as aid seeking.” Another Shiwa N’gandu member said, “The CWAC is a group of people that is directly in
contact with the community and knows the vulnerable in these communities. The government is not
directly close to the community. The CWAC is there to help people that are vulnerable.” This role as the
intermediary between the government and the community manifested itself in the following tasks:
identifying households that fit the eligibility criteria, accompanying the enumerators, communicating
messages from the government within the community, following up with households to see how they
were progressing and spending the money, and informing households when to expect payment.
It is clear that CWAC members believe in what they do and are motivated by a desire to help those who
are less fortunate. One member in Shiwa N’gandu said, “It’s something that is easy in helping our fellow
people in our community, because we are helping people that are less privileged in our society. They are
people in our communities that cannot be heard, and hence, it’s through us that they are helped.” Another
member noted, “We have a lot of school-going children in our community that actually have no one
capable of supporting them. I joined the CWAC with the thought of being provided a platform to speak
for such people.”
The fact that all CWAC members served voluntarily emerged as a common theme between discussion
groups. CWAC members took pride in the fact that they were volunteers – assuming the role either from
the direct invitation of community leaders or of their own initiative. Some went further to express that
this was a necessary component of their mandate to help people. One member from Shiwa N’gandu
noted, “Being a CWAC member, you have to be a volunteer - it has to go together with Christianity.” This
nature of volunteerism, however, also had its downsides. One CWAC member noted that the larger
community did not believe that an individual would be willing to work at that intensity for free, and
therefore assumed that CWAC members were highly paid. One member in Shiwa N’gandu said,
“Sometimes it becomes very annoying because you get to be insulted for work that you don’t get paid
for.”
This highlights the tension between the empowerment of the CWAC members who perceive themselves
as extensions of the Social Welfare department and their lack of decision-making power within the
program. Many CWAC members noted that they felt they lacked the tools to do their jobs well.
Specifically, they cited the lack of transport and supplies (such as gumboots and raincoats) as major
challenges. Beyond logistical considerations, CWAC members expressed that the enumeration process, in
some ways, undermined their role. “The people that we listed down were really vulnerable to the extent
where even if you personally decided to visit them, you would confirm their vulnerability, but they were
left out, and so they bitterly complained because others who were selected seem to be far better in their
living.” As another member of Shiwa N’gandu noted, “There is need of trusting that the CWAC members
are actually doing the right thing when they are in the field, especially in terms of what they say about the
vulnerability of some households, but instead…they don’t trust the CWAC [and wonder] how come they
use the computer in determining who qualifies for enumeration and who doesn’t.”
Indeed, CWAC members often cited that the response from the community is mixed. Some noted that
community members were very grateful, whereas others raised the concern that the CWAC gets blamed
by those who don’t qualify. One member said, “Others are very happy with the CWAC but others are not,
and CWAC members are insulted because these people think they were not selected to be part of the
program because the CWAC members decided to leave them out.”
This sentiment points to a larger challenge of lack of transparency around the system. “People don’t
understand the amounts they were being given because others were getting 140 ZMW and others 280
ZMW.” Another member said, “We have noticed that we are given different instructions for every
enumeration which I think ends up confusing us as CWAC members and gives a bad impression on us
towards the community members.”
To conclude, CWAC members took pride in their role as the government’s local representative and were
driven in their larger mission of serving their communities. Many CWACs, however, struggled with the lack
of transparency in the SCT program, facing questions from the community regarding their roles as well as
perceived inequalities in the cash-transfer program. Finally, while CWAC members generally felt
motivated and empowered in their roles, many felt they lacked the autonomy and support needed to be
fully effective.
ANNEX F. TRANSCRIPT ANALYSIS
Results from the transcript analysis were presented for a select group of questions in the body of this
report and for all survey questions in the tables below (Table F-1 & Table F-2).
Table F-1. Unique Translations Used per Enumerator
                                  Paper           M-tech
Question                          Mean     SD     Mean     SD     P-value     Number of Distinct Translations
3.01 Material of Roof 3.50 1.29 2.64 1.50 0.31 15
3.02 Material of Floor 3.25 1.50 3.00 1.65 0.79 13
3.03 Material of Walls 2.25 0.50 3.22 1.79 0.16 11
3.04 Energy for Light 2.50 0.58 2.23 1.17 0.55 8
3.05 Energy for Cooking 2.75 0.96 3.31 1.44 0.40 9
3.06 Main Toilet Facility 2.50 1.73 2.46 1.33 0.97 11
3.07 Own Item 1.25 0.50 1.46 0.52 0.50 3
Clock 1.25 0.50 1.00 0.00 0.39 2
Radio 1.75 0.50 1.62 0.65 0.68 3
Iron 1.00 0.00 1.08 0.28 0.34 3
TV 1.00 0.00 1.15 0.38 0.17 2
Bike 1.00 0.00 1.00 0.00 1
Bed 1.50 1.00 1.15 0.38 0.54 3
Mattress 1.00 0.00 1.00 0.00 1
Sofa 2.25 0.50 2.54 0.78 0.41 5
Chair 2.00 1.15 2.42 1.08 0.55 5
Table 2.00 0.00 1.85 0.38 0.17 3
Ox Cart 1.50 1.00 1.23 0.60 0.64 4
Plough 2.00 1.00 1.46 0.78 0.46 4
Ox Drawn Harrow 1.67 0.58 2.43 1.13 0.20 12
Hammer Mill 1.00 0.00 1.08 0.29 0.34 2
Pump 1.50 0.71 1.89 1.17 0.59 6
Boat 1.00 0.00 1.15 0.38 0.17 3
Net 1.00 0.00 1.62 0.65 0.01 3
Axe 1.00 0.00 1.08 0.28 0.34 2
Hoe 1.00 0.00 1.00 0.00 1
3.08a Land Owned 3.75 2.36 3.69 1.44 0.97 20
3.08b Land Cultivated 3.75 2.36 3.00 1.08 0.58 25
3.09 Length of Time Lived in Place 3.25 1.26 2.69 0.85 0.45 12
3.10 Animals Owned 2.25 0.96 2.15 0.69 0.86 4
Cattle 1.00 0.00 1.00 0.00 1
Horse 2.00 1.15 2.38 0.77 0.57 4
Goat 1.00 0.00 1.08 0.28 0.34 2
Sheep 1.00 0.00 1.08 0.28 0.34 3
Pig 1.00 0.00 1.08 0.28 0.34 3
Poultry 1.25 0.50 1.31 0.48 0.85 2
3.11 Benefit from Any Program? 4.00 1.41 2.23 1.24 0.08 10
Public Welfare Assistance Scheme (PWAS) 1.25 0.50 1.36 0.67 0.73 4
Food Security Pack 1.75 0.50 1.88 0.99 0.78 8
Fertilizer 1.75 0.50 1.73 0.65 0.94 5
Women's Empowerment Fund 2.00 1.00 1.67 0.71 0.63 4
School Bursary 2.50 0.71 2.22 0.97 0.69 7
Social Protection Fund 1.50 0.71 1.50 1.22 1.00 5
3.12 Meals in a Day 2.75 0.50 2.00 1.15 0.09 7
3.13 No Food Because of Lack of Resources 4.25 1.50 2.77 1.36 0.14 13
3.14 Sleep Hungry 3.25 0.50 3.77 1.54 0.31 19
3.15 Whole Day and Night with No Food 2.33 1.53 2.92 1.00 0.58 12
Notes: P-values are based on a two-sample comparison of means using a Student’s t-test with unequal variance.
Table F-2. Inaccurate, Unclear, or Leading Translations used per Question
                                  Paper           M-tech
Question                          N       %       N       %      P-value
2.04 Date of Birth 7 7.1 10 7.1 0.98
2.05 Orphanhood Status 34 34.3 36 25.7 0.15
2.06 Attended School 13 13.1 22 15.7 0.58
2.07 Grade Attained 6 6.1 13 9.3 0.36
2.08 Fit for Work 3 3 3 2.1 0.67
2.09 Disability 31 31.3 27 19.3 0.03
2.10 Chronic Illness 45 45 23 16.4 <0.01
3.01 Material of Roof 43 55 39 42.4 0.10
3.02 Material of Floor 70 84 82 78.8 0.34
3.03 Material of Walls 14 19.4 21 23.3 0.55
3.04 Energy for Light 5 5 7 5.2 0.97
3.05 Energy for Cooking 40 42 57 44.2 0.76
3.06 Main Toilet Facility 74 85 94 79 0.27
3.07 Own Item
Clock 0 0 0 0
Radio 0 0 0 0
Iron 1 1.1 0 0 0.23
TV 0 0 0 0
Bike 0 0 0 0
Bed 0 0 0 0
Mattress 0 0 0 0
Sofa 0 0 0 0
Chair 0 0 0 0
Table 3 4 0 0 0.02
Ox Cart 2 3 1 0.8 0.23
Plough 0 0 0 0
Ox Drawn Harrow 9 37.5 24 40.7 0.79
Hammer Mill 0 0 0 0.0
Pump 0 0 1 1.4 0.46
Boat 1 2 0 0 0.15
Net 0 0 0 0
Axe 0 0 0 0
Hoe 0 0 0 0
3.08a Land Owned 50 54 72 55.4 0.88
3.08b Land Cultivated 26 35 40 37 0.74
3.09 Length of Time Lived in Place 2 2 0 0 0.09
3.10 Animals Owned 0 0 0 0
Cattle 0 0 0 0
Horse 0 0 0 0
Goat 0 0 0 0
Sheep 0 0 0 0
Pig 0 0 0 0
Poultry 0 0 0 0
3.11 Benefit from Any Program? 19 19.8 17 12.8 0.15
PWAS 1 2.3 6 9.4 0.14
Food Security Pack 6 14.6 6 12.0 0.71
Fertilizer 1 1.7 0 0 0.23
Women's Empowerment Fund 0 0 0 0
School Bursary 3 6.4 2 2.5 0.28
Social Protection Fund 1 3.1 3 7.1 0.45
3.12 Meals in a Day 9 9.2 18 13.4 0.32
3.13 No Food Because of Lack of Resources 35 42.2 47 42.0 0.98
3.14 Sleep Hungry 45 52.9 59 52.2 0.92
3.15 Whole Day and Night with No Food 33 55 57 62.0 0.39
Note: P-values were calculated using Pearson’s chi-squared test.
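The proportions in Table F-2 are compared with Pearson's chi-squared test on the underlying counts. A minimal sketch for a 2x2 table (errors vs. no errors, paper vs. m-tech) follows; the counts used are illustrative and chosen only to match the scale of the 2.10 row, not taken from the study's microdata.

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson's chi-squared statistic (no continuity correction) for a
    2x2 table [[a, b], [c, d]] of observed counts."""
    n = a + b + c + d
    # Expected count for each cell = (row total * column total) / n
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative counts: 45/100 paper errors vs. 23/140 m-tech errors
stat = chi_squared_2x2(45, 55, 23, 117)
```

A statistic this large against a chi-squared distribution with one degree of freedom corresponds to p < 0.01, consistent with the table's Chronic Illness row.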
ANNEX G. COST MODEL
The evaluation team constructed a cost model to compare differences in the costs associated with the
scale-up of m-tech and paper enumeration methods, as well as the long-term cost implications of a
transition from paper to m-tech. These models are included below.
Overall, the model found that the cost dimension of the paper versus m-tech decision is negligible. The
budgetary difference between paper and m-tech, in terms of both the scale-up and the annual running
costs, is small relative to the overall size of the program.
Scope
This model aims to compare the costs of enumeration between the two methods; it therefore does not include the many programmatic costs of the SCT scale-up that are not directly related to the enumeration exercise.
Data
The evaluation team requested budgets and expenditure data from District and Provincial Social Welfare
Offices, as well as from cooperating partners involved in direct technical support of the SCT. Cost
estimates were used in situations where stakeholders could not provide exact figures.
Key Assumptions
MCDMCH Policy Assumptions
MCDMCH will decide to scale up the SCT program to the remaining 53 districts over two years, scaling to 30 districts in 2016 and 23 in 2017.
Social Welfare Officers will conduct retargeting for the SCT program in districts once every three years.
Timing, Logistical, and Geographic Assumptions
Enumeration will only take place during the dry season – approximately 7 months out of the year.
Enumerators will spend, on average, 15 days in the field during an enumeration period.
50 CWACs will be enumerated in each district.
The evaluation team observed an average of 43 CWACs enumerated across Shiwa N’gandu,
Mwense, and Mungwi, and 19 CWACs enumerated in Serenje. Since these enumeration
activities were mop-up exercises from previous enumerations, the figure was rounded up to 50.
Personnel Assumptions
Each district has 3 staff members: A District Social Welfare Officer, an Assistant District Social
Welfare Officer, and an Assistant Program Officer
Provinces have two staff members who work on the SCT program: A Provincial Social Welfare
Officer, and a Senior Provincial Social Welfare Officer.
Paper enumeration staff needs assumptions:
o All three district staff spend the full 15 enumeration days working on enumeration, since
they are responsible for logistics and data quality.
o All three district staff also spend three days after enumeration checking forms.
o No provincial staff are involved in enumeration.
M-tech enumeration staff needs assumptions:
o All three district staff spend half of their time during the enumeration period working on
enumeration, since they are primarily responsible for logistics.
o Both provincial staff each spend four days reviewing data during the enumeration
process.
Each district employs one driver who works throughout the enumeration.
Process costs assumptions
Software development costs are borne entirely at the beginning of the process.
Training
o Additional training for all HQ, provincial, and district staff is needed at the start of scale-
up.
o Paper enumerator training is 75% as expensive as m-tech training, to account for extra
time needed to learn m-tech skills.
Paper process costs eliminated from m-tech include:
o Quality assurance costs.
o Transport of paper forms to Lusaka.
o Entry of paper forms.
Equipment & Other Cost Assumptions
Tablets for enumeration will be grouped in “sets” that are managed by the provincial office.
Each set of tablets will contain 25 devices – the number required to enumerate an entire district.
11 sets of tablets will be needed throughout the country for retargeting activities.
o If retargeting takes place over a three-year cycle, each province will need to re-enumerate
1/3 of its districts every year. All provinces can conduct this activity using one set of
tablets – with the exception of Western Province, which would require two sets of tablets.
Each district office will be given three tablets - one for each district staff member.
Each provincial office will be given three tablets - one for each staff member who works full time
on the SCT program and one extra.
Social Welfare Department HQ will be given eight tablets - one for each staff member.
The depreciation cost of the district vehicle billable to the enumeration process was included.
Other equipment required for both paper and m-tech enumeration includes:
o Backpacks
o Notebooks
o Pencils
o Vehicle depreciation
o Allowances for CWAC members
o Fuel
Other equipment required for m-tech enumeration includes:
o Equipment maintenance
o Power strips for district offices
o Internet modems for district offices
o External battery packs for enumerator tablets
o Locked cabinets for storage in provincial offices
o Tablet covers and screen protectors
o Paper CWAC Form 2s
o Notebooks
o Backpacks
Equipment and inputs required for paper enumeration only includes:
o Paper CWAC Form 2s and SCT Form 1s
o Clear bags
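Under the equipment assumptions above, the national tablet requirement can be reproduced as a simple tally. The count of 103 districts is an assumption inferred from the modem and power-strip quantities in Table G-2 (it is not stated explicitly in the assumptions); the sketch is illustrative, not an official calculation.

```python
# Tally of tablets needed nationwide, following the equipment assumptions.
SET_SIZE = 25       # tablets per enumeration set (enough for one district)
SETS = 11           # sets needed for the three-year retargeting cycle
DISTRICTS = 103     # assumed number of districts nationwide (see Table G-2)
PROVINCES = 10      # provincial offices
HQ_TABLETS = 8      # one per Social Welfare Department HQ staff member

tablets = (
    SETS * SET_SIZE      # enumeration sets: 275
    + DISTRICTS * 3      # three per district office: 309
    + PROVINCES * 3      # three per provincial office: 30
    + HQ_TABLETS         # HQ: 8
)
```

The tally comes to 622, matching the tablet quantity in Table G-2.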
Table G-1. Paper Enumeration, Cost of 2016 -2017 SCT Scale-Up
Unit Cost Units Total Cost
Running Costs of Enumeration (per District)
Preparation
Printing CWAC Form 2 ZMW 0.26 1000 ZMW 259
Printing SCT Form 1 ZMW 4.71 4200 ZMW 19,777
Enumerator training ZMW 58,266 1 ZMW 58,266
Enumeration
Provincial staff costs ZMW 500 0 ZMW 0
District staff (acting as supervisors) costs ZMW 350 54 ZMW 18,900
Vehicle use ZMW 20,000 1 ZMW 20,000
Enumerator DSA ZMW 550 300 ZMW 165,000
District staff (acting as supervisors) DSA ZMW 600 6 ZMW 3,600
Driver DSA ZMW 320 8 ZMW 2,560
Missing lunch / transport for enumerators ZMW 150 120 ZMW 18,000
Quality assurance ZMW 7,000 1 ZMW 7,000
Pencils ZMW 1 20 ZMW 20
Clear bags ZMW 11 20 ZMW 220
Notebooks ZMW 10 20 ZMW 200
CWAC member compensation ZMW 18,627 1 ZMW 18,627
Fuel ZMW 9,017 1 ZMW 9,017
Backpacks ZMW 75 20 ZMW 1,500
Data Entry
Form transportation to Lusaka ZMW 4,000 1 ZMW 4,000
Data entry ZMW 5 3500 ZMW 16,481
Total Cost for 1 District ZMW 363,427
Totals
2016 Costs
Scale-up to 30 districts ZMW 363,427 30 ZMW 10,902,801
2017 Costs
Scale-up to 23 districts ZMW 363,427 23 ZMW 8,358,814
Total Cost of Scale-up
Scale-up to 53 Districts ZMW 19,261,616
Table G-2. M-tech Enumeration, Costs of 2016-2017 SCT Scale-Up
Unit Cost Units Total Cost
One Time Scale-up Costs
Technical Equipment
Tablets ZMW 1,417 622 ZMW 881,546
Covers ZMW 114 622 ZMW 70,611
Screen protectors ZMW 66 622 ZMW 40,827
External battery packs ZMW 100 275 ZMW 27,500
Internet modem ZMW 420 103 ZMW 43,260
Locked cabinet ZMW 2,500 10 ZMW 25,000
Power strips ZMW 40 309 ZMW 12,360
Air freight & shipping ZMW 87 622 ZMW 54,157
Technical Services
Software provider ZMW 600,000 1 ZMW 600,000
Training for HQ and provincial staff ZMW 100,000 1 ZMW 100,000
Total one time technical costs ZMW 1,855,262
Running Costs of Enumeration (per District)
Preparation
Print CWAC Form 2 ZMW 0.26 1000 ZMW 259
Enumerator training ZMW 77,688 1 ZMW 77,688
Enumeration
District staff (acting as supervisors) time ZMW 350 22.5 ZMW 7,875
Provincial staff time ZMW 500 8 ZMW 4,000
Vehicle use ZMW 20,000 1 ZMW 20,000
Enumerator DSA ZMW 550 300 ZMW 165,000
District staff (acting as supervisors) DSA ZMW 600 6 ZMW 3,600
Driver DSA ZMW 320 8 ZMW 2,560
Internet bundle (district office) ZMW 1,200 1 ZMW 1,200
Internet bundle (enumerators) ZMW 6,000 1 ZMW 6,000
Missing lunch / transport for enumerators ZMW 150 120 ZMW 18,000
Notebooks ZMW 10 20 ZMW 200
Pencils ZMW 1 20 ZMW 20
CWAC member compensation ZMW 18,627 1 ZMW 18,627
Fuel ZMW 9,017 1 ZMW 9,017
Backpacks ZMW 75 20 ZMW 1,500
Equipment Maintenance
Tech maintenance ZMW 1,069 1 ZMW 1,069
Total Cost for 1 District ZMW 336,615
Totals
2016 Costs
Total one time technical costs ZMW 1,855,262 1 ZMW 1,855,262
Total cost of scale-up to 30 districts ZMW 336,615 30 ZMW 10,098,450
2017 Costs
Total cost of scale-up to 23 districts ZMW 336,615 23 ZMW 7,742,145
Total Cost of Scale up to 53 Districts ZMW 19,695,856
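The bottom-line comparison between the two methods can be sketched directly from the per-district totals and district counts in Tables G-1 and G-2. This is an illustrative recomputation; the small discrepancies against the tables' printed grand totals reflect rounding of the per-district figures.

```python
# Per-district running costs and one-time costs from Tables G-1 and G-2 (ZMW)
PAPER_PER_DISTRICT = 363_427
MTECH_PER_DISTRICT = 336_615
MTECH_ONE_TIME = 1_855_262
DISTRICTS_2016, DISTRICTS_2017 = 30, 23

districts = DISTRICTS_2016 + DISTRICTS_2017          # 53-district scale-up
paper_total = PAPER_PER_DISTRICT * districts
mtech_total = MTECH_ONE_TIME + MTECH_PER_DISTRICT * districts
difference = mtech_total - paper_total               # extra cost of m-tech
```

The difference works out to roughly ZMW 434,000 over the two-year scale-up, about 2% of the paper total, which is why the report treats the cost dimension of the paper versus m-tech decision as negligible.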
ANNEX H. COOPERATING PARTNERS
The SCT program benefits from the support of an engaged group of cooperating partners. The diversity of
this group allows the CPs to be flexible and responsive to MCDMCH’s needs. This diverse group of
institutions provides three types of assistance: 1) donor support, 2) policy guidance, and 3) technical
expertise.
Donor Support
DFID, Irish Aid, and the Embassy of Finland currently provide donor support to the SCT program. Most of
this support comes in the form of technical assistance directed through UNICEF, with some additional
resources channeled directly to MCDMCH. DFID, as the lead donor, plays a role in coordinating these
efforts.
United Kingdom Department for International Development (DFID)
DFID is midway through its 10-year (2010-2020) £38 million commitment to supporting the SCT
program. The long time horizon of this engagement gives DFID the flexibility to adapt its support to
changing program priorities, design, and structure.
DFID financial support takes two primary forms: technical assistance, which is provided directly and also
channeled through UNICEF, and direct support to procure equipment such as computers, vehicles, office
supplies, and materials for enumerators and CWACs. DFID’s strategic goal is to facilitate the nationwide
scale up of the SCT program by supporting the development of a robust program that can function
nationwide. To that end they have focused on supporting the development of key systems such as the
targeting mechanism; registration system; payment system; and MIS.
DFID’s support extends beyond financial and procurement assistance, and includes both high-level policy
support and direct technical assistance to MCDMCH. A DFID staff member spends two days per week
working from MCDMCH and providing direct assistance with communications strategy development,
financial planning and budgeting, and program planning.
Embassy of Finland
The Embassy of Finland’s support for social protection activities in Zambia, and for the SCT program
specifically, falls under the umbrella of its activities related to Governance and Accountability in Zambia.
As a small embassy, it channels most of its support through UNICEF, though it provides some direct
assistance to MCDMCH as well. Support for the SCT program is embedded within the Embassy’s broader
efforts to support social welfare in Zambia, a portfolio the Embassy intends to scale up over the coming
years.
Irish Aid
Irish Aid’s engagement with the SCT program falls under its support for improving the delivery of social
services and for nutrition and food security in Zambia. Irish Aid channels support for capacity strengthening
for the SCT through UNICEF and also provides direct support to MCDMCH. Irish Aid also supports the
Platform for Social Protection, in an effort to strengthen the role of civil society advocacy and monitoring
of the SCT program and its policies.
The donor representatives interviewed expressed support for the SCT program. Donors appreciate the
growing evidence base for cash transfers and the positive results of the impact evaluation of the SCT
program in Zambia.
To that end, donors are prepared to support technical innovations to the SCT program. In the event
government takes the decision to scale up with an m-tech solution, the donors would be willing to provide
financial support to facilitate this roll out.
Donors noted that they are particularly well-positioned to provide support for one-off costs of designing
and building a robust system, such as procurement of tablets and other equipment, technical assistance,
and capacity building and training for government officials. Providing this type of programmatic support
to the SCT program falls within existing donor frameworks and agreements and would not require any
changes in policy. Furthermore, existing donor procedures can be used to manage this support, with
technical assistance continuing to be channeled primarily through UNICEF and procurement support
continuing directly with MCDMCH.
Policy Support
International Labour Organization (ILO)
ILO engages alongside other CPs to provide policy support for Zambian Social Protection Policy. ILO’s
primary activities in Social Protection are to provide demand driven technical advice to support national
policy-making. To date they have not been heavily engaged in the SCT program, but they continue to
participate in donor coordination meetings and remain available to provide assistance on emergent
questions. Since the ILO is not currently involved in direct technical support to the SCT program, the
method of enumeration for an SCT scale-up would not affect their activities. However, ILO plans to
continue to monitor the SCT program and remain available to provide policy guidance and support when
their expertise can be useful.
Platform for Social Protection (PSP)
The evaluation team was not able to meet with representatives from the Platform for Social Protection
due to scheduling conflicts.
Technical Support
UNICEF
UNICEF has been engaged in providing technical and policy assistance to the SCT program since its
inception in 2004. As the primary channel for financial support for the other CPs, UNICEF’s assistance has
supported a range of MCDMCH activities from direct logistical support; to the procurement of training,
evaluation, and quality assurance services; to high-level policy guidance. During Rounds 1 and 2 of
enumeration in 2014, UNICEF worked closely with counterparts at MCDMCH, providing direct assistance
on a variety of logistical issues from photocopying paper forms to procuring the services of a quality
assurance team.
If MCDMCH decides to scale up with m-tech enumeration, UNICEF would be uniquely positioned to
provide assistance on a variety of logistical challenges that may arise with the scale-up. The nature and
level of UNICEF’s engagement would not change, but UNICEF’s focus would pivot to support of SCT scale-
up with m-tech.
World Food Programme (WFP)
Because of their demonstrated expertise in implementing mobile-data solutions, WFP’s support of the
SCT program focuses on areas where technological solutions can play a role in improving program
functions and outcomes. To date this has included an active role in the group of partners working to
procure a Payment Service Provider to administer electronic payments for the SCT.
WFP took the lead role in organizing the m-tech pilot, procuring the services of the software developers,
leading trainings, and taking on nearly all of the logistical tasks associated with roll out of a mobile data
collection platform including procuring SIM cards, talk time, and other accessories.
If MCDMCH decides to scale up with m-tech, WFP will continue to be an engaged partner, sharing
knowledge and lessons learned from the pilot and helping to formulate all stages of the government’s
work plan for a scale-up. However, at the same time, WFP will aim to step back and allow for more
ownership from MCDMCH in the execution of the program. WFP’s engagement in the search for a
payment service provider and its deep understanding of the m-tech solution make it uniquely positioned
to provide coordination support between enumeration and payment services, a role it is likely to
assume in the future.