Data Quality and Improvement at USAID



By Cara E. Jones, PhD, Monitoring and Evaluation Manager at ORB International | September 20, 2018 

For the past several years, USAID and other U.S. government (USG) bodies have made a series of improvements in how they gather, manage, and disseminate evaluations and data, and in the quality of those data. Gaining steam across all agencies, the movement toward data-driven development relies on 1) good quality data collected in a timely manner that is 2) provided to USAID and other partners who are 3) educated in the uses of, and potential challenges in using, these data to make funding and programming choices.

Independent studies conducted in recent years show high reported usage of evaluations: 93% were used in some capacity, and usage at the country mission level was strong, with 59% of approved Country Development Cooperation Strategies (CDCS) referencing evaluation findings. Coupled with changes in data collection and quality, more robust reporting, and other measures designed to improve knowledge management and facilitate learning across the agency, USAID has moved to prioritize better quality data, better management, and better decision making. The growing use of mobile and tablet-based data collection, GPS data and mapping exercises, and other new data technologies continues to support these endeavors.

The changes technology has brought to our industry are being watched closely by our clients. Similarly, their own understanding of what ‘good data’ means in various contexts continues to strengthen. Research shows that anywhere between 5% and 40% of data collected is routinely falsified by interviewers. But technology can also be used to root out falsification and to continually retrain and build the capacity of local partners.

Proprietary systems that rely on several new data technologies help detect data fraud. Collecting data exclusively on tablets greatly reduces the potential for the fraud, misuse, and entry errors common to paper surveying. It also allows far more rapid collection and upload of data, translating to faster study results. A series of software-driven checks on the data ensures reliability, including flags for time of day, interviewer voice, ambient noise, and interview duration. This allows us to provide initial quality control on 100% of the data we collect. We then couple this with staff-driven checks, including audio verification, GPS verification, and sampling verification. These staff-driven checks involve extensive training and specialized staffing to ensure the data we provide to USAID and other clients meet our own rigorous standards.
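
To illustrate how such software-driven checks might work, below is a minimal sketch in Python. The record fields, thresholds, and flag rules are hypothetical illustrations, not a description of any actual proprietary system: the sketch flags interviews that are suspiciously short, start at odd hours, or whose GPS coordinates fall far from the sampled site.

    from dataclasses import dataclass
    from datetime import datetime
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class InterviewRecord:
        # Hypothetical fields; a real system would capture many more.
        interviewer_id: str
        start: datetime
        end: datetime
        lat: float
        lon: float

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometers between two GPS points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + \
            cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def flag_record(rec, site_lat, site_lon,
                    min_minutes=10, earliest_hour=7, latest_hour=20, max_km=2.0):
        """Return quality-control flags for one interview (thresholds are illustrative)."""
        flags = []
        duration = (rec.end - rec.start).total_seconds() / 60
        if duration < min_minutes:
            flags.append(f"too short: {duration:.1f} min")
        if not earliest_hour <= rec.start.hour < latest_hour:
            flags.append(f"odd start hour: {rec.start.hour}:00")
        dist_km = haversine_km(rec.lat, rec.lon, site_lat, site_lon)
        if dist_km > max_km:
            flags.append(f"GPS {dist_km:.1f} km from sampled site")
        return flags

    # Example: a 6-minute interview starting at 22:05 near the sampled site.
    rec = InterviewRecord("INT-042", datetime(2018, 9, 1, 22, 5),
                          datetime(2018, 9, 1, 22, 11), -1.95, 30.06)
    print(flag_record(rec, site_lat=-1.94, site_lon=30.065))
    # ['too short: 6.0 min', 'odd start hour: 22:00']

Acoustic checks such as the interviewer-voice and ambient-noise flags mentioned above would require audio analysis beyond this sketch, but they follow the same pattern: each check appends a flag, and any flagged interview is routed to staff for audio, GPS, and sampling verification.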

Considering that these evaluation results are used to build further USAID programming, unverified and potentially falsified data put the whole evaluation process in danger: they can lead funders and implementers to erroneous conclusions and create a vicious circle of ineffective, counter-productive programming efforts. USAID’s efforts to further refine the data-driven development process deserve recognition and broad support.
 
