Using Paradata for Interviewer Data Quality Monitoring
Nicole G Kirgis
Building: Holme Building
Room: Holme Room
Date: 2014-12-08 01:30 PM – 03:00 PM
Last modified: 2014-10-31
Abstract
Extensive use of paradata from a sample management system, organized into a production monitoring dashboard, allows for the conceptualization and implementation of design features that respond to survey conditions in real time, so-called "responsive designs" (Groves and Heeringa, 2006). A production monitoring dashboard draws on information about data collection fieldwork to guide alterations to field protocols during the survey, achieving greater efficiency and improvements in data quality.
In addition to the paradata collected by the sample management system, paradata from audit trails, the record of actions and entries within the CAPI questionnaire, can be used for data quality monitoring at the interviewer level. Audit trail data include a record of every keystroke and the time elapsed between keystrokes. From these data, a data quality dashboard was created to monitor data quality at the interviewer level. Indicators include the average time spent on survey questions and the frequency of using help screens, recording remarks, triggering error checks, backing up in the interview, and recording "don't know" and "refuse" responses.
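The interviewer-level indicators described above can be sketched as a small aggregation over audit-trail records. The record layout, field names, and response codes below are hypothetical, invented for illustration rather than taken from any specific CAPI system; the sketch computes two of the indicators mentioned, average time per question and the rate of "don't know"/"refuse" responses, per interviewer.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical audit-trail record: (interviewer_id, question_id,
# seconds_on_question, response_code). "DK"/"RF" codes are
# illustrative stand-ins for "don't know" and "refuse".
DONT_KNOW, REFUSE = "DK", "RF"

def interviewer_indicators(records):
    """Aggregate audit-trail records into per-interviewer indicators:
    average seconds per question and the DK/RF response rate."""
    times = defaultdict(list)   # interviewer -> list of per-question times
    dk_rf = defaultdict(int)    # interviewer -> count of DK/RF responses
    total = defaultdict(int)    # interviewer -> total questions answered
    for iwer, _question, seconds, response in records:
        times[iwer].append(seconds)
        total[iwer] += 1
        if response in (DONT_KNOW, REFUSE):
            dk_rf[iwer] += 1
    return {
        iwer: {
            "avg_seconds_per_question": mean(times[iwer]),
            "dk_rf_rate": dk_rf[iwer] / total[iwer],
        }
        for iwer in times
    }

# Illustrative input: two interviewers, two questions each.
records = [
    ("int01", "Q1", 12.0, "1"),
    ("int01", "Q2", 8.0, "DK"),
    ("int02", "Q1", 30.0, "RF"),
    ("int02", "Q2", 10.0, "2"),
]
print(interviewer_indicators(records))
```

In a production dashboard, indicators like these would be computed over each interviewer's full caseload and compared against study-wide benchmarks to flag outliers for supervisor review.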
This presentation will discuss design and management strategies for using paradata for responsive design in survey operations to improve survey outcomes, as well as the implementation of the interviewer-level data quality dashboard. Examples will show how this monitoring technique has been used to identify and address interviewer data quality concerns.