Evaluating the Combination of Questionnaire Pretest Methods in a Field Pretest of a Longitudinal Household Survey in Israel
Galit Gordoni
Building: Law Building
Room: Breakout 11 - Law Building, Room 107
Date: 2012-07-12 01:30 PM – 03:00 PM
Last modified: 2012-05-01
Abstract
Cognitive testing has become a common pretest procedure for identifying possible causes of measurement error. However, empirical evidence concerning the usefulness of combining methods to improve data quality is still limited (Groves, Fowler, Couper, Lepkowski, Singer & Tourangeau, 2009).
The present study demonstrates the advantages of using a combination of cognitive testing methods at different stages of a field pretest, in terms of improving both the decision-making process for question revisions and data quality. The study is part of a field test of a new longitudinal survey in Israel conducted by the Central Bureau of Statistics, whose first wave of fieldwork is scheduled to begin in July 2012. A cognitive evaluation program was designed based on recommended standards for pretesting new questions (Census Bureau’s Methodology and Standards Council, 2003) and the methodological literature (Kudela et al., 2006; Levin et al., 2009; U.S. Census Bureau, 2008; Willis, 2005; Willis et al., 2008). The combination of evaluation methods was tailored according to the following criteria: the aims and characteristics of the field pretest, the type of questions, the type of data produced by each method, and each method's potential added value in validating and translating cognitive testing results into question revisions and reduced measurement error.
Three methods of cognitive evaluation were used: one pre-field technique (methodological expert review) and two field techniques (behavior coding and respondent debriefings conducted by experts). First, 34 new items were evaluated by a survey methodologist using the Question Appraisal System coding form, QAS-99 (Willis & Lessler, 1999). The evaluation results made it possible to develop, for each tested question, a structured questionnaire employing a range of cognitive procedures, such as confidence ratings, paraphrasing, and probes, focused on the problems identified in the pre-field stage.
Second, notes on respondent-interviewer interaction during the field pretest interviews were made by 16 observers, subject matter experts and data collection experts, using a standard behavior coding form (Groves et al., 2009). Third, four data collection experts who had participated in a 3-hour training session in cognitive interviewing accompanied interviewers into the field and conducted 18 respondent debriefings (Nichols & Childs, 2009) after the survey interview was completed, using a structured questionnaire. The type and number of problems were compared across evaluation methods, question types (attitude, behavioral, and factual questions), and report types (self-report and proxy). The data quality of the revised questions will be tested in a second field test in February 2012. The usefulness and limitations of applying cognitive evaluation methods at different field pretest stages are discussed in terms of the total survey error paradigm.