Many studies in health sciences research rely on collecting participant-reported outcomes, and increasing attention is being paid to the mode of data collection. Consideration needs to be given to the validity of responses obtained via different modes and to the impact that the choice of mode might have on study conclusions.
The objectives were: (1) to provide an overview of the theoretical models of survey response and how they relate to health research; (2) to review all studies comparing two modes of administration for subjective outcomes and assess the impact of mode of administration on response quality; (3) to explore the impact of findings for key identified health-related measures; and (4) to inform the analysis of multimode studies.
A broad range of databases (including EMBASE, PsycINFO, MEDLINE, EconLit and SPORTDiscus) was chosen to allow as comprehensive a selection as possible, and they were searched up until the end of 2004.
The abstracts were reviewed against inclusion/exclusion criteria. Full papers were retrieved for all selected abstracts and then screened again using more detailed inclusion criteria related to the measures used. Papers meeting these criteria were reviewed in full and detailed data were extracted. At each stage, abstracts or papers were reviewed by a single reviewer.
The search strategy identified 39,253 unique references, of which 2156 were considered as full papers, with 381 finally included in the review. Two features of mode were clearly associated with bias in response; however, none of the features of mode was associated with changes in precision. Whether the measure was administered by an interviewer or completed by the person themselves was highly significantly associated with bias (p < 0.001). A difference in sensory stimuli was also significant (p = 0.03). When both of these features were present, the average overall bias was < 1 point on a percentage scale. In terms of mediating factors, there was some suggestion of an interaction between date of publication and the use of either telephone or computer for data collection, supporting the theory that differences disappear as new technologies become commonplace. Single-item measures were also associated with greater degrees of bias than multi-item scales (p = 0.01). Individual analysis of the Short Form 36-item questionnaire (SF-36) and the Minnesota Multiphasic Personality Inventory (MMPI) showed a varied pattern across the different subscales, with conflicting results between the two types of study. None of the MMPI measures used to detect deviant responding showed a relationship with the mode features tested. The limits of agreement analysis showed how variable measures were between modes at an individual rather than a group mean level.
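The limits of agreement approach referred to above (Bland-Altman analysis) compares paired scores from two modes at the individual level: the mean difference estimates the bias between modes, and the interval bias ± 1.96 × SD of the differences shows how far an individual's score may plausibly shift between modes. A minimal sketch, using invented example data (not figures from the review):

```python
# Illustrative sketch: Bland-Altman limits of agreement between two
# hypothetical modes of administration for a 0-100 outcome scale.
# All data below are invented for illustration only.
from statistics import mean, stdev

def limits_of_agreement(scores_a, scores_b):
    """Return (bias, lower, upper) for paired scores from two modes."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)           # mean difference between modes (group-level bias)
    sd = stdev(diffs)            # spread of individual-level differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired scores: interviewer-administered vs self-completed
interviewer = [72, 65, 80, 58, 90, 67]
self_report = [70, 66, 75, 60, 85, 70]
bias, lower, upper = limits_of_agreement(interviewer, self_report)
```

Here the group-level bias can be small even when the limits are wide, which is exactly the pattern the review highlights: agreement at the group mean level need not imply agreement for individuals.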
The search strategy covered the period up to 2004, so newer and emerging technologies were not included. Not all potential mode features were tested, and there was limited information on potential mediating factors.
Researchers need to be aware of the different mode features that could have an impact on their results when selecting a mode of data collection for subjective outcomes. Further mode comparison studies that manipulate mode features and directly assess their impact over time would be beneficial.