What a path of discovery Quest Mindshare undertook in 2021 as we investigated how certain types of data are affected over the course of a typical online survey. Our reporting ran from the Quirk’s Virtual Global Event (February 23) to a final wrap-up last week with ESOMAR, though this won’t be the last you hear from us on the topic. As Quest’s co-founder and Managing Partner Greg Matheson mentions in his opening statement, the ESOMAR webinar is indeed the final installment in our reporting series for 2021. But data quality, and how it changes for “longer” surveys, is a “forever question” in our minds, with much more to look into in 2022.
After four identically implemented waves of our original survey, Quest has solid conclusions we stand behind – and many further hypotheses and questions to pursue. The December 2nd webinar focused on the fourth wave of our original survey as well as a new, experimental fifth wave, both of which are detailed here.
Our primary mission at the outset was simple: as the length of an online survey increases, what happens to respondent engagement and data quality? We found no definitive work from the market research industry addressing this question.
To be clear, we are not attacking “long” surveys. Quite the contrary! Long surveys have a necessary place in the many research applications that require extended respondent engagement. They are not going away, and we don’t advocate that they should.
What we want is to better understand what happens to respondents, and to the answers they provide, over the course of a survey, so we can arm researchers with more knowledge about data quality and reliability. For example, that could mean developing a metric as direct as “data collected at the eight-to-ten-minute interval in a typical consumer survey is 4% less accurate than data collected at the six-to-eight-minute interval for rating and ranking questions”. Or it could end up being something else entirely, and far more complicated.
We pursued our investigation using a consumer primary grocery shopper survey, following these steps consistently across the waves:
1. Members of Quest’s online U.S. consumer panel were invited to the survey, qualified for “regular grocery shopping for the household”, and balanced to the demographics of a nationally representative sample for this population.
2. Categories for usual purchases were recorded, such as “milk and dairy” and “fruits and vegetables”.
3. A “measurement battery” of four questions was asked about a minute into the survey and again at later intervals, randomizing which category was presented as the subject of the four questions.
4. In between the measurement sections, relevant “filler questions” about grocery shopping were presented to fill the time between measurements.
5. The primary goal was to track respondent engagement as the survey progressed – the time spent on each measurement section and per measurement question, as well as the word counts and content of open ends.
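For readers who think in code, the design above can be sketched in a few lines. This is purely an illustration of the logic, not Quest’s actual instrument: the category names, battery size, and metric names below are assumptions invented for the example.

```python
import random

# Hypothetical category list and battery size; assumptions for illustration,
# not the actual survey categories or question counts.
CATEGORIES = ["milk and dairy", "fruits and vegetables", "snacks", "frozen foods"]
BATTERY_SIZE = 4  # four measurement questions per battery (step 3)


def plan_measurement_batteries(categories, n_batteries, rng=random):
    """Randomly assign a subject category to each measurement battery,
    not repeating a category until all have been used (step 3)."""
    pool = list(categories)
    rng.shuffle(pool)
    # Cycle through the shuffled pool if there are more batteries than categories.
    return [pool[i % len(pool)] for i in range(n_batteries)]


def engagement_metrics(responses):
    """Summarize engagement for one battery, per step 5: time spent overall
    and per question, plus open-end word counts.
    Each response is a dict like {"seconds": 12, "open_end": "..."}."""
    times = [r["seconds"] for r in responses]
    words = sum(len(r.get("open_end", "").split()) for r in responses)
    return {
        "total_seconds": sum(times),
        "avg_seconds_per_question": sum(times) / len(times),
        "open_end_words": words,
    }
```

Comparing `engagement_metrics` output for early versus late batteries in a wave is the kind of analysis the steps above describe: the randomized category assignment keeps subject-matter effects from being confounded with survey position.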