The respondent experience: an industry-wide concern

“If we don’t do something, we will not have any respondents left.” Exploring the causes of declining survey quality and the main solutions to the problem, from the perspectives of market research and insights industry stakeholders.

7 min read

TikTok generation

One concerned professional who has talked extensively about questionnaire design is Kantar’s Vice President of Innovation, Jon Puleston. He heads QuestionArts, an international team specialising in the design of surveys. He finds that many of his industry's questionnaires don’t stack up as good consumer experiences. “We have been collectively boring respondents with bad surveys for several decades, but in that time, the world has moved on. The TikTok generation demands more engagement, and they are voting with their feet by giving up taking part in surveys.”

Puleston describes this as an industry-wide problem. He sees a general issue with the overall number of screenouts at a global industry level. “Roughly two-thirds of the time respondents attempt to do a survey, they will be screened out because they are not in the target audience or the quota they are in is full.”

He feels that the industry needs to think about how to reduce screenout levels, setting up guard rails for acceptable standards for the length of screening sections of questionnaires and how much screened-out respondents are rewarded. At Kantar, this has been a major focus to improve the utilisation of panellists, reducing screenout levels through more effective routing. “For us, this is beginning to bear fruit. Currently, the market does not sustain paying respondents reasonable incentives for being screened out, as this increases the cost of projects considerably, and clients are not prepared to pay the extra cost. This takes collective action to resolve.”

Gig workers

It can sometimes be difficult to separate technical errors from plain bad practice, observes Arno Hummerston, managing director of Amplify MR. “I have seen a response list of twelve possible answers that were actually all exactly the same response. Clearly a technical error.

“However, some of the panel profiling questions respondents receive have response lists with over 60 possible responses, and then the same volume in the next question. Not a great start to their panel life experience… It then gets worse when they are asked the same questions again at a later date as part of the screening process.”

Another reason for today’s low survey quality lies in the recruitment process. Enric Cid, strategy and product director at Netquest, points to a shift away from pre-recruited access panels, which consisted of relatively fixed groups of individuals invited to participate in multiple questionnaires.

“More recently, due to cost-effectiveness pressures, research agencies have moved to intercept surveys sourced from various online traffic channels, also called river sampling.” As a consequence, Cid sees a growing trend of treating survey respondents as ‘gig workers’ or independent contractors, meaning that respondents are approached on a more ad-hoc basis. This ‘gigification’ has resulted in a less engaged and increasingly transient participant pool, potentially affecting the quality and reliability of their responses. “Consequently, this has raised the threshold for implementing quality measures to identify and discard inadequate responses, which may provide short-term solutions but fails to address the underlying issue in the long run.”

ChatGPT: friend or foe?

With ChatGPT and similar AI tools making headlines, it is not unthinkable that these developments will impact surveys, whether it is in their design or in their potential use (and abuse) by bored or lazy respondents. Cid sees a clear threat there. “Some quality-scoring methodologies utilise open responses to identify and discard low-quality respondents, so the use of AI models like ChatGPT could potentially be exploited.”

Hummerston argues that “it probably takes longer to use generative AI than to write a lot of nonsense, even in online qualitative studies”. Guilbert, a consultant in market research methodologies and member of ESOMAR’s Professional Standards Committee, agrees that bored participants will sooner skip a question or a survey altogether, but he does fear that ChatGPT may be used by some respondents to provide a long, well-argued answer to a complex, open-ended question. “Let’s see at the end of the year if we detect changes in open-ended questions, especially in B2B.”

Like Guilbert, Pettit suspects that bored participants will not go out of their way to use third-party tools to create more relevant, believable answers. However, she expects ChatGPT and other generative AI tools to be used by large-scale fraudsters to rapidly generate hundreds or thousands of realistic answers that generate incentives. “This is the real danger. Panel companies have a near impossible task of keeping ahead of them.”

Puleston sees potential for AI to make surveys far more conversational, but he adds that using ChatGPT to phrase questions is dangerous territory. “It simply regurgitates the platitudes in which survey questions have been written in the past, using the over-earnest research speak that respondents don’t feel connected to. The bigger issue is how it may be used to fraudulently complete surveys. We are already working on developing defences against this.”

Granny test

In the online age, finding respondents for short and simple questionnaires is relatively easy. Making these quick surveys appealing is a far bigger challenge, though, warns Guilbert.

“Attractiveness doesn't mean overloading them with a patchwork of font sizes, colours, animations and long texts. As with websites, modern questionnaire design must be simple and visually appealing to collect good data.”

Gamification, he believes, should be used with caution to avoid putting off certain respondents. “The principle of surveys is to attract a wide audience with varied profiles, not to focus on a specific target.”

Cid agrees that gamification is not a magic wand. Usability and clarity should be enhanced in general, and he describes improving questionnaire response quality as a multifaceted endeavour. “If we want to increase respondents’ willingness to participate, factors such as contributing to society, helping others, and providing appropriate incentives need to be considered in the value proposition.”

According to Hummerston, testing is crucial. “Would your granny be able to do the survey and understand it?” However, he adds that the questionnaire is only one element of a complex interactive ecosystem that people need to navigate. “They will only generate good quality data if they are motivated to and believe there is value in what they are doing – either for themselves or in the benefit the data will bring more broadly.” He, therefore, calls for improving the entire ecosystem – from recruitment to incentives to communications. “This would help recruit and retain people and ensure they are honest and accurate. Treat them like people, not respondents. And maybe also implement minimum wage levels. End-clients should play their part by paying a reasonable amount for data.”

Raising the bar

If the industry does not solve the poor user experience problem, it will end up with too few people who are prepared to take surveys, stresses Hummerston.

“Fostering trust and emphasising the benefits of participation can contribute to better outcomes in market research.” Guilbert urges the industry to stop thinking of respondents as potential criminals and instead consider them as likely victims of long and boring questionnaires. “We absolutely need respondents, and we need to treat them better to obtain high-quality data.”

Pettit argues that, unless one is new to the industry, poor data quality is not a new issue. “The first conference I ever went to, some twenty years ago, shared data from a massive study comparing levels of poor data quality across ten different panels. Unfortunately, rather than improving questionnaires so that people enjoy answering them, we’ve decided it’s easier to keep studying poor data quality until we find an answer we do like. We’ve had the answers for decades. Let’s implement them.”

Curious to know what else the industry experts have shared about respondent experience? Download our Global Market Research 2023 report now!

Lilas Ajaluni
Market Intelligence Analyst at ESOMAR