Beyond the Hype: What AI Can (and Can’t) Do for Market Researchers

9 July

AI is increasingly seen as a solution for research, but a ScienceDirect report reveals that collaboration with human researchers is still in early stages, especially in thematic analysis.

5 min read

Cutting Through the Noise: The AI Reality Check

Everywhere you turn, AI is being touted as a panacea: faster insights, smarter data, less human error. But behind the headlines lies a more measured reality. A recent report in ScienceDirect, "Harnessing the power of AI in qualitative research", explores how AI collaboration with human researchers is still in early stages, especially for context-rich thematic analysis. In other words, while tools are evolving fast, the experienced insight professional remains essential.

The Surge of AI (and Why So Many Are Holding Back)

AI’s influence across industries is undeniable. But while headlines suggest a tech gold rush, the reality on the ground is slower and more deliberate. According to McKinsey, only about 1% of organisations consider their AI strategies “mature”. Wary of risks such as over-automation in analysis and storytelling, most are in testing mode: cautious and curious rather than convinced.

That caution is warranted. In research, the cost of misinterpretation is high. A mistranslation, a mislabelled theme, or a false correlation can have strategic consequences. As such, many insight teams are adopting a second-mover strategy, learning from others’ successes (and failures) before embedding AI into core processes.

What AI Can Do: Efficiency Without the Emotion

Used smartly, AI shines in the heavy-lifting stages of research. It’s particularly effective where repetition and scale are involved. Coding thousands of open-ended survey responses? No problem. Summarising sentiment across multilingual transcripts? AI is fast and reasonably accurate.

Large datasets that once took weeks to clean, structure, and visualise can now be wrangled in hours. Pattern recognition, anomaly detection, and even basic predictive modelling are now accessible at a click. These aren’t just time-savers; they enable researchers to reallocate energy toward deeper insight and stakeholder engagement.

For example, imagine a global brand tracking study with thousands of open comments in six languages. Rather than relying on a week of manual thematic coding, an AI tool can surface dominant topics and sentiments in minutes, freeing the researcher to interpret, challenge, and shape the story behind the numbers.

What AI Can’t Do: The Intangible, Human Stuff

Where AI still stumbles is where human insight thrives: context. It doesn’t understand why a respondent’s sarcasm changes the meaning of a verbatim. It can’t read between the lines of stakeholder politics. It doesn’t know that a sudden spike in data might coincide with a product recall, unless someone tells it.

AI also can’t ask the right questions. It can answer them, sure, but only once we’ve carefully framed the problem. The framing still comes from us: humans who understand nuance, brand voice, business goals, and culture.

It also cannot sense when something feels off. Human researchers often act on instinct as much as evidence, flagging results that look clean but don’t sit right. That intuition, developed through experience, can’t be replicated in code.

The Hidden Risks: When AI Goes Off-Script

It’s not just about what AI can’t do. It’s also about what it can do badly. One of the most widely reported issues is hallucination: AI generating seemingly plausible answers that are entirely false. In qual analysis, that could mean inserting false narratives or themes that never existed in the original data.

There are also serious concerns about bias. If the training data behind your AI tool is skewed, say, under-representing a key demographic, it can reinforce stereotypes or overlook essential insights. For researchers committed to representation, this is a red flag.

Then there’s the trust issue. If stakeholders or participants feel AI is being used irresponsibly, or without transparency, brand credibility and respondent engagement can suffer. Ethical governance isn’t optional; it’s a precondition for continued licence to operate.

The Researcher’s Edge: Why Humans Still Matter

Despite AI’s evolving capabilities, it cannot replace the human capacity for meaning. As an Inside Higher Ed opinion piece argues, AI lacks positionality: the lived experience, values, and perspective that human researchers bring to their work. This absence limits trust in AI-driven qualitative insight.

Human researchers remain indispensable. We don’t just process data; we interpret tone, recognise contradictions, and understand when silence speaks louder than words. We shape the questions, push back when patterns don’t make sense, and connect dots that no algorithm could anticipate. That interpretive lens, the “why” behind the data, is still uniquely human.

A Balanced Way Forward: Human + Machine

So how should research teams approach AI today?

Start by treating it as a co-pilot: a tool to extend your capabilities, not replace them. Let it do what it does best: automate, accelerate, and scale. But keep the core of your work (question design, interpretation, insight delivery) under human control.

That means designing hybrid workflows. Use AI for exploration and trend spotting, but apply human review before drawing conclusions. Educate clients about what’s real, what’s extrapolated, and what’s still unclear. And above all, embed strong data governance: regular audits, ethical standards, and bias checks.

Those who take this balanced approach, integrating AI without surrendering control, will deliver faster, deeper, and more trusted insights.

Final Thoughts: Evolve or Fade

AI is not going to replace the research profession. But researchers who ignore or fear AI may well be replaced by those who embrace it responsibly.

The future belongs to those who can hold both truths at once: that AI is extraordinary, and that human insight is irreplaceable.

Further Reading: Grounded Guidance for Research Leaders

For those navigating the evolving role of AI in market research, several trusted sources offer valuable context and caution.

For a positive view of what responsible adoption looks like in practice, read Successful AI Implementations in Market Research, published by Research World. It showcases how leading agencies are blending human expertise with machine intelligence to deliver better, faster, and more ethical insight.

Russell Turner
Marketing Professional at Formara Print and Marketing