Beware AI’s Siren Song

How leaders can secure truly differentiating insight in the AI era

AI is a powerful ally—fast and flexible. At a minimum it offers augmentation (47% of researchers worldwide already use AI regularly in their market research activities), but increasingly the focus is on how far it can automate what was historically the domain of flesh-and-blood professionals. Is the $140 billion insight industry on the cusp of a fundamental shift from labour to software, as some predict? Do humans in the insight process become a quaint anachronism? Have we reached ‘peak human’ in insight?

Senior leaders often feel exasperated with their insight functions. They’re seen as the ones who slow things down—questioning, qualifying, complicating. And who hasn’t grown weary of the perennial debate about where insight should sit in the org chart? Or of trying to put a hard ROI on what often feels like a “nice-to-have” capability? When senior leaders are already debating whether the insight function really earns its keep, and in an era of relentless efficiency drives, the attraction of ‘another way’ is obvious: faster, smarter, and cheaper. In the case of AI, the promise sounds especially seductive. Why keep messy, expensive humans in the loop when you can replace them with models that are “good enough”?

A recent article from Andreessen Horowitz (Faster, Smarter, Cheaper: AI Is Reinventing Market Research), the Silicon Valley venture capital firm and champion of unrestricted technological progress, suggests that the demise of the insights industry in its current form is getting uncomfortably close. They claim that “we’re seeing a crop of AI research companies replace the expensive human survey and analysis process entirely”; in this ideal world, they argue, even the respondents are simulated. And so much the better, they say. “The companies that adopt AI-powered research tools early will gain faster insights, make better decisions, and unlock a new competitive edge”.

“Crucially, success doesn’t mean achieving 100% accuracy. It’s about hitting a threshold that’s ‘good enough’ for your use case. Many CMOs we’ve spoken with are comfortable with outputs that are at least 70% as accurate as those from traditional consulting firms, especially since the data is cheaper, faster, and updated in real time.” (Andreessen Horowitz)

We are witnessing what might be called the "Maximal" approach to AI in research—a maximalist faith in automation and synthetic agents as the future of insight. This view treats the presence of humans in the process as a source of inefficiency or friction. The solution? Replace them. If AI can deliver 70% of the quality of human-led research, but at a fraction of the time and cost, then “good enough,” they argue, is now good enough.

This narrative has gained traction far beyond Andreessen Horowitz. Influential voices from Alphabet, Meta, OpenAI, and a host of well-funded AI research platforms speak confidently of agent-based market simulations, zero-touch consumer testing, and generative strategy. But the real risk is not bad AI; it is “good enough” AI—systems that produce persuasive results that go unchallenged because they sound plausible and arrive quickly.

The insight industry is once again at an inflection point. The rise of generative AI and agent-based simulations promises a future of always-on, infinitely scalable, and hyper-efficient research. Andreessen Horowitz and other tech optimists frame this as a seismic shift: market research liberated from its traditional bottlenecks and biases, finally reinvented for the software age. Their thesis is clear: faster, smarter, cheaper. But as with all siren songs, we must be wary of what we are being lulled into.

In a world chasing scale and speed, senior leaders must ask: when is AI truly adding value? And when is it simply masking a loss of rigour? And this is where strategic thinking is critical. It’s easy to be swept away by the logic. But “good enough” is not good strategy. Plausibility is not the same as truth. And speed does not guarantee value.

Insight is not just a throughput function. It is an interpretive, critical, creative process. Over-automating it risks replacing deep thinking with shallow mimicry—fast answers that go unchallenged because they sound convincing.

If you get your strategy wrong, you risk ending up with a customer insight function that merely mirrors what everyone else is doing. It may be fast and cheap, but it won’t be distinctive. It won’t give you the competitive edge you need. In a landscape where AI tools are available to everyone, differentiation doesn’t come from access—it comes from depth. Insight must now go beyond data; it must tap into emotion, purpose, contradiction, and human complexity.

So this is the time not to scale down your investment in insight, but to double down. Visionary leaders will recognise that in this era of automation, the organisations that win will be those that stay truly connected to their customers—and that invest in the professionals who know how to decode the human condition.

Insight doesn’t emerge fully formed from a dashboard. It is forged in the messy, difficult process of wrestling with ambiguity—of asking hard questions, reframing problems, and connecting dots that don’t obviously align. It’s born in the heat of strategic tension and creative friction. That’s why world-class insight functions require space. They need time to explore, argue, and reflect. And that space must be protected from the ever-present pressure to deliver quick wins.

AI can support this process—it can sort, summarise, simulate. But, as researchers Ehsan and Riedl note in their work on human-centred explainable AI (Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach), models cannot grasp values, context, or nuance unless guided by human judgement. Nor can AI engage in meta-awareness or self-critical reflection: it doesn’t know it’s reasoning.

Scholars like Dirk Lindebaum (ChatGPT, reflexivity and the decline of scientific responsibility) warn that excessive reliance on AI can erode human reflexivity—the critical self-awareness and contextual judgment that underpins responsible interpretation. As he argues in his critique of ChatGPT, outsourcing reflection and meaning-making to machines undermines the essence of scientific and managerial responsibility. In insight work, this danger is particularly acute: when we stop reflecting, we stop truly understanding.

So, AI is not just changing the tools we use. It’s reshaping the very way we think. One major concern is the erosion of cognitive resilience. The more we offload our thinking, the less skilled we become at navigating complexity ourselves. This is not a distant problem—it’s here now. AI also tends to sanitise creative processes—removing the ambiguity, friction, and discomfort that often lead to real innovation.

AI is not the enemy. It is inevitable. But if we don’t challenge the premise that it can, or should, replace humans in the insight process, we may find ourselves with faster answers that are less true, and scalable systems that scale mediocrity.

Clark and Chalmers, with their influential extended mind theory, proposed that the mind isn’t confined to the brain. Instead, it extends into the world through tools, language, gestures, and now—AI. Your tools become part of your thinking system. In his recent essay (Extending Minds with Generative AI), Clark argues that generative AI isn’t eroding our cognition; it’s reshaping and expanding it. New tools eventually become embedded in human cognition, altering how we think without necessarily degrading our capacity. AI is a co-thinker. It enhances our ability to brainstorm, analyse, and reflect. The challenge, then, is not whether we use AI, but how we integrate it into our cognitive workflows.

So what is the role of humans in this new landscape? It is not to compete with AI on speed or scale. We have already lost that battle. Our advantage lies elsewhere—in the distinctly human capabilities that AI cannot yet replicate. Companies can reframe AI as a cognitive amplifier, not a surrogate. That means training people to collaborate with AI—question it, edit it, challenge it—not just accept what it says. As ever, success will result from finding the right equilibrium.

So the senior leader’s role is to actively champion the insight function—not only in terms of resources, but in creating the cultural space where complexity is welcomed, reflection is valued, and creativity is not just permitted, but expected. To lead insight in the AI era, you must reframe the very process of insight generation. That means redefining the core skills and capabilities needed for success.

We suggest focusing on three essential human contributions:

  • Seeing the Big Picture

  • Generating Distinctive Insight

  • Driving Action and Influence

Each depends on power skills that AI cannot replicate.


You want professionals who can ask questions AI doesn’t know to ask. Who can make sense of subtle context, contradictions and patterns. Who can reframe problems with metaphor, narrative and moral perspective. Who can move people—not just analyse them.

That means investing in the following:

  • AI Interface Intelligence – knowing when to trust, when to override

  • Sensemaking – connecting disparate dots in complex systems

  • Critical Thinking – challenging assumptions, spotting flaws

  • Creative Leaps – generating bold, original insight

  • Future Thinking – imagining possibilities before they emerge

  • Narrative Craft – telling stories that move people to act

  • Collaborative Leadership – bringing out the best in hybrid teams

These are not just “soft” skills—they are the sharp edge of competitive advantage.

As a senior leader, your role is to champion them—through training, team design, and alignment of your insight and AI strategies.

If your insight team is still operating within frameworks built for a pre-AI world, it’s time to update the blueprint. To build a truly hybrid insight function, you need new models—frameworks that clearly spell out the division of labour between machine intelligence and human expertise.

Take sensemaking, for example. Insight professionals must see themselves not just as data interpreters, but as the wide-angle lens of the organisation—capable of noticing the nuances that AI can’t.

That requires tools and methods designed to support iterative, interdisciplinary thinking—frameworks that explicitly show where AI supports pattern recognition and where human judgement must intervene.

Developing these frameworks won’t happen by accident. It needs active sponsorship and resourcing from the top. For senior leaders, this is not just a philosophical debate. It’s a practical concern. The future of your insight function—and your organisation—depends on maintaining the human spark. Stay close to these trends. Make it your business to understand how AI is reshaping cognition, creativity and strategic thought.

Ultimately, this is not just about insight. It’s about responsibility. As AI becomes more pervasive, the temptation is to hand over more of the interpretive process—to let the machine decide not just what we know, but how we know it. That would be a profound mistake. Insight work has ethical weight. It shapes decisions, perceptions, and lives. If we outsource it too far, we risk losing accountability. Worse still, we risk trusting systems that cannot reflect, cannot empathise, and cannot take responsibility.

As Joseph Weizenbaum warned decades ago: just because a machine can do something doesn’t mean it should.

So the leadership challenge is clear: resist the seduction of ‘faster and cheaper’ for its own sake. Preserve the qualities that make insight matter—empathy, ethics, creativity, reflection.

The future of customer insight is not fully automated. It is deeply human, amplified by AI, but never defined by it. The best companies won’t be those who automate the most. They will be those who pause at the right moment, ask what’s really going on, and notice what’s missing. Insight happens in those moments. It can’t be rushed. It can’t be scaled mindlessly. And it certainly can’t be replicated by a model trained only on what has already been said.

The future belongs to those who know when to step back, reflect, and say: this is where the human mind still leads.

Adam Riley
Founding Director at Decision Architects
David Smith
Director at DVL Smith Ltd