AI in Market Research: Five rules to live by
AI is rapidly transforming industries, improving data analysis and insights through natural language processing and predictive models.

Artificial Intelligence (AI) is rapidly transforming the market, accompanied by strong opinions, changing legislation, and social impact across industries. From automating data analysis with natural language processing (NLP) to deploying predictive models and generative AI, research agencies are adopting AI to uncover insights faster and more efficiently than ever.
This explosion has raised some well-founded concerns and sparked conversation about the ethical implications of the technology. Recently, ESOMAR updated The ICC/ESOMAR International Code on Market, Opinion and Social Research and Data Analytics (the Code), which aims to set the standard for the research and insights community. In a nutshell, the updated Code doubles down on ethics: Duty of care, data minimization, privacy, transparency, bias, synthetic data, and human oversight are front and center.
ESOMAR council member Lucy Davison shared why understanding and owning the values of the Code is so important: “Trust in the data we collect and analyze, and the insights we provide is paramount to the future of market research. With the new Code, ESOMAR provides the ethical guardrails to ensure that what we do is honest and transparent. As we charge headlong into the AI-driven world, this new code is designed to guide us as human researchers to use AI with humanity.”
This article explores some key parts of the Code and the wider conversation around ethical and fair use of AI for researchers and participants.
The AI explosion
What was once a manual, labor-intensive process of switching between platforms to copy and paste data and hand-coding answers is now aided by a host of AI tools and integrations that can cut research time in half. Global investment in AI is booming: the AI sector is growing at ~40% CAGR, organizations are adopting new ways of working, and new platforms come out every month.
However, with great power comes great responsibility. As research agencies put the pedal to the metal on speed to insight, they face a challenge: take advantage of AI’s capabilities while safeguarding ethical standards and respondent trust. The stakes are high; public confidence in research depends on it being conducted honestly, objectively, and without infringing on privacy. Missteps not only risk compliance issues but can erode the foundational trust that respondents, clients, and the public place in our industry.
But all is not lost. The research industry is perfectly poised to play fairly in this digital revolution. Ethics, honesty, informed consent, and best practice are so ingrained that most researchers are likely already aligned with the ICC/ESOMAR Code and already conducting research with great integrity.
Put people first (duty of care)
Rule #1: People before code. No matter how advanced our AI tools become, our first obligation is to the people behind the data. The Code’s first article (Article 1, p. 11) is “Duty of care”: a clear mandate that research must be carried out with due care to avoid harming anyone.
Practical moves:
Respect and protect participants: Ensure no individual is worse off as a direct result of your AI-driven research. For example, if an algorithm flags sensitive personal information, handle it as delicately as a live interviewer would. The Code explicitly says researchers must “not do anything that might harm a data subject or damage the reputation of research.”
Special care for the vulnerable: AI or not, extra precautions are needed when research involves children, the elderly, or other vulnerable individuals. This might mean more human oversight on AI interactions with these groups or even choosing not to use an AI tool if it can’t handle the nuance.
Ethical AI design: Ask yourself at every step, “Is this the ethical choice for participants?” From survey chatbots to AI-based analysis, design your research so that it’s fair, unbiased, and participant-friendly. An AI may be making some decisions, but you are accountable for them. If an AI survey moderator might stress or confuse respondents, tweak it or toss it. In short, treat participants as you’d want to be treated – with honesty, respect, and empathy – whether a human or a bot is interacting with them.
Remember, an AI can crunch data in milliseconds, but, like every step in classical research, it’s your job to ensure those milliseconds of insight never come at the expense of a person’s rights or well-being.
Minimize and protect (data minimization & privacy)
The ICC/ESOMAR Code calls for data minimization, meaning collect and use only what’s relevant for the research. Yes, AI loves big data, but classical research ethics aligns perfectly here, so chances are no changes to your practice are needed.
Practical moves:
Collect just enough, not everything: Define clearly what data you truly need to meet the research objectives. If an AI model can do its job with 1,000 data points, don’t feed it 100,000.
Protect personal data like your reputation depends on it (because it does): Once you have that “just enough” data, guard it fiercely. Use encryption, access controls, and all critical IT measures. Generally, keeping everything within your research platform will cover this: use the tools integrated into your secure ecosystem and don’t start sending data to a third-party AI tool. It’s basic, but it works.
Delete what you don’t need: Data has a shelf life. Once your research analysis is done and the results are delivered, purge personal identifiers you no longer require to limit exposure.
Leverage privacy-by-design tech: Consider using anonymization where appropriate so the AI doesn’t process personal details it doesn’t need; a minimal sketch of what this can look like in practice follows this list.
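To make these points concrete, here is a minimal sketch in Python (using pandas) of minimization and pseudonymization applied before any response data reaches an AI model. The column names (“email”, “full_name”, “verbatim”, and so on) are hypothetical placeholders rather than a reference to any particular platform; adapt them to your own survey export.

```python
# Minimal sketch: keep only the fields the analysis needs, pseudonymize the
# respondent ID, and purge direct identifiers once they are no longer needed.
# All column names are hypothetical placeholders.
import hashlib

import pandas as pd

NEEDED_COLUMNS = ["respondent_id", "age_band", "region", "verbatim"]
DIRECT_IDENTIFIERS = ["email", "full_name", "phone"]


def minimize_for_analysis(responses: pd.DataFrame) -> pd.DataFrame:
    """Return only the columns the research objective requires,
    with the respondent ID pseudonymized via a one-way hash."""
    slim = responses[NEEDED_COLUMNS].copy()
    slim["respondent_id"] = slim["respondent_id"].astype(str).apply(
        lambda rid: hashlib.sha256(rid.encode("utf-8")).hexdigest()[:12]
    )
    return slim


def purge_identifiers(responses: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers once fieldwork and quality checks are done."""
    return responses.drop(columns=DIRECT_IDENTIFIERS, errors="ignore")
```

The specifics matter less than the habit: the “just enough” subset is created deliberately, and anything the model doesn’t need never leaves your secure environment.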
In short, less is more when it comes to personal data. By minimizing what you collect, you stay on the right side of the Code and build trust with participants and clients. After all, nothing undermines a cutting-edge research project faster than an old-fashioned data scandal.
Be transparent (no black box surprises)
AI in research shouldn’t be a secret sauce you hide from clients or participants. The Code’s guidance on transparency (Articles 7–8) makes it clear that clients and the public deserve to know what went into the research process – including whether AI or other high-tech wizardry was involved.
Practical moves:
Tell clients when AI is on the case: Did you use an AI to analyze open-ends, find patterns in big data, or generate insights? Let your client know upfront. For example, “We used a machine learning model to segment the survey responses, and our team then reviewed and refined those segments.” By doing so, you’re complying with the Code and you’re likely impressing the client with your modern yet transparent approach.
Mark the lines between human and machine: When you deliver findings, indicate which insights came from AI-driven analysis and where human expertise came into play. The Code even suggests making the “extent of human interpretation” versus AI-generated results explicit. For instance, you might annotate a report with notes like “AI-identified themes, human-verified”.
Be open about synthetic data or virtual respondents: If you used synthetic data to augment your research or tested hypotheses with simulated respondents, say so. For example, “No consumers were harmed (or surveyed) in this experiment. We used synthetic data to model purchasing behavior.” Disclosure also ensures no one mistakes synthetic insights for real population feedback.
Honesty with participants (when relevant): While much of the Code’s transparency focus is toward clients and published research, don’t forget the participant experience. If an AI like a chatbot is conducting interviews, consider informing respondents that they’re talking to a virtual researcher.
Think of transparency as a trust amplifier and a layer of refreshing honesty: it is what turns outputs into informed insights. Rather than fearing that revealing the use of AI will undermine your value, recognize that openness augments your credibility.
Guard against bias (fairness & inclusivity)
Bias is the dirty four-letter word that can quietly creep into AI systems and tarnish your research. It is also something researchers are experts at spotting and mitigating.
We need to ensure our AI-powered research is fair to all groups and inclusive in design. Algorithms might be metal and code, but they can inadvertently carry very human prejudices.
Practical moves:
Train AI on diverse data: An AI model is only as good (or as biased) as the data you feed it. If your training data skews toward one demographic, your AI’s outputs will too. Give your AI a balanced diet of data.
This is particularly important as synthetic data moves from the theoretical to the practical. Studies show that while current models represent United States populations reasonably well, they quickly become less accurate as cultural distance from the training data widens.
Audit for bias regularly: Don’t assume all is well. Test your AI. If you have an AI summarizing survey comments or scoring sentiment, periodically review its results for patterns of bias (a minimal sketch of one such check follows this list). Does it consistently misinterpret slang used by a certain community? Does a “neutral” model rate feedback from one group more negatively than another?
Embed inclusivity into research design: This goes beyond the data to your overall method. If you’re using AI to recruit or sample respondents, double-check that the process doesn’t exclude important voices (e.g., only selecting those who are very active online, which might leave out certain age groups) and that the AI is usable for all audiences you are collecting from.
Human review of AI decisions: When an AI model makes a critical decision, like flagging irrelevant responses, have a human double-check those calls, especially early on. A person might notice if all the outliers an AI wants to drop are from one demographic, a red flag that the algorithm doesn’t handle that group well.
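As a concrete illustration of the “audit for bias regularly” point above, here is a minimal sketch in Python (again assuming pandas) of a recurring check that compares an AI’s sentiment scores across demographic groups and flags large gaps for human review. The column names and the 0.2 threshold are illustrative assumptions, not a standard.

```python
# Minimal sketch: summarize AI sentiment scores by group and flag groups whose
# average sits far from the overall average, a possible sign of bias.
# Assumes hypothetical columns "group" (e.g., age band or region) and
# "ai_sentiment" (a model score between -1 and 1).
import pandas as pd


def audit_sentiment_by_group(scored: pd.DataFrame,
                             group_col: str = "group",
                             score_col: str = "ai_sentiment",
                             max_gap: float = 0.2) -> pd.DataFrame:
    """Flag groups whose mean AI sentiment differs from the overall mean
    by more than max_gap, so a human can review them for model bias."""
    overall_mean = scored[score_col].mean()
    summary = (
        scored.groupby(group_col)[score_col]
        .agg(["count", "mean"])
        .rename(columns={"mean": "group_mean"})
    )
    summary["gap_vs_overall"] = summary["group_mean"] - overall_mean
    summary["needs_review"] = summary["gap_vs_overall"].abs() > max_gap
    return summary.sort_values("gap_vs_overall")
```

A flagged group isn’t proof of bias; it’s a prompt for a researcher to read the underlying verbatims and decide whether the gap reflects genuine sentiment or a model that handles that group poorly.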
Think of yourself as a bias-bulldozer. It’s not enough that your AI is efficient; it also needs to be just.
Keep humans in the loop (oversight & accountability)
AI can automate, accelerate, and augment, but it shouldn’t operate on autopilot. The Code’s 2025 revision puts a fresh emphasis on the “growing need for human oversight”. This means blending AI strength with human judgment at every critical juncture.
Practical moves:
Always have a human pilot: AI might draft a summary of key findings, but a seasoned researcher should review it, contextualize it, and ensure it makes sense in the real world. Humans can catch the “why” and “should we” critical thinking questions that a machine can’t.
Shared responsibility with tech partners: If you’re working with an AI platform or a data analytics partner, check their ethical standards.
Accountability: If an AI tool in your project goes haywire, own it and fix it. Have a plan for when AI outputs conflict with common sense or general research best practices.
Continuous human monitoring: Integration of AI isn’t a set-and-forget deal. Establish checkpoints: maybe a weekly review of AI-generated survey invites for any weirdness, or a senior analyst sign-off on any AI-derived insight before it goes to the client. These are the kinds of human safeguards that turn AI from a risky black box into a reliable co-pilot.
The future of research is AI+HI (Human Intelligence) working together. By maintaining active human oversight, you ensure that AI remains a powerful tool in your kit.
Conclusion: Trust, tech, and the future of insight
AI in market research offers incredible opportunities. Embracing AI ethically means we can all enjoy these benefits while strengthening trust with participants, clients, and the public. The 2025 ICC/ESOMAR Code isn’t a buzzkill to your innovation; think of it as a compass ensuring your shiny new AI toy is pointed due North (ethically speaking).
These five rules go hand in hand with best practice in research, just with an added artificial consideration. These aren’t heavy lifts, especially when you’re likely doing them already, but they make a world of difference in how your research is perceived and in the real impact it has.
By proactively applying these principles, you show clients and participants that you’re not just riding the AI wave; you’re steering it responsibly. That builds credibility and goodwill, which are the bedrock of long-term success in our industry. So, go forth, innovate, and make “ethically” mean something again.