How Can Five Fundamental Principles Guide Your Research?

31 March



Research and insights are in a state of flux. The AI revolution is threatening to upend jobs, options, workflows, and even the structure of the research ecosystem. At the same time, concerns about online panel quality have created an existential debate about their future. Research needs a North Star to help navigate, and I believe the ICC/Esomar Code is that North Star.

Why do we have codes of conduct for research and insights?

Let's start by considering the "why." I think there are four key reasons:

  1. To enable researchers to understand the difference between good and bad practice.

  2. To enable buyers of research services to distinguish between valid approaches and snake oil.

  3. To reassure research participants that their rights and privacy will be respected.

  4. To elevate the standards of research, increasing its value to society in general and to decision-makers in particular.

The ICC/Esomar Code

In my opinion, the best international guideline for good research is the ICC/Esomar Code on Market, Opinion and Social Research and Data Analytics. The two key features of this Code are:

  1. It is a collaboration between businesses and researchers. Esomar's members are spread across more than 130 countries. The ICC is the world's largest business organisation, representing over 45 million companies.

  2. It is a principles-based code, as opposed to a rules-based code. This means it can readily be applied to new developments, such as AI.

This is why the Code is recognised by more than 60 associations in more than 50 countries.

Five principles to guide your research

The full ICC/Esomar Code is freely available on the Esomar website. It contains plenty of detail, but I want to focus on its five Fundamental Principles. Below I show how these principles apply to everyday research issues and to the cutting edge of AI-enabled research.

1. All research must be legal, honest, transparent and truthful

The legal and honest parts go without saying (but they nevertheless need to be said). However, it is transparency and truthfulness that provide active, day-to-day guidance. At the most mundane level, it means you can't tell research participants that the survey will take 10 minutes if it will actually take 20 minutes. You can't tell a vendor that you will pay them in 30 days if you are going to pay them in 180 days. And you can't refuse to tell a client how you conducted your analysis.

In terms of AI, it means telling clients that you have used AI, telling participants whether they are talking to a person or a bot, and transparently telling data users the confidence they can place in the outputs of synthetic data. We can get into more detail about AI, but transparency takes care of a large proportion of the issues.

2. All research must be conducted with due care. Interactions must be fair, respectful and avoid harming the data subject.

In terms of AI, this means guarding against data being inadvertently shared, checking that hallucinations are not occurring, and keeping a human in the loop. Machines can't take due care: that is a human responsibility.

3. Researchers must clearly communicate to data subjects how their personal data will be collected and used. All personal data must be fully protected against unauthorised access or use.

This is a longstanding principle of market and social research. AI can help us improve the anonymisation of people's data, but it could also inadvertently reveal personal information. Any AI system needs to be validated to ensure privacy. This principle also raises the issue of digital twins and personas. If we are going to create digital twins from the data, this should be communicated to the people who provide it.

4. Researchers must behave ethically and not do anything that may undermine the public's trust and confidence in research or damage its reputation.

This is the catchall clause: researchers must not undermine trust in research. It covers cheating and fraud, as well as carelessness and recklessness. Buying a sample that is too cheap and not checking it would fall foul of this principle. In terms of AI, using an unproven technique without ensuring the buyer or user is aware of the risks would be against this principle.

5. Researchers have the overall responsibility and oversight for the research they undertake, irrespective of the method, technique and technology applied. Those who contribute to the research have a degree of responsibility commensurate with their activities, expertise and control.

The responsibility for what a researcher produces lies with them. We can't assume the algorithm is OK, that the data is OK, or that the AI has not hallucinated. We, the researchers, have to own the product we produce and give to users and decision-makers. This may change one day, but not this year, and not any time soon.

The bottom line

Legislators and regulators are currently turning their attention to all things AI, as well as to privacy and the avoidance of spam and AI slop. If we want research to be governed by researchers, we need self-regulation that can be demonstrated to work. If you follow the ICC/Esomar Code, you won't go wrong.

As Warren Buffett said, "It takes 20 years to build a reputation and five minutes to ruin it." That new AI idea might easily be the thing that damages your reputation if you don't tackle it the right way.

Ray Poynter
President at Esomar