Navigating legal and ethical challenges in the AI era

20 March

This article explores the legal and ethical impacts of AI in the market research and insights industry from the perspectives of its various stakeholders.

4 min read

As industries enter the AI era, many of the legal and ethical implications cannot yet be foreseen. Earlier this year, the AI Act was adopted by the European Parliament, whilst UNESCO and the European Commission signed an agreement to accelerate the global implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence. Robert Heeg asked JC Escalante, Global Head of Generative AI (GenAI) at Ipsos, what measures legislative bodies are taking to control and ensure the proper usage of AI for research. Here, we provide an edited extract; the full article appears in ESOMAR’s Global Market Research 2023 report.

Protecting the public from harm

The capacity for technological innovation in the Generative AI space creates several challenges for governments, citizens, the private sector, and society. Legislative frameworks must strike a balance between AI’s enormous promise and its risks.

The Insights Industry is not isolated from this tension, explains Escalante: “On the one hand, several draft regulations, like the EU Artificial Intelligence Act, leave compliance with a future law to rely mostly on self-assessment. On the other hand, the recent rush to assess, experiment, and deploy large models at scale caught many companies and industry players by surprise, with weak or non-existent responsible AI practices, frameworks, or tools.”

Escalante points to the very nature of legislative processes, which are designed with long feedback loops to create space for debate and analysis. “This underlines the importance for industry players of all sizes of protecting the public from harm. They should adopt ethical practices and remain vigilant about protecting privacy and establishing domain-specific frameworks to address accountability, bias mitigation, and safety measures.”

Identifying challenges, spotting risks

From a broader perspective, Escalante observes that AI has been powering a narrow set of use cases within the self-serve research space for years. The new wave of large GenAI models is blurring the boundaries of what is possible within self-serve research, along with the associated risks. “One challenge in this space is the limited understanding of the emerging behaviours and capabilities of the new GenAI models. Detecting whether biased heuristics influence the outputs is complicated. An overlapping risk is the generation of information not grounded in reality or factual data, the so-called hallucinations.”

One of the first steps in managing these risks, advises Escalante, is implementing frameworks to evaluate AI tools and models. Ipsos evaluates AI tools against three criteria: Truth, Beauty, and Justice.

  1. Truth: This domain focuses on the accuracy of the models and their outputs, examining their quality and guarding against hallucinations or fabrications.

  2. Beauty: The most important aspect of beauty in AI focuses on the explainability of its output. Some use cases also include a model’s ability to surprise and generate new insights.

  3. Justice: This domain encompasses multiple important areas: AI ethics, algorithmic fairness, data security, and privacy, alongside the rights and responsibilities of data creators used for training and users of the models.

Managing consequences: the discussion continues

Escalante is convinced that AI will continue transforming the Insights Industry, creating new opportunities and raising further questions about security, privacy, safety, and interpretability.

“Most legislative efforts today centre around preventing first-order consequences arising directly from the development or application of AI, like mishandling of data, algorithmic discrimination, copyright issues, etc.”

As the industry progresses to assess, trial, and adopt AI-powered solutions, especially with Generative AI, he feels that more discussion spaces will be needed to identify and manage second-order consequences. “For example, the environmental impact of AI and the actions required to offset the additional estimated carbon footprint created by these technologies.”

Curious to know what else the industry experts have shared about AI? Download our Global Market Research 2023 report.