It is no news that the insights industry has been undergoing significant changes in the last few years. Rapid technological advancements have ushered in new avenues for revenue, optimised existing processes and democratised data and tools. In fact, several companies have diversified their offerings, blurring the boundaries between the market research, research software and reporting sectors. For instance, the latest edition of ESOMAR’s Global Market Research reported that 29% of the market research sector’s US$ 54 billion turnover is derived from research software activities, while 6% comes from reporting activities.
This trend was certainly on display at the recently concluded Data and Analytics Conference organised by the Data and Insights Network, where attendees saw several market research companies discuss data and analytics techniques that optimise existing processes and open up diverse revenue streams.
Data privacy and representation
One intriguing session was led by Gilian Ponte, who highlighted that there’s no one-size-fits-all solution to privacy protection. His proposed approach is Differential Privacy, a mathematical framework for protecting the individuals behind a dataset. The first step is to quantify privacy risk. I was initially sceptical of the idea, but he went on to explain it with graphs and maths (that should convince people!). He suggested viewing privacy risk as a dial with varying degrees rather than an on/off switch, a potentially useful framing given the recent scrutiny around targeting. As the Global Market Research 2024 report noted, the loss of cookies has created a “signal loss” for brands and agencies, which must now find new ways to understand customers without compromising privacy. First-party data and Differential Privacy may offer promising alternatives.
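To make the dial metaphor concrete, here is a minimal sketch (my own illustration, not code from the session) of the classic Laplace mechanism for releasing a differentially private mean. The parameter epsilon is the dial: turning it down adds more noise and strengthens privacy, turning it up does the opposite.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism.

    epsilon is the privacy 'dial': smaller values add more noise
    (stronger privacy), larger values add less (weaker privacy).
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)       # bound each respondent's influence
    true_mean = clipped.mean()
    sensitivity = (upper - lower) / len(clipped)  # max change one respondent can cause
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

ages = np.array([23, 35, 41, 29, 52, 38, 45, 31])  # toy respondent data
for eps in (0.1, 1.0, 10.0):                        # turning the dial
    print(f"epsilon={eps}: private mean ~ {dp_mean(ages, 18, 80, eps):.1f}")
```

Running it a few times shows the trade-off directly: at epsilon 0.1 the reported mean jumps around wildly, while at 10 it is close to the true value almost every time.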
As privacy concerns rise, researchers are increasingly exploring synthetic data (data generated artificially) as an alternative to directly sourcing consumer data. This approach has become especially relevant since obtaining representative data can be both costly and time-consuming. Synthetic data is also seen as a way to amplify the voices of hard-to-reach target consumers, and as ESOMAR’s Claudio Gennaro and Paula Fernandez noted in the latest edition of Global Research Software, regulators view its privacy-enhancing potential as a major advantage. When Kirby Johnson from Bulbshare shared the various use cases of synthetic data, she stressed the advantage of interacting with hard-to-reach consumer personas such as children and older people, while also urging caution. As she and the experts in Global Research Software 2024 noted, synthetic data is only as smart as the input it receives. It will be interesting to watch synthetic data develop, especially since regulation of this evolving field is still nascent.
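As a toy illustration of the “only as smart as its input” point (my own sketch, not an example from the talk), the snippet below fits a simple multivariate normal to a tiny survey sample and draws artificial respondents from it; whatever biases or gaps the original sample has carry straight over into the synthetic one.

```python
import numpy as np
import pandas as pd

# Toy "real" survey responses (in practice these would be actual respondents).
real = pd.DataFrame({
    "age": [23, 35, 41, 29, 52, 38, 45, 31],
    "satisfaction": [7, 8, 6, 9, 5, 7, 6, 8],
})

# Fit a simple multivariate normal to the observed columns ...
mean = real.mean().to_numpy()
cov = real.cov().to_numpy()

# ... and sample new, artificial respondents from it.
rng = np.random.default_rng(42)
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=100), columns=real.columns
)

# "Only as smart as the input": the synthetic data mirrors whatever
# biases or gaps the real sample had, nothing more.
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```

Real synthetic-data tools use far richer generative models than this, but the principle is the same: if the seed sample under-represents a group, so will the synthetic respondents.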
AI as a research ally (terms and conditions apply!)
No insights industry event these days is complete without discussing AI. I had the chance to attend several sessions presenting interesting applications of AI. A key takeaway from all of them was that AI is a friend to researchers because it saves time and money: think summarising large amounts of text, or being more inclusive through language and medium. However, this new friend needs a degree of human oversight, given AI's tendency to be biased and to hallucinate. This resonates with the guidance material ESOMAR published earlier this year in 20 questions to help buyers of AI-based services.
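In practice, “human oversight” can be as simple as making sure no AI-generated output ships without a reviewer signing off. The sketch below is purely illustrative (ai_summarise and summarise_with_oversight are hypothetical names, not tools mentioned at the conference) and shows one way to wire a human check into a summarisation step.

```python
from dataclasses import dataclass

@dataclass
class Summary:
    text: str
    approved: bool = False

def ai_summarise(document: str) -> str:
    # Placeholder for whatever model or API you actually use (hypothetical).
    return document[:200] + "..."

def summarise_with_oversight(document: str, reviewer) -> Summary:
    """AI drafts, a human signs off: nothing is used unreviewed."""
    draft = Summary(text=ai_summarise(document))
    draft.approved = reviewer(draft.text)  # human checks for bias and hallucination
    return draft

# The reviewer callback could be a CLI prompt, a web form, a QA workflow, etc.
result = summarise_with_oversight(
    "A very long open-ended survey response ...",
    reviewer=lambda text: True,
)
print(result)
```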
Navigating the evolving AI-regulations landscape
With regulations around AI slowly being developed worldwide, a concern on everyone’s mind is whether regulation will stifle innovation or aid it. ESOMAR’s very own Jules van Vlokhoven delivered a masterclass at the conference, offering some clarity on the prevailing regulatory landscape worldwide and what these regulations aim to achieve. In fact, ESOMAR was recently invited to participate in a working group shaping the European Union’s Code of Practice on General-purpose AI. ESOMAR has also been updating its own Code of Conduct, which was recently sent for public consultation.
Despite the cloudy Dutch weather, the conference was a bright spot for ideas and collaboration. It was a reminder that data professionals can foster trust and clarity with innovative visualisation, proactive data security, privacy-conscious analytics, and responsible AI. I’m looking forward to the next gathering of insights professionals!