Opportunities, challenges and threats that AI presents
Our discussion revolves around the opportunities, advancements, challenges and threats that AI presents to the market research industry and how it impacts the efficiency and efficacy of insights, both positively and negatively.
The Insight250 spotlights and celebrates 250 of the world’s premier leaders and innovators in market research, consumer insights, and data-driven marketing. The awards have created renewed excitement across the industry whilst strengthening the connectivity of the market research community. Winners of the 2024 Insight250 will be announced at ESOMAR Congress with the full list revealed on 30th September - previous winners can be seen at Insight250.com.
Sign up now for the official announcement at https://event.on24.com/wcc/r/4695262/131D68960E7697DB7E5C538E8DF644EE or by scanning the QR code.
With so many exceptional professionals named to the Insight250, it seems fitting to tap into their expertise and unique perspectives across various topics. This regular series does just that, gathering the expert perspectives of many of these individuals in a series of short topical features.
This edition is a special one for the industry. We sat down with three association titans: Melanie Courtright, Chief Executive Officer of the Insights Association; Debrah Harding, Managing Director of the Market Research Society; and Judith Passingham, Chair of ESOMAR’s Professional Standards Committee. Our discussion revolves around the opportunities, advancements, challenges and threats that AI is presenting to the market research industry and how it is impacting the efficiency and efficacy of insights, both positively and negatively. I hope you find this discussion as insightful as I did – enjoy! A huge thank you to Melanie, Debrah and Judith for sharing your expertise and insight on such a vital topic.
Crispin: In what ways is the research sector already benefiting from AI?
Melanie (IA): “Insights is already seeing success in the areas of data analytics and processing, automation, enhancing research accuracy and quality, generating content, and improving creativity in research design. These areas are impacted through increased speed, output, and innovation that allows researchers to spend their time more strategically and effectively.”
Debrah (MRS): “Researchers have been using machine learning for many years and, though generative AI technologies are still relatively new, the sector is fast adopting them to expedite what were once laborious parts of their day-to-day work. Large language models, for example, are helping researchers to interact with and understand more about personas identified through segmentation research as if they’re real individuals – a prime example is Ipsos’ PersonaBot, launched earlier this summer, which allows the simulation of focus groups based on segmentation data. As well as boosting productivity, these technologies are freeing up researchers’ time for more of the high-value, creative work they do to deliver the best insights for organisations.”
Judith (ESOMAR): “AI is starting to have an impact on some of the long-standing operational challenges experienced within our sector. For an industry so dependent on data, it is sometimes surprising how manual the operational processes can be, although this isn’t much discussed. AI is starting to enable faster processing of certain kinds of data: coding of open-ended responses, for example, has long posed accuracy challenges for operational researchers. Some of the new NLP techniques emerging are very exciting and have the potential to help researchers extract better value, faster, and improve accuracy.”
Crispin: What are you most excited by in terms of the support future developments in AI could bring to the sector?
Judith (ESOMAR): “AI has the potential to impact almost every area of the research sector. If you consider all the discrete aspects of conventional primary research – pre-survey design, sample sources, ‘in field’ work, post-field analysis, and reporting and deliverables – it is possible to imagine thousands of different advances that could be made through the application of AI-based techniques in each one. If we consider the combination of primary and secondary research, the potential is greater still. I wouldn’t highlight any specific area. AI is potentially all-encompassing, and the questions here are perhaps what we trust it to do, how quickly, and what evidence we need to push forward.
“I would add one additional remark here: because of the skillsets we have within our sector, I believe there are very real opportunities for the research industry to leverage those skillsets beyond the sector. ‘Prompting’ could be one example where research skills may have resonance; ‘representation’ is another – thinking through the many issues in ensuring that training datasets are representative and comprehensive.”
Debrah (MRS): “The speed that AI could enable researchers to collate and analyse data in the future is exciting in itself. AI has much more to offer, though, and I’m most optimistic about potential improvements in our ability to better represent populations. The sector is on a continuous journey to accurately reflect the views and realities of everyone, no matter their background. If we can harness it in the right way, AI could help us to reach and represent groups which have been historically underrepresented through traditional techniques.”
Melanie (IA): “I am particularly excited about what AI can do for data quality. AI’s greatest promise is in its ability to scale and handle vast amounts of data, but to fully realise that potential, the foundational data inputs must be of the highest quality. Using AI to reduce fraud, improve accuracy, measure bias, and boost engagement in research among participants would enable the creation of much healthier data that can underpin AI advancements.”
Crispin: What’s the greatest risk which AI poses to research and how can the sector mitigate this?
Debrah (MRS): “While there is huge potential for AI to deliver greater representation, one of the greatest risks it poses is, in fact, doing the opposite. AI models are trained by people, based largely on content written by humans, and the biases these individuals hold can easily flow through into the technology and its outputs. Across the sector, we need to work consciously and proactively to mitigate these human biases and ensure transparency and explainability in the methodology and tools we use – to maintain both the quality of our work and clients’ confidence in the technology itself. This was one of the key motivations behind MRS’ ethics guidance to help the sector to deliver fair design, use and outputs of AI.”
Melanie (IA): “The two greatest risks are using AI on data that is not fit for purpose, which can amplify existing biases and weaknesses, and failing to base AI data and insights in primary data from real participants that reflects their values, needs, and expectations in real time. To mitigate these risks, the sector must invest in creating a healthy data framework, continue to value primary data from people, and maintain a strong focus on ethical data collection practices.”
Judith (ESOMAR): “I think there is a big risk in delegating some tasks to AI technologies without understanding sufficiently well what the AI is doing, and/or without the requisite human oversight. This is a particular risk at the current stage, where we as an industry are still experimenting to establish what works and what doesn’t. Sending out AI-generated questionnaires without human oversight or sign-off, as one example, could result in significant problems amongst research participants or even breaches of data privacy regulation; over-claiming the accuracy of ‘synthetically derived’ data could result in poor or highly inaccurate advice being given to research users, with all the consequences of that.
“As indicated in ESOMAR’s 20 Questions to help Buyers of AI-Based Services for Market Research and Insights (https://esomar.org/20-questions-to-help-buyers-of-ai-based-services), providers need to ensure that clients have a good understanding of how an AI service can be applied in research, the problems it can address and any limitations of an AI model, to help them assess the validity of the results and conclusions drawn. It is also important to ensure that services have been designed with a duty of care to humans in mind, to avoid any potential negative consequences.
“Finally, the issue of AI-created fraud is something that we need to be concerned about. The work being done by industry associations within the GDQ is an important part of the effort to combat this.”
Crispin: Do you think regulation is needed around the use and development of AI? What form do you think this should take?
Melanie (IA): “Yes, regulation is necessary both to enable innovation and growth, and to ensure that these advancements benefit society rather than cause harm. Data buyers want assurance that the data was collected and applied ethically, fostering confidence in their investment decision, which is usually grounded in adhering to codes and regulations. Those regulations also ensure no harm comes to society, companies, or data providers. When crafted with care, codes and regulations protect and spur adoption.”
Judith (ESOMAR): “I think this is firstly a question of following closely the legislation that is emerging, ensuring that legislators understand the specificities of our sector so that there are no unintended negative consequences of new legislation. I think it’s also an issue of ensuring that legislators and civic decision makers understand what citizens are thinking about AI – their hopes and fears – so that they are well placed to make the right decisions. Our skill and objectivity as an industry in representing citizens’ perspectives are an important aspect of this discussion.
“Closer to home, we are revising the ICC/ESOMAR International Code – our sector’s global code of conduct – to take account of AI and some other important developments, following the very recent update of the ICC’s Advertising and Marketing Communications Code. The ICC’s portfolio of codes is used by thousands of companies and organisations on a global basis, and our industry’s participation in this is important. This is a critical aspect of our self-regulation as a sector, so that we can demonstrate to regulators, to members of the public on whose co-operation we rely, and to ourselves that we are behaving in a proactive and responsible way in all aspects of our conduct, not just within the sphere of AI.”
Debrah (MRS): “Given the power of AI, and the justifiable scepticism from many organisations around its outputs, I believe regulation through legislation is needed. Legal changes should also be supported and complemented by self-regulatory mechanisms, such as the MRS Code of Conduct and the MRS AI guidance. This isn’t about red tape or processes for processes’ sake – it’s a case of upholding the quality, accountability and transparency of our work. This is particularly important with this technology, given most practitioners have to rely on third parties for the development of AI models and data.
“With the pace of development in this field, however, any regulatory framework will need to be adaptable and responsive to further advancements. This is where self-regulation can be most effective. Our approach should also align with international rules, or at least help UK businesses navigate regulations from abroad, given technology isn’t confined to country borders.
“Ultimately, there’s nothing to be lost in implementing change in a measured and managed way – there’s a lot to be gained, ensuring we’re making the best use of the technology while maintaining confidence and trust in our sector’s methods and insights.”
Crispin: What role and responsibilities do you think associations have in helping the insights industry adopt AI in a responsible manner?
Judith (ESOMAR): “This question depends on what you think associations are there to do. Someone once said to me that associations are there to do things that individual organisations in our industry cannot do on their own and I think that is a reasonable starting point. We are here of course to represent our sector to legislators, and this is critical around the topic of AI, where we have not only to understand the shape of emerging legislation but to ensure our members understand the implications of legislation in the interests of responsible adoption – and particularly where interaction with members of the public is concerned.
“Associations can help the industry understand what’s happening in this area, showcase learnings and best practice, and highlight responsibilities to clients, data subjects and the general public. ESOMAR has set up an AI task force to help with this, and we are keen for as many practitioners as possible to be involved in these discussions and workshops (https://esomar.org/guidance/navigating-the-future-of-insights). It is then incumbent on the Professional Standards teams within associations to develop clear perspectives on what works and what doesn’t. This will be a significant challenge, but a critical one.”
Debrah (MRS): “Part of our role is in providing the guidance and frameworks on ethical, transparent practices to preserve the quality and integrity of the sector’s work, regardless of what formal regulation or legislation may be put in place by governments. Our duty as associations is also to inform and persuade regulators to develop legislation which is protective, and enables innovation at the same time.
“That is not to say, however, that we should be overly reticent or fearful about encouraging the growing use of AI within research. It has huge potential to transform and improve the insights we uncover and we should continue to be ambitious in adopting this new technology. As associations we can help to highlight examples of best practice, allowing practitioners to learn from each other and encouraging sector-wide, responsible progress. The MRS Delphi Group’s strategic BEST Framework is an example of how associations can help responsible progress. This provides practitioners with conceptual guidance on the way Generative AI can be applied, and highlights what good looks like, alongside the pitfalls and risks.”
Melanie (IA): “Associations play a critical role in raising awareness, educating on trends, facilitating knowledge sharing, identifying and mitigating risks, and bridging the gap between the profession and government. Associations serve as champions of innovation, advocating for transparency, ethical guidelines, and reasonable governance.”
Crispin: I recently debated the pros and cons of synthetic data with Simon Chadwick, Finn Raben and Mike Stevens (Ed: see the full debate here: https://researchworld.com/innovations/synthetic-data-get-on-board-but-do-it-wisely and a summary published in Greenbook here: https://hubs.ly/Q02NpSQK0). How do you feel the sector should consider the issue of synthetic data? What do you see as the positives and the negatives?
Debrah (MRS): “As with other elements of AI, there is great potential in synthetic data. Large language models’ ability to generate data or simulate respondents can help to plug the gaps of underrepresented groups, circumvent the issue of survey fatigue and allow practitioners to delve into more depth on the views of a given demographic without undertaking resource-intensive in-field research.
“However, synthetic data needs to be handled with care and precision – without proactive measures to ensure accurate representation, for example, synthetic data can tend towards the middle, compressing diverse groups and amplifying the views of the average respondent, while removing outliers.
“Researchers must also ensure that they understand, track and communicate the source of information – whether it’s real or synthetic – to protect their own data lakes and the transparency and quality of future insights.”
Melanie (IA): “First, I want to set the context that synthetic data from my perspective is the creation of whole data sets, or whole segments of a data set. It’s different from imputation and inference in that there is no primary data collection specific to the same project or survey that serves as the majority of the data.
“The allure exists in the potential to create such data and remove the timing, cost, and quality issues associated with primary data. However, concerns remain about whether the data can be both accurate and precise at the speed of society, whether the underlying data is of sufficient quality not to amplify biases, and whether bias can even be understood.”
Judith (ESOMAR): “The applications of synthetic data are still being discussed and debated, and I think we can all see the ‘prize’. Firstly, we need to arrive at a common industry perspective on what synthetic data is: is data weighting, for example, synthetic data or not – and if not, why not, and how should we differentiate some of the things we have been doing for years from the new and emerging approaches? We then need to come onto the issue of what works, how, and what doesn’t work. What are the issues around new vs legacy data feeds? What are the impacts of legacy data structures, attribution and so on? I would like to see more emphasis on published evidence and proofs. And so, I think we are not yet at the point where we can answer this question.”
Crispin: What lessons from previous technological evolutions need to be considered when adopting AI and synthetic data?
Melanie (IA): “Previous innovations in our sector have been rooted in testing for consistency, replicability, and accuracy. The adoption of AI and synthetic data must be held to the same standard. Conducting parallel tests over time to ensure consistent and replicable results that provide accurate data is critical, and when discrepancies arise, it’s vital to understand why. We can also learn from history to avoid making the same mistakes as it relates to how we design questions, engage participants, and measure bias and quality.”
Judith (ESOMAR): “We shouldn’t assume that everything new and apparently cost-efficient is better, but should spend the necessary time to understand the trade-offs involved for the industry – including the quality of the data used and the validity of conclusions – in a logical and fact-based manner.”
Debrah (MRS): “We must avoid treating AI and synthetic data as a panacea for all of the sector’s challenges – or our newest solution may end up becoming our newest problem. The risk is that, in approaching this technology purely as a time or cost-saving device, quality controls fall by the wayside as researchers race to the bottom to meet clients’ expectations for faster, cheaper outputs. To make the most of AI and synthetic data and continue to produce trusted, effective insights, we need to be firm in investing the time and money needed to maintain the rigour, quality and integrity of our work.”
Crispin: Can you give your top tip on how to be a leader/innovator in our profession?
Judith (ESOMAR): “I think there are many characteristics required to be an effective leader in our profession, which is challenging in so many ways. The one thing I would highlight is the importance of personal integrity. What we do as an industry relies on trust – in what we do and how we do it. All the leaders that I admire and have admired in our industry have that quality.”
Debrah (MRS): “My advice to any leader or innovator is to embrace failure. I strongly believe you learn more from failing than you do from succeeding – and every failure, big or small, will ultimately help you to improve and succeed in the future.”
Melanie (IA): “Stay highly informed, and really invest in your own skills. Or, as my professor used to say, ‘stay marketable.’ Think of yourself as a product – it’s essential to continually invest in the utility of your skills to ensure your own personal product life cycle doesn’t move into decline and eventually sunset. Use changes in the profession as an opportunity to drive your own experimentation and skill development.”
Crispin: Deepest appreciation to each of you for sharing your unique perspectives and robust insight on AI and how it is impacting insights across the industry and around the world. It is incredible to have each of you participate in this valuable discussion. I also love all three of your top tips. THANK YOU.
Remember to sign up for the official announcement of the 2024 Insight250 winners by scanning the QR code below or clicking on the following link: https://event.on24.com/wcc/r/4695262/131D68960E7697DB7E5C538E8DF644EE
Crispin Beale
Senior Strategic Advisor at mTab, CEO at Insight250, Group President at Behaviorally

Crispin Beale is a marketing, data and customer experience expert. Crispin spent over a decade on the Executive Management Board of Chime Communications as Group CEO of leading brands such as Opinion Leader, Brand Democracy, Facts International and Watermelon. Prior to this, Crispin held senior marketing and insight roles at BT, Royal Mail Group and Dixons. Crispin originally qualified as a chartered accountant and moved into management consultancy with Coopers & Lybrand (PwC). Crispin has been a Board Director (and Chairman) of the MRS for c15 years and UK ESOMAR Representative for c10 years. As well as being CEO of Insight250, Crispin is currently Group President of Behaviorally, with responsibility for the client and commercial teams globally, and the Senior Strategic Advisor at mTab.