Bias and ethics of Artificial Intelligence
Observations and takeaways from the AI Forum
Article series
AI Taskforce Roundtables
- The evolution of research methods
- Enriching qualitative insights
- Unleashing AI's power in quantitative analysis
- Crafting compelling narratives from AI-generated insights
- Adventure Table - GPT Exercise: Practical AI Engagement with ChatGPT
- Pros & cons: The dual faces of AI
- Bias and ethics of Artificial Intelligence
Bias
‘Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results. Business and organisational leaders need to ensure that the AI systems they use improve on human decision making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI…’ (Source: Harvard Business Review, 25 October 2019)
Questions to consider:
Is there a link between what we know as ‘market research bias’ in its various forms (say/do gaps, cognitive bias, questionnaire design, sampling, coverage gaps, population gaps, etc.) and the bias being discussed here – is it something different, or are there important overlaps?
How can we reconcile the issue of bias in research with the need to reduce/eliminate bias within the field of AI?
How can we behave responsibly and maximise the contribution of our industry?
Ethical Frameworks
There are hundreds of ethical guidelines on AI; the EU’s Ethics Guidelines for Trustworthy AI are one example. The Guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy, together with a specific assessment list that aims to help verify the application of each requirement:
Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can be minimised and prevented.
Privacy and data governance: Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data and ensuring legitimised access to data.
Transparency: The data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholders concerned. Humans need to be aware that they are interacting with an AI system and must be informed of the system’s capabilities and limitations.
Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and should involve relevant stakeholders throughout their entire life cycle.
Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They must therefore be sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
Questions to consider:
Can we summarise what we mean by ethics here? A system of rules, moral principles, and frameworks that govern our behaviour?
And if we can, should these be subject to a code (something that has a sanction attached to it)?
Could we shape a set of ethical guidelines, and what is the benefit?
Judith Passingham
Chair of the Professional Standards Committee at ESOMAR
Judith Passingham has worked in the market research industry for over 35 years, in many different roles spanning general management, client account leadership, service development, sales, and operations, on both a global and a European basis.
She started her career at the British Market Research Bureau working on the TGI and various media measurement studies, then worked at AGB and subsequently TNS, where she ran the UK and then the global panel division as CEO of Worldpanel and as Joint CEO of Europanel, a joint venture between TNS and GfK.
Judith was appointed CEO of TNS Europe and, following the WPP acquisition of TNS, oversaw the integration of RI and TNS in Northern and Eastern Europe. In 2014 she joined Ipsos to run its Access Panel services, where she drove service integration into one global entity and launched device-agnostic interviewing and programmatic sampling.
In 2016 she took on responsibility for Ipsos’s operational capability. She retired in 2019 and now volunteers for Pilotlight and the Maple Lodge Nature Reserve. In January 2020 she was appointed Chair of the Professional Standards Committee at ESOMAR.