Bias and ethics of Artificial Intelligence

1 December 2023

Observations and takeaways from the AI Forum

4 min read

Bias

‘Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results. Business and organisational leaders need to ensure that the AI systems they use improve on human decision making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI…’ (Source: Harvard Business Review, 25 October 2019.)

Questions to consider:

  1. Is there a link between the forms of ‘market research bias’ we already know (say/do gaps, cognitive bias, questionnaire design, sampling, coverage gaps, population gaps, etc.) and the bias being discussed here — is it something different, or are there important overlaps?  

  2. How can we reconcile the issue of bias in research with the need to reduce/eliminate bias within the field of AI? 

  3. How can we behave responsibly and maximise the contribution of our industry? 

Ethical Frameworks

There are literally hundreds of ethical guidelines on AI; the EU’s Ethics Guidelines for Trustworthy AI are one prominent example. The Guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy, with a specific assessment list to help verify the application of each:  

  1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches. 

  2. Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can be minimised and prevented. 

  3. Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data and ensuring legitimised access to data. 

  4. Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholders concerned. Humans need to be aware that they are interacting with an AI system and must be informed of the system’s capabilities and limitations. 

  5. Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. To foster diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle. 

  6. Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They must therefore be sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.  

  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured. 

Questions to consider:

  1. Can we summarise what we mean by ethics here? A system of rules, moral principles, and frameworks that govern our behaviour? 

  2. And if we can, should these principles be subject to a code (something that has a sanction attached to it)? 

  3. Could we shape a set of ethical guidelines of our own, and what would the benefit be?  

Judith Passingham
Chair of the Professional Standards Committee at ESOMAR