How Can AI Systems Learn to Make Fair Choices?
    Simone Stumpf with a prototype human-in-the-loop feedback system displayed on a laptop

News & Views


Jul 19 2022

Researchers from the University of Glasgow and Fujitsu Ltd have begun a year-long collaboration, known as ‘End-users fixing fairness issues’, or Effi, to help artificial intelligence (AI) systems make fairer choices by lending them a helping human hand.

AI has become increasingly integrated into automated decision-making systems in healthcare, as well as in industries such as banking and some nations’ justice systems. Before they can be used to make decisions, AI systems must first be ‘trained’ through machine learning, which works through many examples of the kinds of human decisions they will be tasked with making. The system then learns to emulate those choices by identifying, or ‘learning’, a pattern. However, its decisions can be negatively affected by the conscious or unconscious biases of the humans who made the example decisions. On occasion, the AI itself can even ‘go rogue’ and introduce unfairness.
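The way biased example decisions carry over into a trained model can be sketched in a few lines. This is an invented toy, not the systems described in the article: a trivial learner that memorises the majority decision seen for each group of applicants will faithfully reproduce any bias present in its training examples.

```python
# Toy illustration (invented data): a model trained on biased human
# decisions learns to reproduce that bias as its "pattern".
from collections import Counter

# Historical human decisions: identical qualifications, different groups.
training = [
    ({"group": "A", "qualified": True}, "approve"),
    ({"group": "A", "qualified": True}, "approve"),
    ({"group": "B", "qualified": True}, "reject"),  # biased human decision
    ({"group": "B", "qualified": True}, "reject"),  # biased human decision
]

def train(examples):
    """Learn the most common decision per group: the 'pattern' in the data."""
    by_group = {}
    for applicant, decision in examples:
        by_group.setdefault(applicant["group"], Counter())[decision] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(training)
# Two equally qualified applicants now receive different decisions.
print(model["A"], model["B"])  # approve reject
```

Nothing in the data distinguishes the two groups except the historical decisions themselves, so the unfairness is invisible unless someone checks for it.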

Addressing AI system bias

The Effi project sets out to address some of these issues with an approach known as ‘human-in-the-loop’ machine learning, which more closely integrates people into the machine learning process to help AIs make fair decisions. It builds on previous collaborations between Fujitsu and Dr Simone Stumpf, of the University of Glasgow’s School of Computing Science, which explored human-in-the-loop user interfaces for loan applications based on an approach called explanatory debugging. This enables users to identify and discuss any decisions they suspect have been affected by bias; from that feedback, the AI can learn to make better decisions in the future.
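The feedback loop described above can be sketched in miniature. Everything here is an illustrative assumption — the feature names, weights, and feedback mechanism are invented for this sketch and are not the Effi project's actual interface: a toy linear loan-scoring model exposes its per-feature contributions (the ‘explanation’), a human reviewer flags a feature that acts as a proxy for bias, and the model is corrected accordingly.

```python
# Minimal sketch of an explanatory-debugging cycle (illustrative only;
# the features and weights below are invented, not the Effi system).

# A toy loan-approval model: a linear score over named features.
weights = {"income": 0.6, "credit_history": 0.5, "postcode_group": -1.2}

def explain(applicant):
    """Expose per-feature contributions so a reviewer can spot bias."""
    return {f: weights[f] * applicant[f] for f in weights}

def predict(applicant):
    """Approve when the total score is positive."""
    return sum(explain(applicant).values()) > 0

applicant = {"income": 1.0, "credit_history": 1.0, "postcode_group": 1.0}

before = predict(applicant)  # rejected: the postcode term sinks the score

# A human reviewer inspects the explanation, flags 'postcode_group' as a
# proxy for a protected attribute, and the feedback zeroes its weight.
for flagged_feature in ["postcode_group"]:
    weights[flagged_feature] = 0.0

after = predict(applicant)   # approved: the flagged feature is ignored
print(before, after)
```

In a real system the correction would feed into retraining rather than directly editing weights, but the shape of the loop — explain, flag, correct — is the same.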

“This problem is not only limited to loan applications but can occur in any place where bias is introduced by either human judgement or the AI itself. Imagine an AI learning to predict, diagnose and treat diseases but it is biased against particular groups of people. Without checking for this issue you might never know! Using a human-in-the-loop process like ours that involves clinicians you could then fix this problem,” Dr Stumpf told International Labmate.

Trustworthy systems urgently needed

“Artificial intelligence has tremendous potential to provide support for a wide range of human activities and sectors of industry. However, AI is only ever as effective as it is trained to be. Greater integration of AI into existing systems has sometimes created situations where AI decision makers have reflected the biases of their creators, to the detriment of end-users. There is an urgent need to build reliable, safe and trustworthy systems capable of making fair judgements. Human-in-the-loop machine learning can effectively guide the development of decision making AIs in order to ensure that happens. I’m delighted to be continuing my partnership with Fujitsu on the Effi project and I’m looking forward to working with my colleagues and our study participants to move forward the field of AI decision making,” she added.

Dr Daisuke Fukuda, head of the research centre for AI Ethics, Fujitsu Research of Fujitsu Ltd, said: “Through the collaboration with Dr Simone Stumpf, we have explored the diverse senses of fairness that people around the world hold about artificial intelligence. That research led to the development of systems that reflect those diverse perspectives in AI. We see the collaboration with Dr Stumpf as a powerful means of advancing Fujitsu’s work on AI ethics. This time, we will take on the new challenge of building fair AI technology based on people’s views. As the demand for AI ethics grows across society, including industry and academia, we hope that Dr Stumpf and Fujitsu will continue to work together so that research at Fujitsu contributes to our society.”


