The United Kingdom government announced on Monday, October 16, a Fairness Innovation Challenge to address bias and discrimination in artificial intelligence (AI) systems.
The challenge invites UK-based companies to apply for government investment of up to £400,000 to fund innovative new solutions aimed at eradicating bias from AI technologies.
The competition aims to support up to three groundbreaking projects, each potentially securing a funding boost of up to £130,000.
This initiative aligns with the UK’s commitment to hosting the world’s first major AI Safety Summit, where discussions will focus on managing the risks associated with AI while maximising its potential for the benefit of the British people.
The Centre for Data Ethics and Innovation (CDEI), operating under the Department for Science, Innovation, and Technology, has opened the Fairness Innovation Challenge’s first round of submissions. The challenge aims to encourage the development of novel techniques for embedding fairness in the creation of AI models.
The primary goal is to counter the threats posed by bias and discrimination by encouraging innovative approaches, with AI model developers urged to consider the broader social context of their systems from the outset.
UK Government emphasising fairness in AI
Fairness in AI systems is one of the fundamental principles laid out in the UK government’s AI Regulation White Paper.
AI is a powerful tool for good, presenting near limitless opportunities to grow the global economy and deliver better public services.
In the UK, AI is already being trialled within the National Health Service (NHS) to aid clinicians in identifying cases of breast cancer, and it holds great potential in developing new drugs and treatments and addressing global challenges like climate change.
However, these opportunities can only be fully realised by addressing and rectifying issues related to bias and discrimination in AI systems.
Minister for AI, Viscount Camrose, says, “The opportunities presented by AI are enormous, but to fully realise its benefits we need to tackle its risks.”
“This funding puts British talent at the forefront of making AI safer, fairer, and trustworthy. By making sure AI models do not reflect bias found in the world, we can not only make AI less potentially harmful, but ensure the AI developments of tomorrow reflect the diversity of the communities they will help to serve,” adds Camrose.
Several technical bias audit tools are already available on the market, but many were developed in the United States. While companies can use these tools to identify potential biases in their systems, the government says they often fail to align with UK laws and regulations.
Focus areas of the challenge
The challenge promotes a fresh UK-led approach that emphasises the social and cultural context in AI systems in addition to the technical considerations.
The challenge will focus on two main areas:
The first involves a partnership with King’s College London, where participants from the UK’s AI sector will work on mitigating bias in generative AI models. These models, developed in collaboration with Health Data Research UK and with the support of the NHS AI Lab, are trained on anonymised records of over 10 million patients to predict potential health outcomes.
The second strand is a call for ‘open use cases,’ where applicants can propose novel solutions tailored to addressing bias in their own AI models and specific focus areas. Potential use cases include combating fraud, building new law enforcement AI tools, and helping employers create fairer systems for analysing and shortlisting candidates during recruitment.
Companies currently face a range of challenges in tackling AI bias, including insufficient access to demographic data and difficulty ensuring that potential solutions meet legal requirements.
The CDEI is working in close partnership with the Information Commissioner’s Office (ICO) and the Equality and Human Rights Commission (EHRC) to deliver this Challenge.
The Fairness Innovation Challenge closes for submissions at 11am on Wednesday, December 13, 2023, with successful applicants notified of their selection on January 30, 2024.