Transformation Directorate

The AI Ethics Initiative

Embedding ethical approaches to AI in health and care

  • What we do

    What is our role in ensuring the ethical adoption of AI in health and care?

  • Our projects

    Learn more about the research and practical interventions we’re supporting.

  • How to get involved

    Find out about opportunities to engage with our research and join our community of practice.

The AI Ethics Initiative supports research and practical interventions that could strengthen the ethical adoption of AI-driven technologies in health and care. We are translating principles into practice by building the evidence base needed to introduce new measures for mitigating risk and providing ethical assurance.

We invest in work that complements existing efforts to validate, evaluate and regulate AI-driven technologies. A primary focus of the Initiative is countering the inequalities that can arise from the ways in which these technologies are designed and deployed.

What we do

The NHS AI Lab is well placed to make a difference to the ethical assurance of AI given our role in supporting all aspects of the AI life cycle. This involves working with innovators as they define the purpose of their products, and guiding health and care professionals as they use these technologies to assist them in providing care.

A core focus of the AI Ethics Initiative is on how to counter the inequalities that may arise from the ways that AI-driven technologies are developed and deployed in health and care. We believe these inequalities aren’t inevitable, and that if they are proactively addressed we can realise the potential of AI for all users of health and care services.

We support projects that can demonstrate they are patient-centred, inclusive, and impactful. We collaborate with academia, the third sector, and other public bodies to achieve greater impact and positively transform how patients, citizens, and the workforce experience AI in health and care.

Community of practice

Join our Community for Racial and Ethnic Equity in AI on the AI Virtual Hub for early insights into projects and to learn from researchers and practitioners working in this area. We hope to advance knowledge and inform practice related to the use of AI in healthcare, with the aim of improving health outcomes for minority ethnic populations in the UK.

Our projects

We have a range of research projects underway, delivering work within the following areas:

Governing the use of data for AI

We want to involve patients and the public in deciding how and why access to health data should be granted for AI purposes, and are working closely with the AI Imaging team on these projects.

Honing approaches to data stewardship

We have partnered with Sciencewise (UKRI) to hold a public dialogue that will inform which model(s) of data stewardship the AI Ethics Initiative should invest in developing and refining through further research, with reference to national medical imaging assets.

Data stewardship describes practices relating to the collection, management and use of data. There is a growing debate about what a ‘responsible’ approach to data stewardship entails, with some advocating for a more participatory approach. The AI Ethics Initiative is seeking to ensure that the data stewardship model used for national (medical imaging) assets inspires confidence among patients, the public and key stakeholders. The central question we will seek to explore is how access to data for AI purposes should be granted.

The participants in the dialogue will inform the Terms of Reference for a research competition (a ‘Participatory Fund for Patient-Driven AI Ethics Research’) that we will hold to improve data stewardship approaches for national medical imaging assets established by the NHS AI Lab and more broadly across the NHS.

There is an Oversight Group in place to provide advice on the dialogue process and materials. We are grateful to the following individuals for their time and invaluable input as members of this Group:

Oversight Group members

Natalie Banner (Chair), Genomics England

Kira Allmann, Ada Lovelace Institute

Phil Booth, medConfidential

Sophie Brannan, British Medical Association

Margaret Charleroy, Centre for Improving Data Collaborations, NHS Transformation Directorate

Vicky Chico, Office of the National Data Guardian

Mark Halling-Brown, Royal Surrey County Hospital

Ruth Keeling, Data Policy, NHS Transformation Directorate

Jasmine Leonard, Freelance

Sinduja Manohar, HDR UK

Joseph Savirimuthu, University of Liverpool

Laurence Thorne, Data Policy, NHS Transformation Directorate

Susheel Varma, ICO

Joseph Watts, Data Analytics, NHS Transformation Directorate

Improving how decisions about data access are made

We have partnered with the Ada Lovelace Institute to design a model for an Algorithmic Impact Assessment (AIA), which is a tool that enables users to assess the possible societal impacts of an algorithmic system before it is used.

The AIA is being trialled as part of the data access process for national medical imaging assets, such as the National COVID-19 Chest Imaging Database. It will entail researchers and developers engaging with patients and the public about the risks and benefits of their proposed AI solutions before gaining access to medical imaging data for training or testing. The AIA thus helps address the question of why access to data for AI purposes should be granted.

Through the trial, we hope to demonstrate the value of involving patients and the public earlier in the development process, when there is greater flexibility to make adjustments and address possible concerns about AI systems.

Striving for health equity

We want to ensure that AI leads to improvements in health outcomes for minoritised populations.

We have partnered with the Health Foundation to support research in response to concerns about algorithmic bias. A research competition, enabled by the National Institute for Health Research (NIHR), was held to address the racialised impact of algorithms in health and care and explore opportunities to improve health outcomes for minority ethnic groups.

While algorithmic bias does not only affect racialised communities, examples of deploying AI in the US indicate that there is a particular risk of algorithmic bias worsening outcomes for minority ethnic patients. At the same time, there has been limited exploration of whether and how AI can be applied to address racial and ethnic disparities in health and care.

The research competition had two categories:

1. Understanding and enabling opportunities to use AI to address health inequalities

The focus of this first category is on how to encourage approaches to innovation that are informed by the health needs of underserved minority ethnic communities and/or are bottom-up in nature.

2. Optimising datasets, and improving AI development, testing, and deployment

The focus of this second category is on creating the conditions to facilitate the adoption of AI that serves the health needs of minority ethnic communities. For example, this may include mitigating the risks of perpetuating and entrenching racial health inequalities through data collection and selection and during the development, testing, and deployment stages.

The following four projects were awarded two-year funding in October 2021:

Assessing the acceptability, utilisation and disclosure of health information to an automated chatbot for advice about sexually transmitted infections in minoritised ethnic populations

Dr Tom Nadarzynski at the University of Westminster

This project aims to increase the uptake of STI/HIV screening among minority ethnic communities through an automated AI-driven chatbot that provides advice about sexually transmitted infections. The research will also inform the development and implementation of chatbots designed for minority ethnic populations within the NHS and more widely in public health.

I-SIRch - Using artificial intelligence to improve the investigation of factors contributing to adverse maternity incidents involving Black mothers and families

Dr Patrick Waterson and Dr Georgina Cosma at Loughborough University

This project uses AI to investigate factors contributing to adverse maternity incidents amongst mothers from different ethnic groups. This research will provide a way of understanding how a range of causal factors combine, interact and lead to maternal harm. The aim is to inform the design of interventions that are targeted and more effective for these groups.

Ethnic differences in performance and perceptions of AI retinal image analysis systems (ARIAS) for the detection of diabetic retinopathy in the NHS Diabetic Screening Programme

Professor Alicja Rudnicka (St George's Hospital) and Professor Adnan Tufail (Moorfields Eye Hospital and Institute of Ophthalmology, UCL). Co-investigators: Homerton University Hospital, Kingston University, and the University of Washington, USA.

This project aims to ensure that AI technologies that detect diabetic retinopathy work for all, by validating the performance of AI retinal image analysis systems that will be used in the NHS Diabetic Eye Screening Programme (DESP) in different subgroups of the population. This study will provide evidence of effectiveness and safety prior to potential commissioning and deployment within the NHS.
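To illustrate the kind of subgroup validation this study describes, the sketch below computes sensitivity and specificity separately for each group in a hypothetical validation set. The data, group labels and metrics are illustrative assumptions, not the study's actual methodology.

```python
import numpy as np

# Hypothetical validation data for a retinopathy classifier:
# ground-truth labels, model predictions, and a subgroup label per patient.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0])
group = np.array(["group_1"] * 4 + ["group_2"] * 4 + ["group_3"] * 4)

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    sens = (y_pred[y_true == 1] == 1).mean()
    spec = (y_pred[y_true == 0] == 0).mean()
    return sens, spec

# Performance that looks acceptable overall can still differ by subgroup,
# which is why per-group validation is needed before deployment.
for g in np.unique(group):
    mask = group == g
    sens, spec = sensitivity_specificity(y_true[mask], y_pred[mask])
    print(f"{g}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```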

STANDING together (STANdards for Data INclusivity and Generalisability)

Dr Xiaoxuan Liu and Professor Alastair Denniston at University Hospitals Birmingham NHS Foundation Trust

University Hospitals Birmingham NHS Foundation Trust and partners will lead STANDING Together, an international consensus process to produce standards for datasets underpinning AI systems, to ensure they are diverse, inclusive and can support the development of AI systems which work across all demographic groups. The resulting standards will help inform regulators, commissioners, policy-makers and health data institutions on whether AI systems are underpinned by datasets which represent everyone and don’t risk leaving underrepresented and minority groups behind.

The project team is currently holding a public consultation on its draft recommendations for ensuring datasets are diverse and inclusive. The consultation closes on 26 May and feedback can be given on the STANDING Together website.

Building confidence in clinical use of AI

We want to improve the trustworthiness of AI systems and encourage appropriate confidence in their clinical use.

Strengthening accountability for AI through 'trustworthiness auditing'

AI accountability toolkits are being used to encourage trustworthiness in AI by enabling users to confront and address potential risks, such as algorithmic bias and opacity. For example, the algorithmic impact assessment (AIA) we are developing with the Ada Lovelace Institute is a type of ‘accountability toolkit’ intended to support AI developers with auditing their technology at an early stage and to ultimately increase trust in the use and governance of AI systems. Other accountability toolkits include commercial tools, such as Google’s What-If Tool and IBM’s AI Fairness 360, which support users in making technical fixes to improve the interpretability of a model or to measure bias.
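As a concrete illustration of what ‘measuring bias’ can mean, the sketch below computes one common group-fairness metric, the demographic parity gap, on hypothetical model outputs. Toolkits such as those named above offer many such metrics; the data and group labels here are invented for illustration.

```python
import numpy as np

# Hypothetical outputs from a triage model, with a protected attribute.
# Accountability toolkits automate audits like this demographic parity check.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])  # 1 = flagged for review
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Selection rate: how often each group receives the positive decision.
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
parity_gap = max(rates.values()) - min(rates.values())

print(f"selection rates: {rates}")
print(f"demographic parity gap: {parity_gap:.2f}")  # 0.00 would be parity
```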

We are collaborating with the Wellcome Trust and Sloan Foundation to support the Oxford Internet Institute (OII) with developing the necessary evidence base and tools to assess and enhance the efficacy of AI accountability toolkits used in health and care. This project will complement our work with the Ada Lovelace Institute to trial an AIA, helping us to ensure that we have the necessary policies and standards in place to support the cultural and organisational adoption of such accountability toolkits.

The OII research team will ultimately produce a ‘meta-toolkit’ for trustworthy and accountable AI that comprises technical methods, best practice standards, and guidelines designed to encourage sustainable development, use, and governance of trustworthy and accountable AI systems. The meta-toolkit will help health and care practitioners, administrators, and policy-makers determine which accountability tools and practices are best suited to their particular use cases and will be most effective at identifying and mitigating the risks of AI systems at a local level.

Research published by the team as part of this project has already shown that all state-of-the-art ‘bias preserving’ fairness methods in computer vision, used for example in medical imaging AI systems, make things fairer in practice by ‘levelling down’: decreasing performance for the better-performing groups rather than improving outcomes for the most disadvantaged. The team has recommended simple alternative best practices for improving performance without the need to ‘level down’ in the interest of fairness.
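The figures below are invented, but they illustrate the ‘levelling down’ pattern this research describes: a fairness constraint can close a performance gap by pulling the better-served group down instead of lifting the worse-served group up.

```python
# Hypothetical per-group sensitivities before and after applying a
# gap-closing fairness constraint (illustrative figures only).
baseline = {"group_A": 0.92, "group_B": 0.80}
levelled_down = {"group_A": 0.80, "group_B": 0.80}  # gap closed by harming A

for name, scores in [("baseline", baseline), ("levelled down", levelled_down)]:
    gap = max(scores.values()) - min(scores.values())
    print(f"{name}: {scores}, gap={gap:.2f}")

# The gap shrinks to zero, but no group is better served than before;
# the recommended alternative is to improve group_B's performance instead.
```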

Developing appropriate confidence in AI among healthcare workers

We have partnered with Health Education England to research factors influencing healthcare workers’ confidence in AI-driven technologies and how their confidence can be developed through education and training. We have published two reports in relation to this research.

The first report argues that confidence in AI used in healthcare can be increased by establishing the trustworthiness of these technologies through robust governance and implementation.

In the context of clinical decision making, however, high confidence in AI-derived information may not always be desirable, even once the trustworthiness of a technology has been established. For example, a clinician may accept an AI recommendation uncritically, perhaps because of time pressure or limited experience of the clinical task, a tendency referred to as automation bias.

Read report one - Understanding healthcare workers’ confidence in AI

The second report determines educational and training requirements, and presents pathways for education and training offerings to develop the workforce’s confidence in AI.

The report calls for the fundamentals of AI to be added to training courses for all health and care professionals, with more advanced, specialist training for staff depending on their roles and responsibilities, whether in procurement, implementation or the use of AI in clinical practice.

Read report two - Developing healthcare workers’ confidence in AI

How to get involved

We have a Community for Racial and Ethnic Equity in AI on the Future NHS platform.

The purpose of this community of practice is to bring together researchers, innovators, healthcare practitioners, civil society groups and members of the public to:

  • facilitate connections that benefit the delivery and impact of relevant research, including making international links
  • convene ‘Insight sessions’ for researchers to share developments in their work with wider audiences, including the public
  • disseminate early research findings and elicit constructive feedback and support
  • share successes, challenges and lessons learned as part of the research process with one another.

We welcome you to join our NHS AI Virtual Hub and become a member of our community of practice.
