Artificial intelligence could worsen health inequality for ethnic minorities

Technology systems rely on the data collated by studies that may be skewed by racial bias

A patient undergoing treatment with Ethos, a machine that uses artificial intelligence to deliver a prescribed dose to tumours. PA

Artificial intelligence could widen health inequalities for minority ethnic groups despite its potential to revolutionise health care, a report warns.

A paper by Imperial College London highlighted the opportunities and barriers for AI to improve the health of the UK’s minority ethnic groups.

Researchers recognised AI's potential in the diagnosis and treatment of diseases such as skin cancer. They also argued that if key challenges in the technology are not addressed, AI could backfire.

AI systems are created by combining large amounts of data, for example from research studies or the internet. The information is then used to “train” a computer program or algorithm to make decisions based on the data.

For example, AI algorithms can use this data to create “risk scores” that predict which patients are likely to develop certain diseases in the future.
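To make the idea concrete, a risk score of this kind is often just a weighted combination of patient measurements squashed into a 0-to-1 probability. The sketch below is a minimal, hypothetical illustration; the feature names, weights and numbers are invented for this example and are not taken from any real clinical model.

```python
import math

def risk_score(features, weights, bias):
    """Return a 0-1 risk score: a weighted sum of patient
    features passed through a logistic (sigmoid) function."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights a system might have "learned" from training data,
# e.g. for scaled age, scaled blood pressure and a smoker flag.
weights = [0.8, 0.5, 1.2]
bias = -2.0

# A hypothetical patient's scaled measurements.
patient = [0.6, 0.4, 1.0]
print(round(risk_score(patient, weights, bias), 3))
```

In a real system the weights are fitted to large training datasets, which is exactly why the composition of those datasets matters so much.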

If much of this data is unrepresentative of minority ethnic groups and focuses predominantly on, for example, white participants, then these systems are more likely to make decisions that exclude diverse communities, the researchers say.
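The mechanism can be shown with a toy simulation, assuming (purely for illustration) that disease presents with lower measurement values in an underrepresented group. A decision threshold tuned on the majority group then misses far more true cases in the other group. All distributions and numbers below are invented.

```python
import random

random.seed(0)

def simulate(mean_healthy, mean_ill, n):
    """Generate (measurement, has_disease) pairs for one group."""
    data = [(random.gauss(mean_healthy, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(mean_ill, 1.0), 1) for _ in range(n)]
    return data

# Group A dominates the training data; in group B, disease presents
# with lower measurement values (an assumption made for this sketch).
group_a = simulate(mean_healthy=0.0, mean_ill=3.0, n=500)
group_b = simulate(mean_healthy=0.0, mean_ill=1.5, n=500)

# "Train" a screening threshold on group A only:
# the midpoint between its healthy and ill class means.
threshold = 1.5

def miss_rate(data):
    """Fraction of genuinely ill patients the threshold fails to flag."""
    ill = [x for x, y in data if y == 1]
    return sum(1 for x in ill if x < threshold) / len(ill)

print(f"missed cases, group A: {miss_rate(group_a):.0%}")
print(f"missed cases, group B: {miss_rate(group_b):.0%}")
```

The same threshold that works well for the well-represented group systematically under-diagnoses the other, which is the pattern the report warns about.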

They recommend improving diversity in the AI industry and in academia, and developing legislation and regulation to reduce bias both in data and in the systems that harness it.

Health inequalities experienced by minority ethnic groups could worsen, they warn, if current challenges such as biased algorithms, poor data collection and a lack of diversity in research and development are not urgently addressed.

Minority ethnic groups generally experience poorer health than the wider population, as emphasised by the Covid-19 pandemic.

The report presents evidence of this racial bias in AI, demonstrating how minority ethnic groups can be underserved by technology.

For example, facial recognition systems have been shown to be up to 19 per cent less accurate at recognising images of black men and women than images of white individuals.

Such bias is also observed in AI when used in the detection and treatment of health conditions such as skin cancer. Images of white patients are predominantly used to train algorithms to spot melanoma, which could lead to worse outcomes for black people through missed diagnoses.

Dr Saira Ghafur, digital health lead at Imperial's Institute of Global Health Innovation, said: “AI has tremendous potential for healthcare system delivery. However, our white paper shows how it can exacerbate existing health inequities in minority ethnic groups. By working across government, health care and the technology sector, it is crucial we ensure that no one is left behind.”

Lord James O’Shaughnessy, visiting professor at the Institute of Global Health Innovation, said: “Tackling health inequality is one of the major challenges of our time. Advances in AI and machine learning give us new tools to tackle this challenge, but our enthusiasm must be tempered by a realistic appraisal of the risks of these technologies inadvertently perpetuating inequalities.”

Based on this research, the scientists made a series of recommendations to better enable AI for minority ethnic communities.

These include involving patients and the public in all areas of AI technology development; creating governance systems, legislation and regulation for AI that protect data and citizens’ rights; and developing a regulatory framework to ensure algorithms are tested on, and appropriate for, minority ethnic groups, to reduce bias in data sets.

Updated: February 24, 2022, 12:24 PM