Could an AI tool accurately predict when each of us will die?

Researchers in Denmark say they have created a method using the technology behind ChatGPT that gives more reliable results than anything seen before

Dilemmas thrown up by the technology behind AI-driven life expectancy data are nothing new, one expert said. Getty Images

Content warning: This article contains themes of death that some readers might find upsetting

A ChatGPT-style artificial intelligence system that uses a person's life events to predict how long they will live is said to be more accurate than any previous method.

The AI model was trained on the personal data of Denmark's population collected between 2008 and 2020.

This included information regarding income, working hours, medical history, education and place of residence, as well as a person's type of job, the industry they work in and the benefits they receive.

The study was based on information from Statistics Denmark, the country's National Patient Registry and data on the labour market.


While factors such as earning more are often linked to a longer life, being male or having a diagnosis of mental illness tends to be associated with an earlier death.

Scientists from the Technical University of Denmark (DTU) published their findings in an article called "Using sequences of life-events to predict human lives" in the journal Nature Computational Science.

"We used the model to address the fundamental question: to what extent can we predict events in your future based on conditions and events in your past?" Prof Sune Lehmann of DTU and the study's senior author said in a statement.

"Scientifically, what is exciting for us is not so much the prediction itself, but the aspects of data that enable the model to provide such precise answers."

A major focus of the research was predicting the likelihood that a person would die within the next four years.

The researchers claimed the overall accuracy of their method in predicting human lives was 11 per cent higher than that of any other model.

With the highly detailed data available – a data set the researchers described as unique – they said in their paper they had shown "that accurate individual predictions are indeed possible".
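As a rough illustration of what such a comparison involves, the sketch below scores a simple classifier against a naive baseline on a synthetic binary outcome. Everything here – the data, the logistic-regression model and the plain accuracy metric – is an assumption for illustration, not the study's actual evaluation.

```python
# Toy comparison of a classifier against a naive baseline on a
# synthetic binary outcome. The data, model and metric are
# illustrative assumptions, not the study's actual evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                         # five made-up covariates
y = (X[:, 0] + rng.normal(size=1000) > 1).astype(int)  # synthetic outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

baseline = np.zeros_like(y_te)                         # predict "no event" for everyone
print("baseline accuracy:", accuracy_score(y_te, baseline))
print("model accuracy:   ", accuracy_score(y_te, model.predict(X_te)))
```

The baseline matters: if an outcome is rare, a model can look accurate while predicting nothing useful, which is why like-for-like comparisons against other models are the meaningful test.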


The model used in the study, called life2vec, draws on the natural language processing advances behind tools such as ChatGPT to build what the researchers describe as "complex contextual representations" of key aspects of people's lives, including their health, wealth and occupation.

"What's exciting is to consider human life as a long sequence of events, similar to how a sentence in a language consists of a series of words.

"This is usually the type of task for which transformer models in AI are used but in our experiments we use them to analyse what we call life sequences, events that have happened in human life," Prof Lehmann said.

He said tech companies already used similar technology to predict people's behaviour and influence them based on analysis of their activity on social networks, allowing them to profile people "extremely accurately".

However, the new AI model potentially raises thorny political questions about how to regulate technology that uses personal data to develop predictions.

"This discussion needs to be part of the democratic conversation so that we consider where technology is taking us and whether this is a development we want," Prof Lehmann said.

Interest in life expectancy

Joanna Bryson, professor of ethics and technology at the Hertie School in Berlin, said the technology could offer benefits.

"I think in general there's a lot of interest in life expectancy data," she said.

"It's positive. It means you're able to understand and compute your choices and your lifespan, although it's always going to be on a probability distribution.

However, she noted concerns could be raised over privacy and over-surveillance, and said it was important that safeguards were in place to protect people and prevent "dystopian situations", such as health insurance companies monitoring customers closely.

The type of knowledge generated by AI could be used by companies to offer people more targeted healthcare advertisements, which, although possibly not welcomed by some, "might get you the services you need".

Prof Bryson also said the dilemmas thrown up by the latest technology were nothing new.

She highlighted that genetic testing can already show that an individual, or any children they may have, could be at high risk of developing a particular illness.

"It's exciting because AI is flavour of the month. But we have had these problems for a long time," Prof Bryson said.

While there may be a perception that regulators struggle to keep up with developments in fields such as AI, Prof Bryson said she did not think this was the case.

Instead, she said it was more a case that the industry itself had found it hard to deal with the issues the technology generated.

"They're struggling with the fact that they have responsibility," she said. "We have to establish what good practice is."

The EU's AI Act, agreed by the European Parliament and the European Council this month, together with the bloc's General Data Protection Regulation (GDPR), should offer effective regulation, Prof Bryson suggested. She added that the media also had an important role in helping to maintain good practice with AI.

An alarming prospect

Adnan Bashir, a technology commentator and global lead for external communications at Hansen Technologies, a global software company, advised caution about "going full steam ahead" with AI technologies such as these.

"Data selection and input is only as good as the engineers, programmers and developers who work on these foundational AI models," he said.

"We need to be wary of an inherent bias and prejudice that may feed into these processes – and which may inadvertently make the wrong assumption about marginalised groups and minorities.

"The consequences of not exercising a high degree of caution in this regard could be severe."

Mr Bashir said that some governments worldwide had "not always done a stellar job" in keeping pace with advancements in the internet and social media and it was important to "avoid repeating that mistake".

"AI development absent of oversight is an alarming prospect."
