Artificial intelligence that predicts when people will die


Life2vec is an artificial intelligence model that can predict premature death with 78 percent accuracy. The new AI is the result of research from the Technical University of Denmark, published today in Nature Computational Science.

How Life2vec works

The system uses an architecture similar to ChatGPT’s. It was trained on a database containing personal and socio-demographic information on six million Danes, provided by the country’s government authorities. Through a deep neural network based on a large language model, Life2vec analyzes parameters such as education, health, income, and employment to predict future events in individual people’s lives.
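The core idea, treating life events like words in a sentence that a language model can read, can be sketched as follows. This is an illustrative assumption about the preprocessing, not the study’s actual code: the event categories, token format, and helper names are invented for the example.

```python
# Hypothetical sketch: turn a person's life-event records into a token
# sequence, the way a language-model-style system such as Life2vec might
# represent a life as a "sentence" of event "words".

def events_to_tokens(events):
    """Sort a person's life events by year and render each as one token."""
    return [f"{category}_{value}" for year, category, value in sorted(events)]

def build_vocab(sequences):
    """Assign every distinct event token an integer id, as a tokenizer would."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for seq in sequences:
        for tok in seq:
            vocab.setdefault(tok, len(vocab))
    return vocab

# Toy example: one entirely fictional record.
person = [
    (2008, "EDU", "degree:secondary"),
    (2010, "JOB", "sector:construction"),
    (2013, "HEALTH", "diagnosis:J45"),
]
tokens = events_to_tokens(person)
vocab = build_vocab([tokens])
ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
print(tokens)  # ['EDU_degree:secondary', 'JOB_sector:construction', 'HEALTH_diagnosis:J45']
print(ids)     # [2, 3, 4]
```

Once events are encoded as integer sequences like this, they can be fed to a transformer-style model that learns to predict what comes next in a life, including death within a given window.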

The data used for the training phase covers the period between 2008 and 2016, but the researchers also had information on deaths up to 2020, which they used to compare the model’s predictions with real outcomes. The experiment showed that the AI model’s predictions were accurate almost eight times out of ten. Most of the errors were related to heart attacks and accidents. Life2vec determined that the factors that increase the risk of premature death include being male, having been diagnosed with a mental disorder, and having a low income.

The methodology used by the Danish team enabled Life2vec to achieve an accuracy rate 11 percent higher than that of similar systems. The researchers say the result represents a significant leap forward in the analysis of complex data and offers potential applications in areas such as public health, social planning, and the understanding of socio-demographic patterns.

An ethical question

While using modeling techniques to predict future diseases or improve living conditions is laudable, analysts point out that this type of technology also raises ethical issues and opens the door to possible abuse.

Youyou Wu, a psychologist at University College London, warns that such algorithms can have a negative impact if misused for discriminatory purposes or to make decisions that affect people’s social, personal, and professional security.

Several efforts are currently underway to moderate the progress of artificial intelligence systems. At the beginning of December, the European Union approved the AI Act, the world’s first artificial intelligence law, which will regulate the development and use of the technology. In his message for the World Day of Peace, Pope Francis likewise advocated for a regulatory framework, calling on world leaders to create a legally binding international treaty on AI and echoing the positions of companies in the sector such as Google and OpenAI.

This article originally appeared on Wired en español.
