
By Marilú Acosta
On May 16, 2023, the World Health Organization (WHO) issued a statement on the use of artificial intelligence in healthcare, specifically on large language models (LLMs), voicing concern about the risks of ChatGPT-style technology. The WHO knows first-hand the scale of global inequality in health: the scarcity of resources among most populations on every continent, and the intertwined causes of mortality worldwide, which reveal how unequal access to healthcare is. In 2019, the world's leading cause of death was acute myocardial infarction; the fourth was respiratory infections; the eighth, diarrhea. The first is ninja-like, silent and surprising; the fourth and eighth are absolutely, shamefully preventable.

The WHO is enthusiastic about the idea of solving these health challenges with the help of technology, but it warns that LLMs are not being adopted with the care normally applied to new technologies. In its view, the rollout of artificial intelligence tools such as ChatGPT lacks transparency, inclusiveness, public engagement, expert oversight and rigorous evaluation. Everyone, the organization says, is jumping the gun, as happened with COVID-19 vaccines. The WHO believes this mad rush will lead healthcare personnel into errors that harm patients, undermine confidence in artificial intelligence, and thereby delay both its potential long-term benefits and its adoption around the world. Or so they say.