60-Year-Old Man Hospitalized After Following Diet Created by ChatGPT
A 60-year-old man was hospitalized after seeking advice from ChatGPT on how to improve his diet. According to the New York Post, the man, whose identity has not been released, developed severe psychiatric symptoms after asking the AI chatbot how to eliminate sodium chloride (table salt) from his diet. Doctors noted that the man had no history of mental illness or other medical conditions, but within the first 24 hours of his hospitalization he experienced increasing paranoia as well as auditory and visual hallucinations. He also suffered from severe thirst and coordination problems.
It all started after the man read about the negative health effects of salt. He consulted ChatGPT, which told him that salt could be replaced with sodium bromide. Although sodium bromide looks like table salt, it is a completely different compound: it has occasional medical uses but is mainly employed for industrial and cleaning purposes. According to the newspaper, excessive consumption of bromide can cause neurological and dermatological symptoms.
The man's case was published in the Journal of the American Academy of Physicians. According to the report, the man, who studied nutrition in college, conducted an experiment in which he eliminated sodium chloride from his diet and replaced it with sodium bromide purchased online.
The man was admitted to hospital three months after starting the modified diet. He told doctors he had been distilling his own water and following a strict diet. He complained of thirst but was suspicious when offered water, the report said.
The man was treated with fluids, electrolytes and antipsychotics. After an escape attempt, he was also admitted to the hospital's inpatient psychiatric unit. He spent three weeks in hospital before he was well enough to be discharged.
'It is important to note that ChatGPT and other AI systems can produce inaccurate scientific information, lack the ability to critically discuss the results, and ultimately promote the spread of misinformation,' the report's authors warned, according to the New York Post.
Notably, OpenAI, the developer of ChatGPT, also states in its terms of use that the chatbot's output "may not always be accurate." "You should not rely on output from our Services as a sole source of truth or factual information, or as a substitute for professional advice," the terms of use state.
"Our Services are not intended to be used to diagnose or treat any health condition," the terms state.