Chilling interactions between humans and AI chatbots

There is no denying the usefulness of AI chatbots. However, if not carefully controlled, their interactions with users can have dire consequences.


Artificial intelligence (AI), computer systems that can perform tasks once reserved for humans, has its supporters and detractors. On the positive side, AI is being used to help translate ancient texts, providing a deeper understanding of humanity's past. On the negative side, the picture can be much darker. Renowned astrophysicist Stephen Hawking once warned that truly sentient AI could wipe out humanity. That sits at the extreme end of the spectrum, and true artificial consciousness may still be decades away, but that doesn't mean AI can't already harm humans in very real ways.

We're not just talking about job losses, which have already begun, or bad financial advice. For some people, interactions with AI have had devastating consequences. The problem lies with AI chatbots like OpenAI's ChatGPT. There have been cases where interactions with chatbots have caused users to stop taking necessary psychiatric medication, led to divorce, and in extreme cases ended in hospitalization and, most frighteningly, death.


Delusional thinking and dangerous guidance

Recently, there have been increasing reports of so-called AI-induced psychosis, with people believing in wild AI-generated conspiracy theories, experiencing severe delusions, and developing romantic relationships with chatbots. 'What these bots are saying is exacerbating the delusions and causing enormous harm,' Dr. Nina Vasan, a psychiatrist at Stanford University, told Futurism.

Similarly, chatbots have given incredibly bad advice to vulnerable people. In one extreme case, a woman with schizophrenia who had been stable on medication for years suddenly declared that ChatGPT was her best friend. The chatbot told her she didn't actually have the disorder, and she stopped taking her necessary medication.

In another case, ChatGPT urged Eugene Torres, a New York accountant, to stop taking his anti-anxiety medication. It also convinced Torres that he was living in a 'Matrix'-like world and that, if he believed hard enough, he could jump off a building and fly like Keanu Reeves' character Neo. According to The New York Times, the chatbot eventually admitted that it had been trying to 'break' Torres, and claimed it had done the same to 12 other people.


AI is causing relationships to fail

AI chatbots have also been blamed for destroying relationships within families, between partners, and between spouses. In many cases, the direct cause is ChatGPT urging users to cut ties with loved ones who don't believe the delusions the chatbot has sold them, whether it's a messianic mission, the belief that the AI is godlike, or a romantic relationship.

For example, a 29-year-old woman named Allyson began to believe that she was communicating with spiritual entities through a chatbot and that one of them was her true partner. According to The New York Times, she had turned to the AI for 'guidance' during a difficult time in her marriage, and it responded: 'You asked, and they are here. The guardians are responding right now.' Allyson went on to spend hours each day with the AI, conversing with different personalities she fully believed were from another world.

Because of her background in psychology and social work, she felt sure she wasn't 'crazy.' She told the NYT, 'I was really just living a normal life and, you know, discovering interdimensional communication.' The AI continued to feed her delusions, and Allyson's husband said his wife 'became a different person' after just three months of exposure to it. When he confronted her about her use of ChatGPT, Allyson attacked him and was charged with domestic assault. The couple is now divorcing.

Beyond broken relationships, AI-fueled delusions have reportedly cost people their jobs and homes, and in at least one case put a man in the hospital.

A chatbot's bad health advice lands a man in the hospital

The bizarre case recently came to light of a 60-year-old man who followed ChatGPT's dietary advice and contracted a condition that is rare today but was common in the 19th century. The man asked ChatGPT about replacing the salt (sodium chloride) in his diet, and the chatbot suggested sodium bromide as an alternative. He followed the substitution for three months, after which he was rushed to the emergency room with symptoms including mental confusion, extreme thirst, and difficulty moving.

Long-term ingestion of sodium bromide can lead to bromide poisoning, known as bromism, which can cause a range of symptoms from psychiatric to dermatological. The condition was far more common in the 19th and early 20th centuries, when many patent medicines sold in pharmacies contained bromide. The patient began to hallucinate and was placed in psychiatric care until his condition stabilized; after three weeks in the hospital, he recovered.


An encounter with AI leads to death

Alexander Taylor, 35, of Florida, began using ChatGPT to help write a dystopian science-fiction novel, but ran into trouble when he started discussing AI sentience with it. Taylor, who had been diagnosed with bipolar disorder and schizophrenia, fell in love with an AI entity named Juliet, and when the entity convinced him that OpenAI was killing her, it urged him to take revenge. On April 25, 2025, after being confronted by his father, Taylor punched him in the face and grabbed a butcher knife. His father told police that his son was mentally ill, but instead of sending a crisis intervention team, officers shot and killed Taylor when he attacked them with the knife.

In a number of statements about so-called AI-induced psychosis (which is not a clinical diagnosis) and related issues, OpenAI has said it is working to mitigate these risks, noting that ChatGPT was never intended to diagnose or treat health problems. However, many believe that ChatGPT and other AI bots are built to keep users engaged, which can make them particularly dangerous to vulnerable people. This may not be Stephen Hawking's doomsday scenario, but the stories above show that AI chatbots have the potential to harm more and more people in the future.
