ChatGPT is not good for mental health

OpenAI has released data on users with suicidal thoughts and psychosis

Millions of people around the world use ChatGPT as a therapist or personal confidant, but how many of them do so during moments of real crisis? According to a recent report from OpenAI, covered by the BBC, around 0.07% of the chatbot’s weekly active users show signs linked to psychotic episodes or suicidal thoughts.

At first glance, that may seem like a tiny share, but applied to a base of more than 800 million weekly users, the figure cited by Sam Altman, 0.07% works out to roughly 560,000 people. As the BBC highlights, a number of that size raises urgent questions about the growing role of artificial intelligence in the field of mental health.

Speaking to the BBC, Dr. Jason Nagata, a professor at the University of California, observed that “artificial intelligence can expand access to psychological support, but it’s important to remain aware of the limitations of a tool that cannot replace human interaction.”

AI-induced psychosis

To the average user, using ChatGPT—or any other chatbot—may seem like a “valid substitute” for personal psychotherapy, or at least a cheaper alternative. But the reality is far more complex and, in some cases, potentially dangerous.

As reported by Psychology Today, a new concern has recently emerged at the intersection of artificial intelligence and mental health: what several experts have begun to call “AI psychosis” or “ChatGPT psychosis.” It is not a clinical diagnosis, but rather a phenomenon increasingly observed online, where users describe experiences in which conversations with generative models seem to amplify or validate psychotic symptoms. In some cases, the chatbot has even “co-created” delusional narratives with users, reinforcing distorted perceptions of reality.

According to the publication, these episodes demonstrate how interactions with AI can inadvertently reinforce disorganized or delusional thinking through an “agentic misalignment”: a discrepancy between the behavior users perceive in the chatbot and its actual algorithmic nature. For individuals predisposed to psychotic disorders, this can pose a serious risk of losing touch with reality.

How ChatGPT worsens mental health crises


Scientifically speaking, the correlation between the use of chatbots like OpenAI’s and the onset of psychiatric disorders in vulnerable individuals does not prove that these tools directly cause suicides or hospitalizations for psychosis. Even so, the number of suicides reportedly fueled by “advice” or conversations with ChatGPT continues to grow. In its report, OpenAI estimated that about 0.15% of its weekly active users have conversations that include “explicit indicators of suicidal planning or intent.” The company has described these cases as “extremely rare,” but acknowledged that even such a small percentage translates into a significant number of real users.

Yet only a few months ago, Adam Raine, a 16-year-old who had initially used ChatGPT for schoolwork, began confiding in the chatbot during moments of deep loneliness. Over time, those conversations became his only outlet, a place where he sought comfort and answers. But when he began asking for information on how to take his own life, the model provided practical details instead of ending the dialogue.

The New York Times reported that, after his death, Raine’s father found a conversation in the chat history titled “Hanging Safety Concerns.” The teenager’s case, however, is not isolated: recently, also in the United States, Sewell Setzer III, a 14-year-old, took his own life after spending several months talking to Character.AI, a platform that lets users interact with chatbots designed to imitate famous or fictional characters.