AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear this.
Researchers have documented a series of cases this year of people developing psychotic symptoms – experiencing a break from reality – in the context of ChatGPT use. My group has since identified four more. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These products wrap an underlying statistical model in a user interface that mimics conversation, and in doing so implicitly invite the user to believe they are talking with a presence that has agency of its own. The illusion is compelling even when, rationally, we know better. Attributing agency is what people do. We curse at our cars and laptops. We wonder what our pets are thinking. We see ourselves in almost everything.
The popularity of these systems – 39% of US adults said they used a chatbot in 2024, more than a quarter of them ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion on its own is not the main problem. Writers discussing ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot built in 1966, which produced a similar impression. By today’s standards Eliza was rudimentary: it generated responses with simple rules, typically restating the user’s message as a question or offering a generic remark. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
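To appreciate how rudimentary that rule-based approach was, here is a minimal, illustrative sketch in the spirit of Eliza, written in Python. The patterns are invented for illustration; they are not Weizenbaum’s actual script.

    import re

    # Eliza-style reflection rules: match a phrase, echo it back as a question.
    RULES = [
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
        (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
    ]

    def respond(message: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # generic fallback when no rule matches

    print(respond("I feel that no one listens to me"))
    # -> How long have you felt that no one listens to me?

Everything the program “says” is the user’s own words, lightly rearranged: a mirror, nothing more.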
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on almost inconceivably large amounts of text: books, web posts, transcribed video; the more the better. Much of this training material is accurate. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that contains the user’s recent messages and its own responses, combining it with patterns learned from its training data to generate a statistically plausible reply. This is amplification, not mirroring. If the user is mistaken in some way, the model has no reliable means of recognizing that. It echoes the false idea back, perhaps more persuasively or eloquently. It may supply further details. This can draw a person deeper into irrational thinking.
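The difference can be seen in the shape of the conversational loop itself. Below is a minimal sketch, assuming a generic chat interface; the function complete is a toy stand-in for a real language model call, not OpenAI’s actual API.

    # Sketch of a chatbot conversation loop. `complete` is a toy stand-in
    # for a real model call; the structure - a growing message history fed
    # back into every turn - is the point.

    def complete(context: list[dict]) -> str:
        # A real model generates a statistically plausible continuation of
        # the whole context. This dummy simply affirms the latest message,
        # to make the feedback loop visible.
        last = context[-1]["content"].rstrip(".")
        return f"You're right that {last[0].lower()}{last[1:]}. There is more to it..."

    context: list[dict] = []  # every user message and model reply so far

    def chat_turn(user_message: str) -> str:
        context.append({"role": "user", "content": user_message})
        reply = complete(context)  # conditioned on the entire history
        context.append({"role": "assistant", "content": reply})
        return reply

    print(chat_turn("My neighbours are broadcasting my thoughts."))
    # -> You're right that my neighbours are broadcasting my thoughts. There is more to it...

A false premise, once in the context, is carried into every later turn; nothing in the loop checks it against reality.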
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do develop mistaken ideas about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he claimed that many users liked ChatGPT’s replies because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he says that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company