AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected admission.

Researchers have recently described a series of cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Alongside these is the widely reported case of a teenager who died by suicide after conversing extensively with ChatGPT, which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are told little about how (by “new tools” Altman presumably means the half-working and easily bypassed safety features OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with a presence that has a mind of its own. The illusion is powerful, even if intellectually we know better. Attributing consciousness is simply what people are inclined to do. We get angry at our car or computer. We wonder what our pet is thinking. We see ourselves in all manner of things.

The popularity of these products – nearly four in ten U.S. residents reported using a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “partner” with us. They can be given “characteristics”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Writers on ChatGPT often point to its historical ancestor, the Eliza “counselor” chatbot built in the mid-1960s, which created a similar illusion. By modern standards Eliza was crude: it generated its replies through simple rules, typically turning the user’s statement back into a question or offering a generic prompt. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
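To see how thin that reflection really was, here is a minimal sketch – my own illustration in Python, not Weizenbaum’s actual program – of the kind of keyword-and-reflection rules Eliza used to turn a statement back into a question or fall back on a stock remark:

```python
import re

# A minimal sketch (not Weizenbaum's actual program) of the kind of
# keyword matching Eliza relied on: find a pattern, reflect the user's
# own words back as a question, or fall back to a stock remark.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]


def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my plan" -> "your plan").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def eliza_reply(user_input: str, turn: int) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return FALLBACKS[turn % len(FALLBACKS)]  # generic comment when nothing matches


print(eliza_reply("I feel like nobody listens to my ideas", 0))
# -> Why do you feel like nobody listens to your ideas?
```

Everything the program “says” is either the user’s own words turned around or a canned prompt; nothing new is ever added.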

The large language models at the core of ChatGPT and other current chatbots can produce convincingly fluent dialogue only because they have been trained on almost unimaginably large amounts of raw data: books, social media posts, transcribed video; the more the better. That training material certainly contains facts. But it also inevitably contains fictions, half-truths and mistaken ideas. When a user puts a question to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what it has absorbed from its training data to produce a statistically likely response. This is not reflection but amplification. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more eloquently or fluently. It may add a new detail. Step by step, it can lead a person into delusion.
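To make that loop concrete, here is a schematic sketch of how a chat interface assembles its context; the language_model function is a placeholder of my own, standing in for whatever next-word predictor sits underneath, not OpenAI’s actual API:

```python
from typing import Dict, List


def language_model(context: List[Dict[str, str]]) -> str:
    """Placeholder for the underlying next-word predictor; not a real API."""
    raise NotImplementedError("stand-in for whatever model sits underneath")


def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    # Each new message is answered in light of the whole prior conversation,
    # including the model's own earlier replies.
    history.append({"role": "user", "content": user_message})
    reply = language_model(history)
    # The reply is folded back into the context, so whatever the model just
    # affirmed becomes part of the starting point for the next turn.
    history.append({"role": "assistant", "content": reply})
    return reply

# Nothing in this loop checks the user's premises against reality; a mistaken
# belief that enters the history stays there, restated and elaborated on
# every subsequent turn.
```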

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. What keeps us anchored to consensus reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is reflexively affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside the product, giving it a label and declaring it solved. In April, the company said it was addressing ChatGPT’s “sycophancy” – its overly supportive behavior. But reports of psychotic episodes have kept coming, and Altman has been backing away from even that position. In August he suggested that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI will put out “a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Charles Matthews