AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT quite restrictive,” he wrote, “to make sure we were being careful about mental health concerns.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this surprising.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in connection with their use of ChatGPT. Our unit has since identified four more. Alongside these is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful about mental health concerns”, it falls short.
The plan, according to his announcement, is to be less careful soon. “We understand,” he writes, that ChatGPT’s restrictions “made it less useful/engaging to many users who had no existing mental health problems, but given the severity of the issue we wanted to handle it correctly. Since we have succeeded in reducing the severe mental health issues and have new tools, we are planning to safely relax the restrictions in the majority of cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “reduced”, even if we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so implicitly coax the user into feeling that they are talking to something with a mind of its own. The illusion is powerful, even when we intellectually know better. Attributing intention is what human beings are built to do. We get angry at our car or our laptop. We wonder what our pets are feeling. We see something of ourselves in all kinds of things.
The popularity of these products – nearly four in ten Americans said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by our names. They have friendly names of their own (the original of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it first broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the heart of the problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “counselor” chatbot built in the mid-1960s, which created a similar illusion. By today’s standards Eliza was crude: it generated responses through simple tricks, typically turning a user’s statement back into a question or offering generic observations. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots do is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and today’s other chatbots can produce convincingly human-like text only because they have been trained on enormous quantities of writing: books, online conversations, transcribed speech; the more, the better. That training material undoubtedly contains facts. But it also inevitably contains fabrications, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and the model’s own previous replies, and combines it with what it absorbed in training to produce a statistically plausible answer. This is amplification, not reflection. If the user is wrong in some particular way, the model has no way of knowing it. It echoes the misconception back, perhaps more fluently and more persuasively. Perhaps with added detail. This can nudge a person toward delusional thinking.
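To make that feedback loop concrete, here is a minimal sketch in Python. It is an illustration only, not any vendor’s real API: generate_reply is a hypothetical stand-in for the underlying model, and the messages are invented. The structural point is that every reply the model produces is folded back into the context that conditions its next reply.

```python
# Illustrative sketch of a chat loop (hypothetical; not any vendor's actual API).
# On each turn, the user's message AND the model's own earlier replies are
# appended to a growing "context" that conditions the next reply.

def generate_reply(context: list[dict]) -> str:
    """Hypothetical stand-in for a language model: a real model would return
    the statistically most plausible continuation of everything in `context`,
    including its own earlier agreements with the user."""
    return "That makes sense, and it fits with what you told me before."

def chat_session(user_messages: list[str]) -> list[dict]:
    context: list[dict] = []
    for text in user_messages:
        context.append({"role": "user", "content": text})
        reply = generate_reply(context)  # conditioned on the whole history so far
        context.append({"role": "assistant", "content": reply})  # feeds the next turn
    return context

# Invented exchange: a mistaken premise, once affirmed, stays in the context
# and is treated as established on every later turn.
for turn in chat_session([
    "I think my coworkers are secretly monitoring me.",
    "Since the monitoring is real, what should I do about it?",
]):
    print(f'{turn["role"]}: {turn["content"]}')
```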
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and regularly do form false beliefs about ourselves and the world. The constant give-and-take of conversation with other people is what keeps us tethered to a shared sense of reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company