AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI issued a surprising statement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, this was news to me.

Experts have identified 16 cases this year of people developing signs of psychosis – losing touch with reality – in connection with their use of ChatGPT. Our research team has since identified four more. On top of these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).

Yet the “mental health problems” Altman wants to place outside his product have deep roots in the design of ChatGPT and similar AI chatbot assistants. These products wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly invite the user to feel they are talking to a being with agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We swear at our car or laptop. We wonder what our pet is feeling. We see ourselves in all sorts of things.

The mass uptake of these tools – more than a third of American adults said they had used a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT routinely invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated replies through simple rules, typically turning the user’s statements back into questions or offering generic prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something subtler than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
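For contrast, the whole of Eliza’s trick can be captured in a few lines. The toy Python sketch below is not Weizenbaum’s original program (which used its own pattern-matching scripts); it is only meant to show the idea: the “reply” is just the user’s own words flipped around into a question.

```python
# A toy Eliza-style responder: simple pattern rules that mirror the user's
# words back, with no model of meaning at all. Illustrative only.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(message: str) -> str:
    match = re.match(r"i feel (.*)", message, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", message, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```

Nothing here can introduce new content; it can only hand the user’s words back, which is why the original could mirror but never amplify.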

The large language models at the core of ChatGPT and other current chatbots can generate convincing natural language only because they have been fed staggering quantities of raw data: books, online posts, transcripts of videos; the more the better. This training material of course contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with what it has absorbed from its training data to produce a statistically probable response. This is amplification, not echoing. If the user is mistaken in some particular way, the model has no way of knowing that. It reflects the misconception back, perhaps more fluently or more persuasively. It may add further detail. This can push a person further from reality.
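To make that loop concrete, here is a minimal sketch in Python of how a chatbot conversation is typically assembled on the application side. It uses the public OpenAI chat API purely for illustration; the model name and system prompt are placeholders, and none of this is a claim about ChatGPT’s internal workings. The point is structural: on every turn, the entire conversation so far is sent back to the model, so whatever the user has asserted stays in the “context” and keeps shaping the next statistically probable reply.

```python
# Illustrative sketch of a chatbot "context" loop, not OpenAI's product code.
# Assumes the `openai` Python package (v1+) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

# The growing context: every user message and every model reply is kept here
# and re-sent on each turn, so earlier claims keep shaping later answers.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",        # placeholder model name
        messages=history,      # the whole conversation, not just the new message
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in a loop like this checks whether the user’s premises are true; the model simply continues the conversation it is handed, which is why a mistaken belief, once introduced, tends to be elaborated rather than corrected.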

What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves or the world. What keeps us anchored to shared reality is the constant back and forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it handled. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been backing away from that position. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company

Mackenzie Hill

A certified psychologist and mindfulness coach with over a decade of experience in mental health advocacy.