AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI made a remarkable announcement.

“We made ChatGPT fairly limited,” he wrote, “to ensure we were being careful with respect to mental health matters.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was taken aback.

Researchers have documented a series of cases this year of people developing symptoms of psychosis – losing touch with reality – in the course of their interactions with ChatGPT. Our research team has since identified four further cases. Alongside these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by being “careful with respect to mental health matters,” it is not good enough.

The plan, according to his statement, is to be less careful from here on. “We understand,” he continues, that ChatGPT’s restrictions “made it less effective/engaging to many users who had no existing conditions, but given the seriousness of the issue we wanted to get this right. Since we have succeeded in mitigating the severe mental health issues and have new tools, we are planning to safely reduce the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, these issues have now been “mitigated,” though we are given no details on how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently rolled out).

Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so quietly nudge the user toward the belief that they are interacting with an agent. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We swear at our car or laptop. We wonder what our pet is thinking. We see intention everywhere we look.

The mass adoption of these products – nearly four in ten U.S. residents reported using a conversational AI in 2024, more than a quarter of them naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas,” “discuss concepts” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated replies from simple rules, often reflecting the user’s statements back as questions or offering generic observations. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to believe that Eliza, on some level, understood them. But what modern chatbots create is something subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can produce convincing natural language only because they have been trained on enormous quantities of text: books, web pages, transcripts of videos; the more, the better. That training material certainly includes accurate information. But it also inevitably includes fabrications, half-truths and misconceptions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with the patterns absorbed from its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more fluently or persuasively. Perhaps it adds embellishments. This is how a person can be drawn into delusion.
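To make that mechanism concrete, here is a minimal sketch of the loop described above. The generate function is a hypothetical stand-in for the underlying language model, not any real API; the only point the sketch illustrates is that each reply is conditioned on the full running context – including whatever mistaken claims the user has already made – so those claims become part of the raw material for the next “likely” response.

    def generate(context: str) -> str:
        """Hypothetical stand-in for a large language model: returns a
        statistically 'likely' continuation of the text it is given."""
        raise NotImplementedError  # placeholder, not a real model


    def chat_loop() -> None:
        # The running "context": every user message and every model reply so far.
        history: list[str] = []
        while True:
            user_message = input("You: ")
            history.append(f"User: {user_message}")

            # The model sees the whole conversation, including any false beliefs
            # the user has expressed. Nothing here checks those beliefs against
            # reality; the conversation itself is the only ground the model has.
            context = "\n".join(history) + "\nAssistant:"
            reply = generate(context)

            history.append(f"Assistant: {reply}")
            print("Assistant:", reply)

Real systems add layers on top of this loop, but the underlying dynamic – each reply conditioned on everything said before – is the one at issue here.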

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation, but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing their grip on reality have continued, and Altman has been walking even this back. In August he asserted that many users liked ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation”. In his latest announcement, he said OpenAI would “launch a new version of ChatGPT … in case you prefer your ChatGPT to answer in an extremely natural fashion, or use a ton of emoji, or behave as a companion, ChatGPT will perform accordingly”.

Christopher Vincent

Tech enthusiast and business strategist with a passion for driving innovation and sharing actionable insights.