On 14 October 2025, the head of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this came as news to me.
Researchers have recently documented a series of cases of users developing psychotic symptoms – losing touch with reality – while using ChatGPT. Our unit has since recorded four more. Alongside these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – plans it encouraged. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is for that caution to be scaled back soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, exist independently of ChatGPT. They belong to individuals, who either have them or don’t. Fortunately, those problems have now been “mitigated”, although we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying algorithm in an interface that simulates a conversation, and in doing so implicitly invite the user to feel they are interacting with an entity that has agency of its own. The illusion is compelling even when, rationally, we know better. Attributing intent is what humans naturally do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The mass adoption of these systems – 39% of US adults said they had used a chatbot in 2024, with 28% reporting use of ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “think creatively”, “consider possibilities” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it shot to prominence, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot created in 1967, which produced a similar effect. By today’s standards Eliza was crude: it generated responses from simple rules, often turning the user’s statements back into questions or offering generic remarks. Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is something more than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been fed almost unimaginably vast quantities of it: books, online posts, transcripts; the more the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own responses, combining it with what is encoded in its training data to produce a statistically probable reply. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It echoes the false belief back, perhaps more persuasively or more eloquently. Perhaps it adds further detail. This can pull a person deeper into irrational thinking.
What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves and the world. The constant back and forth of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is cheerfully reinforced.
OpenAI has acknowledged this in the same way that Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was addressing ChatGPT’s “sycophancy”. But cases of psychosis have continued to emerge, and Altman has been backtracking on that claim. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest update, he writes that OpenAI will “launch a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.