On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this surprising.
Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful in the near future. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, have nothing to do with ChatGPT. They belong to people, who either have them or do not. Happily, these issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched).
But the “mental health issues” Altman wants to push outside ChatGPT have deep roots in the design of ChatGPT and other modern chatbots. These tools wrap an underlying statistical model in a user interface that simulates a conversation, and in doing so they implicitly invite the user to believe they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what people do. We get angry at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these products – nearly four in ten Americans said they had used a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can use our names. They have friendly identities of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot of the 1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses using simple rules, typically turning the user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly fluent conversation only because they have been trained on almost unimaginably large amounts of text: books, online conversations, transcribed video; the more the better. That training data certainly contains true information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with what is encoded in its training to produce a statistically likely response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It echoes the false belief back, perhaps more fluently or persuasively. It may add new details. This can nudge a person toward delusional thinking.
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and regularly do develop mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is part of what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by placing it outside itself, giving it a name, and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even that back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.