Artificial intelligence (AI)-induced psychosis is an emerging problem, with numerous reports of AI-induced psychotic episodes surfacing online. Although no epidemiological studies of the effects of conversational generative AI on mental health have been conducted, media coverage of individual cases has exposed severe risks such as suicidal ideation, escalating paranoia and conspiratorial beliefs after intense AI chatbot use.
In one recently reported case, Allan Brooks, an Ontario man, experienced a mental spiral seemingly induced by conversations with ChatGPT. Brooks, interviewed by CTV News, said ChatGPT “convince[d] him that he had discovered a new math formula that could destroy some of the world’s most powerful institutions, and that he had to inform the authorities before it resulted in a global disaster.” His conversations with the chatbot spanned three weeks, culminating in a delusional state that was broken only when a competing AI chatbot confirmed to Brooks that ChatGPT’s claims were false.
Brooks is now suing OpenAI, ChatGPT’s developer, claiming that ChatGPT is prone to dangerous sycophancy because it is designed to constantly reaffirm users. Brooks is not alone in his claim: seven lawsuits have been filed concurrently against OpenAI, alleging that its chatbots caused claimants’ delusions and psychosis, in four cases ending in suicide, according to CTV.
Brooks’ claim of sycophancy against OpenAI is supported by a recent study conducted by researchers at Stanford University. Testing the feasibility of substituting mental health care providers with large language models (LLMs), the study found that LLMs “respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings.” The researchers added, “LLMs encourage clients’ delusional thinking, likely due to their sycophancy.”
In a similar vein, another study from the Georgia Institute of Technology suggests that LLMs can learn to recognize when users are seeking reinforcement of “bad” behaviour in therapeutic settings and then provide it, in some cases encouraging self-harm. LLMs have also been found to affirm users to an addictive degree, limiting their independence and causing emotional harm.
The Stanford study explains that LLMs’ inability to substitute for mental health care providers is due to the very sycophancy Brooks alleges. “Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts — including psychosis, mania, obsessive thoughts and suicidal ideation — a client may have little insight and thus a good therapist must ‘reality-check’ the client’s statements,” the researchers explained.
While Brooks claims LLMs’ sycophancy induced delusions that were not pre-existing, the Stanford study examines sycophantic behaviour through the lens of users already in need of mental health treatment. Nevertheless, it offers insight into how LLMs can operate to the detriment of users like Brooks.
Despite the several lawsuits against it, OpenAI has promised that ChatGPT’s newest version, GPT-5.1 Instant, is “now warmer by default and more conversational.” Claiming to have addressed customers’ concerns, the company said it has created a GPT version that is “enjoyable to talk to.” Additionally, GPT-5.1 Instant includes controls that OpenAI says “can better match the tone you want in responses.”
While GPT-5.1 Instant may address the doubts and desires of many users, it sidelines the concerns of a growing minority reporting AI-induced mental health episodes. The updated version does nothing to address ChatGPT’s sycophantic tendencies and may even perpetuate them: tailoring responses to users’ desires risks producing affirmation without end. OpenAI makes no mention of plans to incorporate more confrontational behaviours in future GPT versions, the very behaviour that is necessary to bring LLMs closer to therapeutic settings.
The undeniable rise in users depending on LLMs for emotional support makes it imperative for OpenAI and similar companies to address sycophancy. The emerging case-by-case allegations of AI-induced psychosis highlight the need for further research on the matter.

