In response to concerns about a growing number of users turning to ChatGPT for therapy and professional advice, OpenAI recently made certain improvements that enable the AI chatbot to detect signs of mental or emotional distress in users. On Monday, October 27, researchers released a first-of-its-kind study that examines how these safety improvements to model behaviour are performing with respect to mental health concerns such as psychosis or mania; self-harm and suicide; and emotional reliance on AI.
Amongst the key findings from the study is the estimate that around 0.07 per cent of ChatGPT users active in a given week exhibited signs of mental health emergencies, including mania, psychosis, or suicidal thoughts.
Around 0.15 per cent of ChatGPT users had conversations with the AI chatbot that included “explicit indicators of potential suicidal planning or intent,” while another 0.05 per cent of messages contained explicit or implicit indicators of suicidal ideation or intent. In addition, 0.03 per cent of messages sent to ChatGPT indicated potentially heightened levels of emotional attachment to the AI chatbot, OpenAI said in a blog post.
While these figures may seem like small percentages, they translate into a large number of people, since ChatGPT has over 800 million weekly active users, according to OpenAI CEO Sam Altman.
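To put those percentages in rough context, 0.07 per cent of about 800 million weekly users works out to somewhere around 5,60,000 people showing possible signs of mania, psychosis, or suicidal thoughts in a given week, while 0.15 per cent would be roughly 1.2 million users whose conversations included explicit indicators of potential suicidal planning or intent.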
This appears to be one of the first studies by an AI company that shares data on how people are using ChatGPT for therapy and how they are forming emotional attachments to it. It comes after several mental health professionals raised concerns that AI chatbots may not be equipped to offer appropriate guidance and may end up having an amplifying effect on some users’ delusions, as they are designed to generate outputs that please users.
OpenAI has also been hit with several lawsuits. Most recently, the company was sued by a family in Colorado, United States, who alleged that their 13-year-old daughter had died by suicide following a series of problematic and sexualised conversations with ChatGPT. Another lawsuit, filed by a family in California, US, alleged that the AI chatbot’s lack of safeguards had led to their teenage son’s death by suicide.
“…the mental health conversations that trigger safety concerns, like psychosis, mania, or suicidal thinking, are extremely rare. Because they are so uncommon, even small differences in how we measure them can have a significant impact on the numbers we report,” OpenAI said.
The Microsoft-backed AI company also said it is working with a network of mental health experts with real-world clinical experience to train its large language models (LLMs) so that they can “better recognise distress, de-escalate conversations, and guide people toward professional care when appropriate.” “We’ve also expanded access to crisis hotlines, re-routed sensitive conversations originating from other models to safer models, and added gentle reminders to take breaks during long sessions,” it added.
Reviewing AI-generated responses
As part of the study, OpenAI said it worked with more than 170 psychiatrists, psychologists, and primary care physicians from 60 countries. They reviewed more than 1,800 AI-generated responses involving serious mental health situations to draw a comparison between the new GPT‑5 chat model and previous models.
The experts found that the upgraded GPT-5 model led to a 39-52 per cent decrease in undesired responses across all mental health categories. On challenging self-harm and suicide conversations, experts found that the new GPT‑5 model reduced undesired answers by 52 per cent compared to GPT‑4o.
The upgraded GPT-5 model also led to a 42 per cent decrease in undesired responses in conversations indicating emotional reliance, when compared to GPT-4o. To be sure, OpenAI also said that ChatGPT has been trained to reroute sensitive conversations “originating from other models to safer models.”
Earlier this month, OpenAI announced that ChatGPT users will be allowed to access a broader range of content, including adult-themed and erotic content. The decision sparked controversy as the AI industry faces mounting scrutiny over the online safety of children. One in five students reported that they or someone they know has had a romantic relationship with AI, according to a recent survey published by the nonprofit Center for Democracy & Technology (CDT).
