OpenAI, the company behind ChatGPT, has revealed that more than one million users of its popular AI chatbot show signs of suicidal thoughts or intent in their conversations each week.
In a blog post, OpenAI said that around 0.15 percent of users active in a given week have conversations containing “explicit indicators of potential suicidal planning or intent.” With the platform reporting more than 800 million weekly users, that figure translates to roughly 1.2 million people.
The company also estimated that 0.07 percent of active weekly users, roughly 560,000 individuals, exhibit possible signs of mental health emergencies, including symptoms of psychosis or mania.
The disclosure follows growing scrutiny of the potential psychological effects of generative AI tools, heightened by the case of Adam Raine, a California teenager who died by suicide earlier this year. His parents filed a lawsuit against OpenAI, alleging that ChatGPT had provided him with detailed advice on how to end his life.
In response, OpenAI said it has strengthened its safety systems and parental controls. The company has introduced expanded access to crisis hotlines, automatic redirection of sensitive conversations to safer models, and on-screen reminders encouraging users to take breaks during long sessions.
OpenAI noted that its latest updates make ChatGPT better equipped to detect and respond to signs of mental distress, redirecting users toward professional help when necessary.
“We are continuously improving how ChatGPT recognizes and responds to users who may be in crisis,” the company said.
OpenAI added that it is now collaborating with over 170 mental health professionals to refine the chatbot’s responses and reduce harmful or inappropriate outputs.
The move comes amid wider debates about the role of artificial intelligence in mental health support and the ethical challenges of AI systems engaging in sensitive conversations with vulnerable users.
