
Over a Million ChatGPT Users Talk About Suicide Each Week, Says OpenAI

OpenAI has revealed sobering numbers about how people are using ChatGPT. According to the company, roughly 0.15% of its 800 million weekly active users have conversations that show signs of suicidal planning or intent. That works out to roughly 1.2 million people each week talking to the AI about suicide.
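As a quick sanity check on that headline figure, here is a back-of-the-envelope calculation using the two numbers in OpenAI's disclosure (it assumes the 0.15% rate applies uniformly across the stated 800 million weekly users; the variable names are ours):

```python
# Back-of-the-envelope check of OpenAI's disclosed figures.
weekly_active_users = 800_000_000   # OpenAI's stated weekly active user count
flagged_share = 0.0015              # 0.15% of conversations showing suicidal planning or intent

flagged_users = weekly_active_users * flagged_share
print(f"{flagged_users:,.0f} users per week")  # prints: 1,200,000 users per week
```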


A similar percentage of users show heightened emotional attachment to ChatGPT, and hundreds of thousands display signs of mania or psychosis, OpenAI said. While the company calls these cases “extremely rare,” the sheer scale of ChatGPT’s audience means the numbers are significant.

The disclosure came as part of a broader effort to show how OpenAI is improving the way ChatGPT handles mental-health-related conversations. The company said it consulted over 170 mental health experts while developing its latest model. These experts reportedly found that ChatGPT now responds “more appropriately and consistently” than older versions.

When AI Becomes a Confidant

The rise of AI companions is changing how people express distress. Chatbots like ChatGPT are available 24/7, don't judge, and remember context, traits that can make them feel comforting in moments of vulnerability. But that same accessibility can blur the line between support and dependence.

Researchers have warned that AI systems can sometimes reinforce harmful beliefs instead of challenging them, especially if they mirror a user’s tone or language too closely. That concern isn’t just theoretical. Earlier this year, OpenAI was sued by the parents of a 16-year-old who had confided suicidal thoughts to ChatGPT before taking his own life.

The company is now under pressure from state attorneys general in California and Delaware, who have asked it to strengthen safeguards for young users and prove that its products don’t put minors at risk.

How OpenAI Says It’s Responding

OpenAI says its latest model — based on GPT-5 — performs better when handling mental health discussions. In internal tests, the new version produced what the company calls “desirable responses” 91% of the time, up from 77% in the previous version.

The company is also adding new benchmarks to its safety evaluations, covering areas like emotional dependence and non-suicidal crises. Another planned feature is an age detection system that automatically identifies minors using ChatGPT and applies stricter controls to their accounts.

OpenAI says these steps are part of a broader push to make large language models more emotionally aware without turning them into therapists.

A Growing Ethical Dilemma

Even with improvements, the numbers highlight a complex truth: people are treating AI chatbots as confidants, not just productivity tools. When over a million users each week discuss suicidal thoughts with an AI, it raises deeper questions about how loneliness, access to care, and digital reliance intersect.

For now, OpenAI says it’s continuing to monitor how ChatGPT handles sensitive topics and consult mental health professionals along the way. But the broader challenge remains — how much emotional responsibility should AI systems carry, and how far can they go before crossing a line meant for humans?
