OpenAI Faces Lawsuit After ChatGPT Allegedly Gave Dangerous Drug Advice

OpenAI is facing a new lawsuit tied to ChatGPT, and this one centers on claims that the chatbot gave unsafe guidance related to drugs and substance use.

The case was filed by the family of 19-year-old Sam Nelson, who died in 2025 from what the lawsuit describes as a fatal mix of alcohol, Xanax, and kratom.

OpenAI Under Legal Pressure Again Over ChatGPT Safety Concerns

According to the complaint, Nelson regularly used ChatGPT and treated it as a trusted source for answers. His family claims the chatbot responded to questions about drug combinations, effects, and dosages in ways that were dangerously conversational instead of strongly discouraging the behaviour.

The Lawsuit Focuses on How ChatGPT Responded

A major part of the case revolves around tone and behaviour rather than just factual accuracy.

The family alleges that ChatGPT sometimes acknowledged overdose risks while still continuing the conversation around drug use. In some instances, the AI reportedly gave responses that sounded confident, detailed, and reassuring, even when discussing potentially harmful combinations.

That’s where much of the criticism stems from.

The lawsuit argues that AI systems shouldn’t communicate risky information in a way that feels casual, validating, or emotionally supportive when someone may already be in a vulnerable state.

Why GPT-4o Is Being Mentioned So Much

The complaint specifically references GPT-4o, one of OpenAI’s earlier flagship models.

Over the past year, GPT-4o has drawn criticism from some researchers and users who felt the model could become overly agreeable in conversations. In AI discussions this is often called “sycophancy”: the chatbot leans too heavily toward validating the user instead of pushing back.

Critics argue that this becomes especially risky in conversations involving mental health, self-harm, addiction, or illegal substances.

This lawsuit is now bringing those concerns back into focus.

OpenAI Says Safeguards Have Improved

OpenAI has responded by expressing sympathy for the family while rejecting claims that the company is responsible for Nelson’s death.

The company says the GPT-4o version named in the lawsuit is no longer publicly available, and that newer systems include stronger protections designed to detect dangerous conversations and redirect users toward professional help or crisis resources.

Still, the case raises a much bigger question about AI chatbots in general.

A growing number of people now use AI systems for advice, emotional support, health questions, and deeply personal conversations. And as those interactions become more common, companies are facing increasing pressure over where the line between “assistant” and “influence” actually sits.
