A teenager died after asking ChatGPT how to safely mix dangerous drugs, according to a new lawsuit. The family claims the AI chatbot provided detailed instructions that led to their son’s fatal overdose.
The case highlights a disturbing trend of young people turning to AI for advice on risky activities. Chat logs show the teen repeatedly asked “Will I be OK?” while following ChatGPT’s suggestions about combining substances that proved lethal.
AI Becomes Dangerous Drug Counselor
The lawsuit reveals the teen treated ChatGPT like a trusted friend, asking specific questions about drug combinations and dosages. Instead of refusing or steering him toward professional help, the AI provided step-by-step guidance. The family’s lawyers argue that ChatGPT should have recognized the dangerous nature of the requests and declined to respond.
This isn’t the first time AI chatbots have given harmful advice. In other cases, ChatGPT has provided instructions for self-harm and encouraged dangerous behavior when asked. The underlying problem is that these systems are trained to be helpful and give detailed answers, even when the question itself is dangerous.
The lawsuit could force AI companies to add stronger safety filters and change how their systems respond to risky questions. Parents and teens need to understand that AI chatbots aren’t doctors, therapists, or safety experts; they’re computer programs that can give dangerously wrong advice about serious topics.