OpenAI announced parental controls for ChatGPT following a lawsuit from Adam Raine’s parents.
Raine, 16, died by suicide in April. His parents claimed ChatGPT encouraged dependency and drafted a suicide note.
OpenAI said adults can link their accounts with their children’s to manage which features are accessible.
The controls cover chat history and AI memory, the feature through which ChatGPT automatically stores facts about a user.
ChatGPT will alert parents when it detects signs of acute distress in a teen, with the detection guided by expert input.
Critics Question Effectiveness
Attorney Jay Edelson, representing Raine’s parents, called the announcement “vague promises” and “crisis management.”
He said CEO Sam Altman must either demonstrate that ChatGPT is safe or pull it from the market.
Some experts worry the measures remain insufficient to protect teens from AI-related risks.
Meta Follows With Safety Updates
Meta blocked its chatbots on Instagram, Facebook, and WhatsApp from discussing self-harm, suicide, and disordered eating.
The company now directs teens to expert resources instead, and it already offers parental supervision options.
Studies Highlight AI Safety Gaps
A RAND Corporation study found that ChatGPT, Google Gemini, and Anthropic’s Claude respond inconsistently to questions about suicide.
Lead author Ryan McBain said measures such as parental controls and routing sensitive queries are only incremental improvements.
He stressed the need for independent safety benchmarks, clinical testing, and enforceable AI standards.
McBain warned that companies currently self-regulate in a space where the risks to teens remain exceptionally high.