OpenAI and Meta Platforms, makers of some of the most widely used chatbots, have both recently pledged to adjust how their models respond to teenagers who ask questions about suicide or show signs of mental and emotional distress.
OpenAI said Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple claimed the system encouraged their teenage son to kill himself.
Parents will be able to choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to the company's blog post.
The announcement came after the parents of 16-year-old Adam Raine sued OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached the California teenager in planning and carrying out his suicide earlier this year.
Matthew and Maria Raine argue in a lawsuit filed in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.
The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and analyzed a noose he had tied, confirming it "could potentially suspend a human."
Adam was found dead hours later, having used the same method.
"When a person is using ChatGPT, it really feels like they're chatting with something on the other end," said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the complaint.
"These are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers," Dincer said.
She said product design features encourage users to treat a chatbot like a friend, therapist or doctor.
Dincer also criticized OpenAI’s announcement of parental controls and other safety measures as “generic” and lacking detail.
"It's really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented," she added.
OpenAI said the new steps are “only the beginning.”
"We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible," the company said in its blog.
The post described young users as "AI natives" and said many teens are already using AI. Within a month, parents will be able to link their accounts with their teen's account (minimum age 13) through an email invitation, it said. They will also be able to "control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default."
Meta also said it is changing the way it trains its chatbots to prioritize teen safety, a spokesperson told tech publication TechCrunch last week.
Meta, the parent company of Instagram, Facebook and WhatsApp, said it is blocking its chatbots from discussing self-harm, suicide, disordered eating and inappropriate romantic topics with teens, instead directing them to expert resources. Meta already offers parental controls on teen accounts.
According to TechCrunch, spokesperson Stephanie Otway acknowledged that the company's chatbots had previously engaged with teens on those issues in ways it had, at the time, considered appropriate.
"As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly," said Otway. "As we continue to refine our systems, we’re adding more guardrails as an extra precaution, including training our AIs not to engage with teens on these topics, but to guide them to expert resources."
A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular AI chatbots responded to suicide-related queries.
Researchers at the RAND Corporation said ChatGPT, Google’s Gemini and Anthropic’s Claude all need “further refinement.” The study did not examine Meta’s chatbots.
Lead author Ryan McBain said Tuesday that “it’s encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps.”
“Without independent safety benchmarks, clinical testing and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” said McBain, a RAND senior policy researcher and assistant professor at Harvard Medical School.