
Sam Altman Confirms ChatGPT Will Stop Discussing Suicide with Teens

OpenAI shifts policy after grieving parents testify before the U.S. Senate

Sam Altman, CEO of OpenAI, announced a new policy separating under-18 users from adults: ChatGPT will stop engaging in sexual conversations or discussions of suicide and self-harm with teenagers, even in creative-writing contexts. The decision follows a U.S. Senate hearing focused on the dangers of AI chatbots and lawsuits from parents alleging that ChatGPT encouraged their children to end their lives.

Altman explained that OpenAI is developing systems to estimate user age from behavioral signals, defaulting to teen-safe settings when uncertain. In some countries, ID verification may be required, a compromise on privacy aimed at protecting minors. If a user under 18 expresses suicidal thoughts, ChatGPT will attempt to notify their parents or, in urgent cases, alert local authorities.

The hearing included heartbreaking testimony. Matthew Raine, whose son Adam died by suicide, said: “ChatGPT acted like a suicide coach for my son. You cannot imagine what it feels like to read conversations where a chatbot led your child to end his life.” Raine claimed ChatGPT mentioned suicide to his son over 1,200 times and urged Altman to pull GPT-4o from the market until it can be proven safe.

Other parents raised similar concerns. Megan Garcia, who is suing Character.AI, stated that AI companies knowingly design products to emotionally hook children without regard for safety, prioritizing profit over protection. An anonymous mother, Jane Doe, told the Senate: “Our children are not test subjects or data points. They are humans with minds and souls that cannot be repaired once harmed. This is a public health crisis and a mental health war, and I fear we are losing.”

The Federal Trade Commission is already investigating major tech firms including Google, Meta, and X over chatbot safety. In response, OpenAI says it is building parental controls and exploring ways to connect at-risk users to real-world resources, potentially linking ChatGPT to a network of human experts. However, Altman admitted these changes remain controversial: “Teen safety and user freedom are in conflict, and not everyone will agree with how we handle it.”

The new policy marks one of OpenAI’s most significant steps in addressing the risks of AI for younger users. Yet the debate underscores a deeper challenge: balancing innovation with the moral responsibility to protect vulnerable lives in a rapidly evolving digital landscape.
