What Is the New Kids Safety Law?

The Parents & Kids Safe AI Act (often called the Kids Safety Law) was introduced in early 2026. It aims to protect children from harmful AI interactions, drawing lessons from the social‑media crises of the 2010s and 2020s.

The bill was drafted by Common Sense Media and backed by a coalition of tech firms, including OpenAI. It combines two earlier ballot initiatives into one stronger proposal.

Why OpenAI Is Supporting the Bill

  • 🚸 Prevent mental‑health harm: OpenAI cites data showing that nearly 75% of teens have tried AI companion chatbots, and that some have reported suicidal thoughts after coming to rely on them.
  • 🔒 Close gaps left by social media: Social platforms let companies test features on kids without consent, leading to addiction and privacy breaches. OpenAI wants AI to learn from those mistakes.
  • 🛡️ Show leadership: By endorsing the law, OpenAI positions itself as a responsible AI provider, building trust with parents and regulators.

Key Requirements of the Act

Each requirement, and what it means for AI companies:

  • Age‑assurance technology: systems must verify that users are 18 or older before enabling full‑feature chatbots.
  • Ban on child‑targeted ads: no advertising to users under 18, and no sale of their data without parental consent.
  • Emotional‑manipulation safeguards: AI cannot create romantic or overly friendly personas that could cause dependence.
  • Self‑harm and explicit‑content filters: automatic detection and blocking of suicide‑risk prompts, sexual content, and hate speech for minors.
  • Parental‑control dashboard: parents receive alerts, can set usage limits, and see a clear “you are talking to a bot” reminder.
  • Independent safety audits: third‑party reviews each year, with findings reported to the California Attorney General.
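To make the gating logic concrete, here is a minimal sketch of how a platform might combine the first, fourth, and fifth requirements: an age‑assurance check, topic filtering for minors, and an alert fed to a parent dashboard. All names here (`User`, `moderate_for_minor`, the topic labels) are illustrative assumptions, not part of the act or any real API.

```python
# Hypothetical sketch only: age-gated moderation for minors, with a
# parent-dashboard alert and the always-on "you are talking to a bot" reminder.
from dataclasses import dataclass, field

# Topic labels a real system would get from a classifier; names are illustrative.
BLOCKED_TOPICS_FOR_MINORS = {"self_harm", "sexual_content", "hate_speech"}

@dataclass
class User:
    user_id: str
    verified_adult: bool                      # outcome of an age-assurance check
    parental_alerts: list = field(default_factory=list)

def moderate_for_minor(user: User, message_topics: set) -> tuple:
    """Return (allowed, notice). Verified adults pass through unfiltered;
    minors get topic blocking plus the bot-disclosure reminder."""
    if user.verified_adult:
        return True, ""
    flagged = message_topics & BLOCKED_TOPICS_FOR_MINORS
    if flagged:
        # Record the blocked topics so the parental-control dashboard can alert.
        user.parental_alerts.append(sorted(flagged))
        return False, "Blocked for safety. Reminder: you are talking to a bot."
    return True, "Reminder: you are talking to a bot."
```

The key design point the act implies is that filtering is conditional on age assurance: the same message passes for a verified adult but is blocked (and surfaced to parents) for a minor.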

How Families Can Use These New Rules Today

Even before the law is fully enforced, many AI platforms are adding the required features. Here’s a quick guide for parents:

1️⃣ Check the app’s age‑verification screen.
2️⃣ Turn on the built‑in parental‑control toggle.
3️⃣ Look for a clear “AI chatbot” label – it should remind kids they are not talking to a human.
4️⃣ Review the activity log regularly (most apps now show a daily summary).
5️⃣ Report any concerning content through the app’s help center.

What Experts Say About the Law’s Impact

"The Kids Safety Law is the strongest child‑protection measure for AI in the U.S. It directly addresses the emotional‑dependency issue that social media ignored for years," says James P. Steyer, CEO of Common Sense Media.

Legal analysts note that bipartisan support in the Senate Judiciary Committee (a full vote is expected in May 2026) makes the bill likely to become law within the next year.

Potential Challenges and Criticisms

  • ⚖️ Enforcement across states: Some states already have their own AI‑child‑safety rules, which could create a patchwork of regulations.
  • 💰 Cost of audits: Smaller AI startups may struggle with the expense of annual independent reviews.
  • 🔧 Technical limits: Detecting subtle emotional manipulation is still an open research problem.

What’s Next for OpenAI?

OpenAI has pledged to:

  1. Integrate age‑verification APIs into ChatGPT, Claude‑style bots, and new multimodal models.
  2. Release an open‑source child‑safety toolkit for developers building third‑party apps.
  3. Partner with schools to provide educational resources on safe AI use.

These steps aim to set a new industry baseline, hoping other companies will follow.

Bottom Line

The Parents & Kids Safe AI Act marks a turning point: AI must now follow strict rules that social media ignored for a decade. OpenAI’s public support signals a commitment to protect children, and families can already start using new parental‑control features to keep kids safe.

Stay informed, enable the safety tools, and watch for the law’s rollout later this year.