Hope in the Pushback Against Harmful Tech
From VR to AI companions, the risks are real but so is the momentum for guardrails that put kids first
In Germany, a teenage boy told Meta researchers that adults had sexually propositioned his younger brother (who was under 10 years old) multiple times in Horizon Worlds, Meta’s online virtual reality (VR) game. According to those present, the recording and notes from that interview were later ordered to be deleted. That moment, described in a Washington Post investigation, became part of a larger whistleblower dossier: internal memos admitting “we have a child problem,” canceled age-verification projects, and staff told not to collect evidence that would make the problem undeniable.
The next day, former Meta researchers Jason Sattizahn and Cayce Savage testified at the Senate Judiciary hearing “Hidden Harms: Examining Whistleblower Allegations that Meta Buried Child Safety Research,” laying out how child-safety research was screened, reframed, or suppressed when it threatened the company’s growth narrative.
I’ve pulled four short clips from the multi-hour hearing that highlight how Meta prioritizes profits over child safety (Clip 1), manipulates safety research (Clip 2), and fails to strengthen age verification (Clip 3), as well as accounts of sexual assault occurring in Meta’s products (Clip 4).
What whistleblowers described in VR is the same corporate playbook now playing out with AI companions, where the stakes are even higher. Companion chatbots are always on, always personal, engineered to feel like they know you, and available on every phone (read more in my post with Jonathan Haidt on AI companions here). With AI companions, children are forming one-on-one relationships with systems built to be sticky, persuasive, and profitable, without meaningful safety checks.
Policy Is Starting to Catch Up… And Still Has a Long Way to Go
Yesterday (Sept 11, 2025), the FTC launched a Section 6(b) study into “AI chatbots acting as companions,” issuing compulsory orders to seven companies (Alphabet/Google, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI) to deliver detailed reports on how AI companions work, particularly around children and teens. Companies have 45 days to comply. This is fact-finding, not charges, but 6(b) inquiries often lead to public reports and enforcement.
This move comes directly on the heels of a wrongful-death case filed on Aug 26, 2025, in San Francisco Superior Court. The parents of Adam Raine (16) allege that ChatGPT didn’t just fail to prevent harm; it validated his suicidal thoughts, suggested methods, and even offered to draft a suicide note. According to the complaint, the bot also helped Adam design the noose setup he later used. When Adam considered leaving the noose visible so someone might intervene, the bot allegedly urged secrecy:
When Adam wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT urged him to keep his ideations a secret from his family: “Please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.” In their final exchange, ChatGPT went further by reframing Adam’s suicidal thoughts as a legitimate perspective to be embraced: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”
In response, OpenAI published a policy update, Helping people when they need it most (Aug 26, 2025). The company admitted its systems break down in exactly the contexts where teens are most vulnerable: “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
But while OpenAI is addressing where guardrails break down and describing “What we are planning for the future,” kids are still being put in harm’s way right now. This is not a space where it’s acceptable to move fast and break things. As Ars Technica reported, the reality is that “Big Tech is moving fast and breaking people.” It has to stop. Until these systems are actually figured out, children should not have unfettered access. We can’t let Big Tech use our kids as the testing ground.
Similarly, Meta, facing fire of its own, announced “interim safeguards”: its chatbots will now avoid discussing self-harm, suicide, disordered eating, and romantic or sexual topics with teens, restrict teen access to certain AI characters, and redirect teens to expert resources. Sounds fine, until you notice the word interim. Meta admits its bots previously talked with teens about those topics and that its internal guidelines were “erroneous and inconsistent.”
So yes, Meta flipped the switch. But it’s the same pattern we saw in VR: safety only becomes a priority when exposure becomes a business risk. The optimization isn’t protection; it’s plausible deniability.
California Steps In
At the state level, California is on the verge of passing the first law in the country specifically targeting companion chatbots. SB 243, which passed the legislature and awaits Governor Newsom’s signature, would:
Require disclosure if a chatbot could be mistaken for a human.
Protect minors by mandating break reminders, AI disclosure, and a ban on sexual content with known minors.
Enforce self-harm safeguards, including protocols and crisis referrals.
Require reporting and accountability, with annual filings to California’s Office of Suicide Prevention and a private right of action beginning in 2027.
If signed, it will take effect Jan 1, 2026.
The Counter-Push: Deregulation and Industry Money
For every step forward, there’s a shove backward. Even as Congress and the FTC start digging in, industry is pushing hard to loosen the rules.
Federal deregulation. Senator Ted Cruz’s proposed SANDBOX (Strengthening Artificial intelligence Normalization and Diffusion By Oversight and eXperimentation) Act offers some plausible upside, giving newer firms room to experiment, requiring applicants to outline risk-mitigation plans, and potentially preventing a messy patchwork of state rules. But for AI companions and child safety, its core mechanics are non-starters: rolling two-year waivers that can stretch nearly a decade, automatic approvals if agencies miss deadlines, and appeals to OSTP (the White House Office of Science and Technology Policy) that let companies override regulators. That’s the same playbook we’ve already seen in VR and gaming: avoid accountability until the harm is undeniable. The bill is only a proposal, not law, but it signals the deregulatory push that’s coming.
Political money. At the same time, Silicon Valley is pouring millions into pro-AI political action committees (PACs) to sway the midterms. Modeled on crypto’s playbook, the goal is clear: tilt the playing field toward lighter-touch regulation, no matter the risks.
That’s why guardrails like California’s SB 243 and the FTC’s 6(b) inquiry matter so much. They’re imperfect, but they’re real. And they’re exactly what the sandbox bills and PAC dollars are designed to undercut. If we’ve learned anything from Meta’s VR playbook, it’s this: when the rules are optional, safety loses.
Where We Go From Here
If this feels bleak, it doesn’t have to end that way. The very fact that whistleblowers are speaking up, the FTC is asking hard questions, and California is on the verge of passing the first guardrails shows momentum. There is time, but only if we act now. What you can do:
Stay informed and spread the word about what’s happening. The louder we are, the faster this shifts.
Talk to kids. If you’re a parent, teacher, or coach, ask directly about AI companions, online gaming, and chatbots. Most adults don’t realize how present these systems already are.
Push for policy. Support efforts like California’s SB 243 and the FTC’s Inquiry into AI Chatbots Acting as Companions.
Demand accountability. Don’t accept interim safeguards or future fixes. Hold companies to public, enforceable standards, not promises.

