Here's a fun exercise in corporate trust-building: secretly bankroll a coalition of kids' advocacy groups, don't tell them you're the one pulling the strings, and then act surprised when people feel used.
That's essentially what OpenAI just did. The AI giant quietly helped build what was presented as a grassroots "parents and kids" AI safety coalition — except the advocacy groups involved had no idea OpenAI was behind it. When members found out the truth, some quit. One described the experience as giving them "a very grimy feeling." Hard to argue with that.
Let's be clear about what this is: astroturfing. It's the practice of manufacturing the appearance of organic, grassroots support for something that is, in reality, a corporate lobbying operation. And it's particularly gross when it involves children's safety.
OpenAI is facing enormous regulatory pressure around how its products affect kids. That's a legitimate concern — AI tools are powerful, increasingly accessible to minors, and we're still in the early innings of understanding the consequences. The responsible move would be to engage transparently with advocacy groups, fund independent research, and submit to genuine oversight.
Instead, OpenAI chose the Silicon Valley special: control the narrative by secretly building the table, then pretending you're just another guest sitting at it.
This matters beyond the ick factor. When real child safety advocates can't trust coalition partners because any one of them might be a corporate front, it poisons the well for actual collaboration. It makes the whole ecosystem of advocacy less effective — which, ironically, makes kids less safe.
San Francisco is home to OpenAI's headquarters, and we're used to tech companies telling us they're making the world better while quietly rigging the game. But weaponizing children's welfare as a PR shield? That's a new low, even by SF tech standards.
If OpenAI actually cares about child safety, it should fund independent groups with no strings attached, disclose everything, and get out of the shadow-puppet business. Transparency isn't just good ethics — it's the only thing that builds the trust AI companies so desperately need right now.