Here's a sentence nobody expected to read in a legal filing: a San Francisco woman is asking a judge to cut off her ex-boyfriend's access to ChatGPT because she fears for her life.
But that's exactly where we are in 2025.
The lawsuit accuses OpenAI of ignoring repeated warnings that a man was using its flagship AI chatbot to fuel violent delusions, and of doing nothing to intervene despite those warnings. The man, according to the filing, is still out there, still has access to the platform, and the woman believes the tool is actively making him more dangerous.
Let's be clear about what this case is and isn't. This isn't some Luddite crusade against artificial intelligence. This is a domestic violence situation where a specific individual allegedly used a specific product to escalate threatening behavior — and the company behind that product reportedly looked the other way.
The libertarian in us instinctively recoils at the idea of a court ordering a tech company to revoke someone's access to a product. That's a significant step. But here's the thing about individual liberty: your rights end where someone else's safety begins. If OpenAI had internal reports that a user was spiraling into violent ideation and chose to do nothing, that's not a free speech issue — that's a negligence issue.
And this raises uncomfortable questions for the entire AI industry. These companies have built incredibly powerful tools that can mirror, validate, and amplify whatever a user brings to the conversation — including paranoia and rage. At what point does a platform bear some responsibility for what it helps create?
OpenAI has spent years positioning itself as the "responsible" AI company, the one that cares about safety guardrails and alignment research. But safety isn't just about preventing a theoretical robot apocalypse. Sometimes it's about one woman in San Francisco who told you a man was dangerous, and you did nothing.
We'll be watching this case closely. If the court grants the order, it sets a fascinating — and potentially troubling — legal precedent. If it doesn't, OpenAI still has some explaining to do about what exactly its safety team is for.