Let's be honest: that's an ice-cold thing to say about a dead kid. The delivery is callous, and if you're running for governor, maybe don't talk about a teenager's suicide like you're dismissing a fender bender. Read the room.

But strip away the tone and there's actually a harder conversation buried underneath — one that almost nobody wants to have.

The instinct to blame AI chatbots for teen suicide is politically convenient. It gives parents, lawmakers, and pundits a clean villain. But rates of mental health crises among teenagers were skyrocketing long before ChatGPT existed. The CDC reported alarming increases in youth depression and suicidal ideation years before anyone had heard of large language models. Blaming a chatbot lets us skip past the systemic failures — overwhelmed school counselors, a mental health system with months-long wait times, and a culture that still treats therapy like a luxury.

Does that mean AI companies bear zero responsibility? Of course not. If a product is being used by minors in crisis and the company has no meaningful safeguards, that's a real problem worth addressing. OpenAI should be building better guardrails, full stop.

But legislating away AI chatbots won't fix the fact that California's youth mental health infrastructure is woefully underfunded and bureaucratically broken. It won't address the loneliness epidemic or the fact that many teens turn to AI because there's no human alternative available to them.

Bianco said the quiet part out loud — and said it badly. But the uncomfortable truth is that focusing exclusively on the tool ignores the gaping holes in the safety net that existed long before ChatGPT entered the chat.

The real scandal isn't that a chatbot talked to a struggling teenager. It's that, apparently, nobody else was.