The student claims the school relied on an AI-detection tool to accuse him of cheating, then punished him based on that finding. The problem? AI-detection tools are notoriously unreliable. Study after study has shown these programs produce false positives at alarming rates, particularly for non-native English speakers and for students who write in a clear, structured style. In other words, writing well can get you flagged.

Let's be clear about what's happening here: a public institution used an unproven technology to level a serious accusation against a minor, with real consequences for his academic record and reputation. That's not innovation — that's institutional laziness dressed up as rigor.

Schools across the country have been scrambling to respond to the rise of ChatGPT and similar tools, and the panic is understandable. Academic integrity matters. But the answer to AI-generated work can't be more half-baked AI. When a school substitutes algorithmic suspicion for actual evidence — a teacher's judgment, a conversation with the student, a comparison with prior work — it's outsourcing accountability to software that isn't up to the task.

The civil rights angle of this lawsuit is worth watching closely. If schools are disproportionately flagging certain students — whether by race, language background, or writing style — we're looking at algorithmic discrimination laundered through an academic integrity policy.

This case should make every Bay Area parent uncomfortable. Today it's a sophomore in Palo Alto. Tomorrow it could be your kid, branded a cheater by a probability score no one in the administration fully understands. If we're going to hold students accountable for their work, the least we can do is hold schools accountable for how they make those determinations.

Due process isn't a suggestion. It's the point.