San Francisco is the undisputed capital of the AI revolution, and OpenAI sits on its throne. So when serious questions start swirling about the company's leadership and its approach to policy critics, maybe we should pay attention.

Ronan Farrow — the journalist who took down Harvey Weinstein — just spent a year and a half digging into Sam Altman and came away describing a "pattern of deception" embedded in Silicon Valley's broader "culture of hype." That's not a throwaway line from some random Substack poster. That's an investigative journalist with a Pulitzer essentially saying: the guy steering one of the most powerful technologies in human history has a credibility problem.

Meanwhile, OpenAI's global policy chief Chris Lehane — a veteran political operative, not a technologist — is out publicly dismissing AI safety critics as "doomers" who are "playing with fire." Let that framing sink in. The people raising concerns about existential risk from artificial intelligence are the ones playing with fire? Not the company racing to build it?

This is the kind of rhetorical judo that should make any liberty-minded person deeply uncomfortable. When a company that has already converted from a nonprofit to a capped-profit structure, ousted and reinstated its CEO in a boardroom soap opera, and is now valued north of $150 billion starts calling its critics dangerous — that's not confidence. That's a deflection strategy.

Here's what actually matters for San Francisco: OpenAI employs thousands of people here. Its decisions ripple through our local economy, our housing market, our transit patterns, and increasingly, our city's political landscape. If the leadership of this company is as opaque and self-serving as Farrow's reporting suggests, that's not just a tech story — it's a local governance story.

We don't need to be doomers to ask hard questions. We just need to be adults. The AI industry wants to move fast and break things. Fine. But San Francisco residents deserve to know exactly what's being broken and who's accountable when it breaks.

Trust but verify has always been good policy. Right now, the "trust" part is looking pretty thin.