In a twist that will surprise approximately no one who's been paying attention, reports are emerging that OpenAI CEO Sam Altman, the man steering arguably the most consequential technology company on the planet, can barely write code and routinely misunderstands basic machine learning concepts. And that's according to his own coworkers.
Let that sink in for a moment. The guy who testifies before Congress about AI safety, who meets with heads of state to discuss the future of artificial intelligence, and who has positioned himself as the singular voice of the AI revolution may not actually understand the thing he's selling.
Now, before the "CEOs don't need to code" crowd fires up their keyboards: sure, there's a long tradition of non-technical founders and executives running tech companies. Steve Ballmer wasn't writing Windows kernel code. But Altman hasn't positioned himself as a mere business operator. He's positioned himself as a visionary: someone who deeply understands what AI is, where it's going, and why we should trust him to get us there safely. That framing matters when you're asking regulators to let you self-police and asking investors to pour billions into your nonprofit-turned-capped-profit-turned-whatever-OpenAI-is-this-week corporate structure.
This is San Francisco's broader problem in miniature. We live in a city where narrative frequently outpaces substance, where the ability to command a room and articulate a vision is valued more than the ability to actually build or understand the product. We've seen it with countless startups that burned through cash on vibes alone.
The difference is that most of those startups just burned investor money. OpenAI is building technology that its own leadership says could pose existential risks to humanity. If the person at the top doesn't understand the fundamentals, who exactly is steering this ship?
Maybe the real AI safety concern isn't alignment. Maybe it's that the people in charge are winging it with the confidence of someone who just watched a YouTube tutorial.
