Sam Altman Has No Idea What He Is Doing
How Sam Altman rode Ilya Sutskever’s transformer bet, alienated his allies, and built a trust crisis that now threatens his only real skill: raising capital and collecting powerful friends.
Introduction
Sam Altman is often introduced as the face of the AI boom, the architect of ChatGPT, the person steering humanity toward artificial general intelligence. The myth is tidy: a visionary founder who saw the future earlier than anyone else, backed it with conviction, and now presides over the most important technology shift since the internet.
The problem with that story is that when you trace who actually made the crucial scientific and engineering breakthroughs, the myth starts to look less like history and more like marketing. OpenAI’s most important technical leaps came from people whose names most users never see on splashy keynotes. Altman has been central to fundraising and to turning all of that into a gigantic company. But if you separate his own contributions from the work of others, you get a very different picture of what kind of leader he actually is, and why his reputation as a liar and deceiver is so dangerous to his one real skill: raising money and collecting powerful friends.
The origin story that got rewritten
OpenAI did not spring fully formed out of Sam Altman’s head. The lab was founded as a nonprofit by a group that included Elon Musk, Ilya Sutskever, Greg Brockman, Altman and several other researchers and engineers. The goal was explicitly to build AI for the benefit of humanity and avoid a world where one or two corporations owned the most powerful systems. Musk has repeatedly claimed in public that he not only co‑founded the lab but came up with the name and provided key early funding. Whatever you think of Musk, that record makes it hard to treat OpenAI as a solo Altman brainchild.
OpenAI’s own early narrative backs that up. The founding vision and funding strategy emerged from a small cluster of people who worried about big tech capture and wanted a different kind of lab. Altman was an important node in that network, especially around money and Silicon Valley relationships, but he was not the lone inventor or sole architect. His background fits that role: a startup founder turned Y Combinator president, an investor and network builder rather than a technical prodigy.
The twist is that this early alliance has now blown up in his face. Musk has gone from patron to sworn enemy, suing OpenAI and denouncing Altman as someone who deceived him about the nonprofit, “open” mission. You do not need to take Musk at his word to see the pattern. One of Altman’s most powerful allies, the kind of billionaire backing he relies on, now presents himself as a victim of Altman’s spin. That is exactly the sort of backfire that happens when your main asset is trust and people decide you abused it.
The transformer revolution Altman did not invent
The core technology that made GPT‑style systems possible is the transformer architecture. It was introduced in 2017 by a Google Brain team in the paper “Attention Is All You Need”, which showed that an attention‑only model could beat the previous state‑of‑the‑art systems on machine translation while being easier to train at scale. OpenAI did not invent transformers, and Sam Altman certainly did not. The people who did are the authors of that paper, not the CEO of OpenAI.
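For readers who have never seen the mechanism behind the buzzword, the paper’s central operation is scaled dot‑product attention. A minimal NumPy sketch of that one formula follows; the variable names and toy dimensions are mine, not the paper’s:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    the core equation from "Attention Is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

# toy example: 4 tokens, 8-dimensional representations
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The point of the sketch is how little machinery is involved: the “breakthrough” is a simple, massively parallelizable operation, which is exactly why it scaled.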
Inside OpenAI, the person who really grasped how transformative transformers could be was Ilya Sutskever. By all accounts, he is the one who saw that this architecture could be pushed far beyond translation, into a general pattern for building large foundation models. It was Sutskever and his research group who pushed the organization to go all in on transformers, not Altman having some lone insight from the executive suite.
OpenAI’s major contribution was to take that external architecture and, under Sutskever’s technical leadership, push it much further. GPT‑2 and GPT‑3 are essentially very large transformer models trained on huge text corpora. The GPT‑3 paper, “Language Models are Few‑Shot Learners”, is explicit about what the team did: they scaled a transformer to 175 billion parameters and watched performance keep improving. The authors are researchers like Tom Brown, Jared Kaplan, Sam McCandlish and Dario Amodei, plus a long list of engineers. Altman’s name is nowhere on that work because he did not do it.
The conceptual glue behind the “bigger is better” story came from scaling laws research by Kaplan, McCandlish and others, who quantified how loss falls in a smooth way as you increase model size, data and compute. That work is what made it rational to spend sums usually associated with national infrastructure on training runs and still expect returns. Sutskever and the research leadership were the ones who put these pieces together and convinced the organization that transformer plus scale was the right path.
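The scaling‑laws result is, at heart, a power law: loss falls smoothly as a power of compute (and analogously of parameters and data). The sketch below illustrates only the shape of that claim; the constant and exponent are invented for the example, not taken from Kaplan et al.:

```python
# Illustrative only: c and alpha below are made-up constants chosen for
# shape, not the fitted values from "Scaling Laws for Neural Language Models".
def predicted_loss(compute, c=100.0, alpha=0.05):
    """Power-law form L(C) ~ (c / C)^alpha used in scaling-laws work."""
    return (c / compute) ** alpha

# Loss keeps declining smoothly and predictably as compute grows,
# which is what made ever-larger training runs look like rational bets.
for compute in [1e3, 1e6, 1e9]:
    print(f"compute {compute:.0e} -> predicted loss {predicted_loss(compute):.3f}")
```

The smoothness is the whole argument: if each order of magnitude of compute reliably buys a predictable drop in loss, then spending infrastructure‑scale money stops looking like a gamble and starts looking like arithmetic.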
Altman only really enters the transformer story after the breakthroughs. In interviews and internal lore, he recasts the discovery of scaling laws as a kind of founding epiphany and casts himself as someone whose career theme is “scaling” things that work. An engineer who worked with him has described a conversation where Altman gave him a little career sermon about this, saying that scaling is the defining idea of his life. That framing is revealing. Scaling is not a worldview or a scientific insight. It is the phase that begins after someone else has done the conceptual heavy lifting.
That is why even saying Altman “picked up” the transformer idea is too generous. He did not pick up the architecture from the literature and champion it internally. Sutskever and his team did that. Altman arrived once the transformer strategy and scaling laws were already in motion, then surfed the curve the researchers had drawn and retrofitted that curve into a personal philosophy.
When the builders leave and rivals catch up
If Altman’s genius were really about setting the right technical and safety direction, you might expect the people who lived through the transformer plus scale breakthrough at OpenAI to stick around. In reality, many of them left. Anthropic, now one of OpenAI’s fiercest competitors, was founded by Dario and Daniela Amodei along with several former OpenAI staff. The founding team includes authors or key contributors to GPT‑2, GPT‑3 and the scaling laws work.
Over time, more of the early brain trust walked. John Schulman, a co‑founder and major reinforcement learning architect, left for Anthropic. Ilya Sutskever, the same chief scientist who had pushed the lab to go all in on transformers, eventually resigned. Safety leaders like Jan Leike, along with other alignment researchers, quit and publicly complained that the company was prioritizing rapid product launches over the mission of building safe AI. When the people who understand your technology most deeply keep deciding they will have more freedom and integrity elsewhere, that is not a ringing endorsement of the CEO’s direction.
At the same time, the competitive field has shifted. Anthropic’s Claude models have gone from upstart curiosities to genuine frontier contenders, often matching or beating OpenAI’s best on reasoning and coding tasks. Google, slow out of the gate, now trades blows with OpenAI on major benchmarks through its Gemini models. Elon Musk’s xAI has gone from nothing to shipping Grok, a frontier‑class model integrated into his platform ecosystem, in roughly a year. The magic is not that OpenAI holds some secret art that only Altman understands. It is that once the recipe of transformers plus lots of compute is known, multiple well funded labs can run it.
Altman’s real differentiator was never technical insight. It was his ability to turn OpenAI’s head start into a towering pile of capital and partnerships. The whole “raise unbelievable amounts of money and scale” play only made sense if OpenAI could stay the uncontested number one and if investors believed the man at the top was a trustworthy steward. As rivals catch up and insiders leave, both of those assumptions start to wobble.
Governance, spin and the trust problem
That would already be a fragile position. Then the governance crisis hit. In late 2023, OpenAI’s board abruptly fired Altman as CEO, saying he had not been “consistently candid in his communications”. That is a lawyered phrase for a very simple accusation: the board believed he was not telling them the whole truth. After an astonishing internal revolt and pressure from Microsoft, Altman returned within days, and a later review let him reclaim influence. On paper, he won. In reputational terms, the sentence about his candor will follow him for the rest of his career.
Former board member Helen Toner later went public with her side of the story, saying Altman had misled the board about key matters and tried to push her out for writing a policy paper he did not like. Regardless of which side you believe, the picture that emerges is of a CEO who treats governance like another narrative to manage rather than a hard constraint. For an ordinary startup, that might just be more Silicon Valley drama. For a lab that asks the world to trust it with frontier AI, it is something more serious.
The NDA and non‑disparagement mess drove the point home. Reporting revealed that OpenAI used exit agreements that threatened to claw back vested equity if former employees criticized the company. Only after public backlash and scrutiny from journalists and lawmakers did the company move to unwind that language. Altman called the terms embarrassing and said they had not been enforced, but the pattern looked familiar: push aggressive, one sided structures in private, then smooth the story in public once you are caught.
Add to this the open letters from current and former staff warning that internal incentives do not match the company’s public rhetoric on catastrophic risk, and you start to see why critics now describe Altman as fundamentally slippery. In their telling, he is not just a hard charging founder. He is someone whose relationship to the truth is highly negotiable, who will say what each audience needs to hear and clean up the contradictions later.
Even if this verdict feels harsh, the perception itself has consequences. Raising tens or hundreds of billions of dollars is a trust business. So is persuading sovereign leaders, regulators and corporate partners to align their futures with yours. Once you are widely seen as deceptive, the very thing you are good at begins to disintegrate. Deals come with more conditions, more suspicion, or get routed to rivals who look dull but honest. Former allies, Musk most dramatically, recast themselves as people you misled. The mystique flips from asset to liability.
A non-technical CEO in a technical race
Strip away the myth and you are left with a more ordinary picture. Altman is exceptionally good at three things that matter a great deal in the current AI economy: cultivating powerful connections, raising staggering amounts of capital, and telling a story about the future that makes those two activities feel like a moral mission. His tenure at Y Combinator and his ability to pull Microsoft, Nvidia and others into OpenAI’s orbit show that skill set very clearly.
What there is far less evidence for is the idea that Altman has a uniquely deep or coherent view of the technical frontier itself. The transformer architecture came from Google researchers. The empirical scaling laws came from OpenAI scientists like Kaplan and McCandlish. The GPT‑3 breakthrough was executed by a large research team. Ilya Sutskever is the one who saw how transformative transformers could be and pushed the lab to commit to them. Many of those people now work somewhere else. Altman has openly said that he offloads the deep technical decisions to Greg Brockman and the research leads because that is not his strength.
In that sense, he is almost the opposite of CEOs like Dario Amodei at Anthropic or Demis Hassabis at DeepMind. Both are practicing researchers with long records of technical contribution. Hassabis helped drive systems like AlphaGo and AlphaFold and has been treated as a prodigy since long before Google bought DeepMind. Dario Amodei led core work on GPT‑style models and scaling behavior and now sets Anthropic’s technical direction from the top seat. They are technical geniuses in the straightforward sense: they can walk into a research review, interrogate the math and the experiments themselves, and personally reshape the agenda.
When the person in the top seat has that level of mastery, delegating is a choice, not a necessity. In a field that is still scientifically unstable and ethically fraught, having a storyteller CEO who cannot independently evaluate the technical frontier is a very different proposition from having a builder or scientist in that role. Anthropic and DeepMind can plausibly claim that their chief executives are also among their chief scientists. OpenAI cannot honestly say that about Altman. Perhaps it is no coincidence, then, that these labs are now pulling ahead of OpenAI despite starting with less funding, or far later, or, in Anthropic’s case, both.
So when I say that “Sam Altman has no idea what he is doing”, I do not mean he is stupid or completely ignorant about AI. I mean that his self image as a grand strategist of the AI future collapses under scrutiny. What he mostly does is attach himself to the work of highly capable researchers, bless their discoveries as part of a personal doctrine of “scaling”, and then sell that doctrine to investors and politicians as if it were a philosophy rather than a reaction.
For a while, that act worked. OpenAI really did feel far ahead, and his mystique stayed intact. But as the founders and scientists who built the transformer plus scale regime decamp to rivals, as Anthropic, Google and xAI show they can match or surpass OpenAI, and as board members, employees and former allies openly frame him as deceptive, the gap between narrative and reality has become impossible to ignore. In that gap sits the real question in the title. Not whether Sam Altman is talented, but whether he ever had a deep, grounded idea of what he was doing in the domain that actually matters, and whether his growing reputation for lying and spin is now destroying the only thing he truly brought to the table.