The Fever Dream
On the afternoon of Friday, November 17, 2023, somewhere between the last session of the Asia-Pacific Economic Cooperation summit and a dinner he would never attend, Sam Altman received a video call from Ilya Sutskever. Sutskever — the brooding, Russian-born chief scientist who had left Google Brain to co-found OpenAI eight years earlier, a man whose quiet intensity colleagues sometimes described as priestly — told Altman he was fired. Minutes later, the board posted a statement to OpenAI's website: Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." No further explanation was offered. No details emerged. The most powerful person in the most consequential technology sector on earth had, just hours earlier, been telling hundreds of world leaders that AI could save humanity if pursued responsibly. Now he was locked out of the building.
What followed was not a corporate succession but something closer to a controlled detonation in reverse. Within hours, OpenAI president Greg Brockman quit in solidarity. By Sunday, the board had appointed a third CEO in three days — Emmett Shear, the former head of Twitch, a man who had never worked at OpenAI — while Microsoft CEO
Satya Nadella announced he'd hired Altman to lead a new advanced AI research division. By Monday, nearly all of OpenAI's roughly 800 employees had signed a letter demanding the board resign and Altman return, threatening to follow him to Microsoft if the board refused. By Tuesday night, it was over. Altman was back. Most of the directors who had voted to fire him were themselves removed. The whole thing — four days, maybe five depending on how you count — had the structure of a coup, a counter-coup, and a restoration, compressed into a single long weekend.
"That 48 hours was like a full range of human emotion," Altman later said. "It was like impressive in the breadth."
This was the moment the world discovered something peculiar about Sam Altman: the man building what he himself calls "the most impactful technology in human history" is not, primarily, a technologist. He is something rarer and harder to classify — a dealmaker of almost gravitational force, a network organism, a person who can lose his job on a Friday afternoon and have 95% of his company threatening to quit in solidarity by Monday morning. The firing was supposed to be a referendum on Altman's character. It became, instead, a demonstration of his power.
By the Numbers
The OpenAI Empire
$300B: OpenAI valuation (2025), highest for a private tech company
800M+: Weekly active ChatGPT users
$40B: Funding round led by SoftBank (2025)
$500B: Stargate data center investment commitment
~770: Employees who signed the letter demanding Altman's return (Nov 2023)
4 days: Duration of Altman's ouster from OpenAI
$6.5B: Acquisition of [Jony Ive](/people/jony-ive)'s IO (2025)
The [Competition](/mental-models/competition) That Never Ends
To understand how a college dropout from St. Louis became, by forty, the individual most likely to determine whether artificial intelligence enriches or immiserates the human species, you have to understand the family atmosphere that produced him. Not as background color. As operating system.
Samuel Harris Altman was born on April 22, 1985, the eldest of four children in a Jewish family in the Hillcrest neighborhood of St. Louis, where gracious tree-lined streets coast down toward Forest Park. His mother, Connie Gibstine, was a dermatologist who also earned a law degree — reportedly just to keep company with her husband, Jerry Altman, a community activist turned real estate broker whom she'd persuaded to go to law school. The household was, by every available account, a crucible of relentless competition — in games, puzzles, sports, school, life. Jack Altman, Sam's younger brother, once boiled down his sibling's essential disposition: "I have to win, and I'm in charge of everything." Their mother, overwhelmed by the competitive jockeying for her affection, finally had T-shirts printed that read "Mom's Favorite" — one for each child.
The tech-prodigy milestones that journalists trot out are by now formulaic: fixed the family VCR at three, learned to code at eight and could disassemble an Apple Macintosh at the same age. But the more revealing detail is less technical: Connie Gibstine reportedly told people that by the time Sam was about ten, she would have been comfortable dropping him off, alone, in New York City. This is not a story about precociousness. It is a story about a child whose self-possession was so complete, so unsettlingly adult, that his own mother recognized she was raising someone who would, in some fundamental sense, never need her permission for anything.
Altman came out as gay in high school — not quietly, but by giving a speech to his classmates after some students objected to a National Coming Out Day speaker. In St. Louis. In the early 2000s. The bravery of this, for a midwestern teenager, is easy to flatten into a biographical bullet point. It shouldn't be. It was the first public act of a pattern that would define his career: a willingness to declare his position before the audience is ready to hear it, and to treat the discomfort that follows as a cost of doing business.
The Education of a Dropout
He enrolled at Stanford in 2003 to study computer science, and what he found there was not the academy but its antithesis — the startup ecosystem that was, by the mid-2000s, becoming Silicon Valley's true university. Within two years he had dropped out to work full-time on Loopt, a location-based social networking app he'd co-founded with his then-boyfriend, Nick Sivo. The premise was simple enough for the era: broadcast your GPS location to friends. The timing was either visionary or premature, depending on how generous you feel.
Loopt was accepted into the very first cohort of Y Combinator, the startup accelerator founded by
Paul Graham, alongside what would become Reddit. During those few months in Cambridge, Massachusetts, Altman worked with such singular intensity that he contracted scurvy — the disease of sailors and prisoners, caused by vitamin C deficiency, a condition so archaic it reads as literary invention. It was not. The future CEO of the most valuable private technology company in history got a nineteenth-century nutritional disease because he forgot to eat fruit.
Paul Graham — the essayist, programmer, and venture capitalist who had built Y Combinator into a Silicon Valley pulpit — noticed Altman immediately. Graham, who turned cerebral enthusiasm into a kind of institutional weapon, believed the startup founder was the closest thing the modern economy produced to an artist. He was not given to easy praise. But he later wrote of Altman: "Within about three minutes of meeting him, I remember thinking, 'Ah, so this is what Bill Gates must have been like when he was 19.'" In a fundraising post, Graham offered a more vivid assessment: "You could parachute him into an island full of cannibals and come back in five years and he'd be the king."
Loopt, for all of Graham's enthusiasm about its founder, never became a breakout success. The service eventually landed on every major U.S. carrier — Sprint, Verizon, AT&T, T-Mobile — and had some real traction, but it couldn't solve the cold-start problem that plagues all social networks: the service is only useful if your friends are already on it. In 2012, Altman sold Loopt to Green Dot Corporation for roughly $43 million. He walked away with approximately $5 million and was, by his own later admission, "pretty unhappy."
"Failure always sucks," he told Vox, "but failure when you're trying to prove something really, really sucks."
The sale was an ending but also a liberation. With $5 million and
Peter Thiel's backing, Altman launched Hydrazine Capital, a small venture fund. More importantly, he now had the freedom to think at a different scale — not about what to build next, but about the systems that produce builders.
The Startup Whisperer
In 2014, Paul Graham, then approaching fifty, chose Altman — twenty-eight years old, with exactly one company to his name and that company a qualified failure — to succeed him as head of Y Combinator. The selection stunned some in Silicon Valley, where credentials are currency and Altman's resume was thin. But Graham saw something the resume couldn't capture. "It's remarkable when somebody is both extroverted and smart," he said. "Picture a smart person. You don't imagine somebody who is really good at talking to people. You picture someone really awkward."
Peter Thiel, the contrarian billionaire whose investments ranged from Facebook to seasteading, offered his own endorsement: "Silicon Valley is full of smart people, but Sam is in a league of his own. When he speaks I pay close attention, because his insights are usually spot on."
What Altman brought to Y Combinator was not operational brilliance or technical depth but something more atmospheric: he was a network organism, a person who maintained close daily contact with, by his own estimate, "low hundreds" of people. His phone bills ran to 6,000 talking minutes a month. He texted, emailed, called, instant-messaged — a perpetual availability that former employees, investors, mentors, and mentees all remarked upon. If Y Combinator was a machine for converting ambition into companies, Altman was the machine's social circuitry, the human switchboard through which information, introductions, and confidence flowed.
He expanded Y Combinator's remit beyond pure software into what he called "hard tech" — nuclear energy, biotech, the kinds of world-transforming ventures that might not produce returns for decades. He wrote a personal check for $9.5 million to Helion Energy, a nuclear fusion startup. "That's the responsibility of capitalism," he told Time. "You take big swings at things that are important to get done." By the time he left Y Combinator in 2019, it had fostered success stories including Dropbox, DoorDash, Airbnb, and Stripe — companies worth, collectively, hundreds of billions of dollars.
But the incubator, for all its cultural influence, was no longer the main event. Something else had captured Altman's attention entirely.
Self-belief is immensely powerful. The most successful people I know believe in themselves almost to the point of delusion. Cultivate this early.
— Sam Altman, 'How To Be Successful' (blog post)
The Nonprofit That Couldn't Stay Nonprofit
The founding of OpenAI in December 2015 had the peculiar quality of a creation myth written in advance. Altman,
Elon Musk, Greg Brockman, Ilya Sutskever, and several other prominent AI researchers announced a nonprofit dedicated to developing artificial general intelligence — a machine that could match or exceed human cognitive ability across domains — and ensuring it benefited "humanity as a whole." Early backers included Peter Thiel, Reid Hoffman, Amazon Web Services, and Infosys, committing a collective $1 billion. The nonprofit structure was deliberate: free of financial expectations, unconstrained by the obligation to turn a profit, the organization could pursue safety-first research without the distortions of commercial pressure.
For a few years, nobody much noticed. OpenAI published papers, released research tools like OpenAI Gym, and in 2017 spent an estimated $7.9 million on cloud computing — an early sign of the computational appetite on which everything would later depend. In 2018, the organization unveiled the first iteration of its generative pre-trained transformer — GPT — a large language model trained on vast textual data. It was interesting to specialists and invisible to everyone else.
Then two things happened that changed the trajectory. First, Elon Musk departed the board in 2018, citing a potential conflict of interest with Tesla. Reports later surfaced alleging the real reasons were internal conflicts, possibly a failed attempt by Musk to take direct control. Musk would become OpenAI's most vocal and litigious critic, eventually filing suit accusing the company of anticompetitive practices and submitting a $97.4 billion bid to acquire it — a bid the board publicly rejected.
Second, and more consequentially, OpenAI realized that the cost of building what it wanted to build would vastly exceed what nonprofit fundraising could support. In 2019, the organization created a hybrid structure — a for-profit subsidiary, capped at returning 100 times investors' money, housed within the original nonprofit — and entered a strategic partnership with Microsoft that included a $1 billion commitment. Altman became CEO. The quiet research lab had begun its transformation into something far more complicated: a company with nonprofit ideals, for-profit incentives, and the most powerful language model on the planet sitting in its servers.
The Thirty Days That Changed Everything
The way Altman tells the story, the launch of ChatGPT on November 30, 2022, was almost an afterthought. "We had been watching people use the playground feature of our API," he wrote later, "and knew that developers were really enjoying talking to the model." They thought building a demo around that experience might show people something important about the future. They called it — after narrowly avoiding the name "Chat With GPT-3.5" — ChatGPT, and tweeted a link.
Within five days, more than one million people had tried it. Within two months, it had reached 100 million users, a milestone that had taken TikTok nine months and Instagram roughly two and a half years. The growth curve, Altman later wrote, was "like nothing we have ever seen — in our company, our industry, and the world broadly."
What made ChatGPT different from every chatbot that preceded it was not a single capability but a texture — the fluid, slightly uncanny quality of its responses, the way it could hold long dialogues, write business plans, compose poetry, debug code, and generate recipes with what felt, to the untrained reader, like comprehension. It wasn't perfect. It hallucinated facts. It couldn't cite sources. It had almost no knowledge of anything after 2021. But it produced its imperfect output in about a second, with little to no specific knowledge required from the user, and a lot of what it generated wasn't half bad. That was enough.
The launch did something no previous AI advance had accomplished: it made the abstraction concrete. Suddenly the question was not whether artificial intelligence would transform the economy but how quickly, and who would be transformed first. Google panicked. Meta scrambled. Amazon reoriented. And Altman — the college dropout, the failed app founder, the startup whisperer — became, seemingly overnight, the most important person in technology.
We always knew, abstractly, that at some point we would hit a tipping point and the AI revolution would get kicked off. But we didn't know what the moment would be. To our surprise, it turned out to be this.
— Sam Altman, 'Reflections' (blog post, January 2025)
The Cathedral and the Bazaar
There is a tension at the heart of Altman's project that he has never fully resolved, because it may be unresolvable. OpenAI was founded to prevent the concentration of AGI in corporate hands. It is now a corporation. It was structured as a nonprofit to remain free of commercial pressure. It is now valued at $300 billion, the highest valuation a private technology company has ever achieved. Its declared mission is to benefit "all of humanity." Its largest investor, Microsoft, owns approximately 27% of the for-profit entity and retained commercial rights to OpenAI technologies through 2032.
The structural gymnastics required to maintain the fiction that these things are not in conflict have been extraordinary. In October 2025, OpenAI completed its transition to a public benefit corporation, converting OpenAI Global LLC into OpenAI Group PBC. The nonprofit OpenAI Foundation retained a 26% stake and the authority to appoint and remove board members, including a safety committee empowered to delay or block the release of new models. The Foundation's stake — valued at roughly $130 billion — made it, on paper, the most valuable charitable foundation in the United States, surpassing the endowment of the Gates Foundation.
Altman characterized the restructuring as giving OpenAI "the flexibility to scale safely and sustainably." Board chair Bret Taylor — the former Salesforce co-CEO who had replaced the directors ousted after the November 2023 crisis — called it "a direct path to major resources before AGI arrives." Critics were less sanguine. The nonprofit had changed its mission statement six times in nine years; by the time it restructured into a for-profit, the word "safely" had been removed as a core value. A group of current and former employees had already issued an open letter, in June 2024, calling for greater transparency about AI risks and stronger protections for whistleblowers.
The question that hangs over all of this is not whether Altman believes in the mission — his conviction seems genuine, almost devotional — but whether the structure he's built can possibly serve it. "Given the possibilities of our work," he has said, "OpenAI cannot be a normal company." The problem is that it increasingly resembles one.
The Dealmaker's Metabolism
What distinguishes Altman from every other figure in the AI race — from Demis Hassabis at Google DeepMind, from Dario Amodei at Anthropic, from the faceless teams at Meta's FAIR lab — is not technical vision. He has none in the traditional sense. He does not have an undergraduate degree, let alone the computer science PhD that is increasingly table stakes in his field. "I don't do the research," he told Adam Grant. "I don't build the products. I make some decisions, but not most of them. The thing I get to build is the company."
What he has instead is a dealmaker's metabolism — an ability to operate simultaneously on multiple fronts at speeds that would paralyze a normal executive. Consider a partial inventory of OpenAI's activity in 2024 and 2025 alone: partnerships with Reddit, News Corp, and Apple; a chip-supply agreement with AMD for up to six gigawatts of computing capacity; a $38 billion, seven-year deal with Amazon Web Services; the acquisition of Multi (an AI video conferencing startup) and the domain Chat.com; the launch of SearchGPT, GPT-4o, OpenAI o1, and ChatGPT Pro; the $6.5 billion acquisition of Jony Ive's IO hardware startup; the announcement of the $500 billion Stargate data center project with SoftBank; a $40 billion funding round — one of the largest in history — valuing the company at $300 billion; the release of GPT-5, followed by an apology and a temporary halt of sign-ups after privacy breaches; and the introduction of Atlas, a free web browser for Mac that replaces the address bar with natural language queries.
This is not a product roadmap. It is a campaign. Ben Thompson of Stratechery, interviewing Altman in October 2025, proposed that OpenAI was positioning itself to become "the Windows of AI" — the universal interface layer between users and intelligence. Altman, characteristically, resisted the historical analogy. "I always struggle with the historical analogies," he said. But the logic was clear: OpenAI was building not just models but infrastructure, not just software but a platform, not just a platform but an ecosystem that would be as difficult to dislodge as Microsoft's had been in the 1990s.
The question was whether all of this was too much, too fast, for a company that was — even at $300 billion — burning billions of dollars annually and had never turned a profit.
The Optimist's Wager
In an interview with the Irish Times at his Napa Valley farm in May 2025, Altman described his position as "the coolest, most important job maybe in history." He said it without visible irony. He compared the AI revolution not to the Industrial Revolution, as he had previously, but to the Renaissance, arguing that "the explosion in creativity" made it the apter analogy. He said he used to think about the consequences; now he thought about the possibilities. His confidence had the quality of a man who has been right about one very big thing and has drawn from that rightness a conviction that extends, perhaps, further than the evidence supports.
"I am a techno optimist and science nerd," he told Adam Grant, "and I think it is the coolest thing I could possibly imagine and the best possible way I could imagine spending my work time to get to be part of what I believe is the most interesting, coolest, important scientific revolution of our lifetimes. So like what a fucking privilege."
His optimism is not naive. He has signed a letter describing AI as an extinction risk for humanity. He has testified before Congress that "if this technology goes wrong, it can go quite wrong." He stockpiles guns, gold, potassium iodide, antibiotics, and batteries against civilizational collapse. He once told The New Yorker that when his friends get drunk, they discuss the ways the world might end — man-made viruses, AI uprisings, resource wars. This is a man who prepares for the apocalypse as a hobby and builds its potential instruments as a vocation. The dissonance is not lost on him. He simply considers it the price of agency.
The bet — the optimist's wager — is that deploying AI broadly and early, even imperfectly, is safer than developing it in secret. "We could have gone off and just built this in our building here for five more years," he told The Atlantic, "and we would have had something jaw-dropping." But the public wouldn't have been able to prepare. ChatGPT was not just a product launch. It was, in Altman's framing, a public service: a warning shot dressed as a chatbot.
Whether this is genuine conviction or the most sophisticated rationalization in the history of technology depends on who you ask. What is not in dispute is that the man making the argument is very, very good at making arguments.
The World After
In February 2025, Altman published a blog post called "Three Observations." The third observation read: "The socioeconomic value of linearly increasing intelligence is super-exponential in nature." He was describing, in the language of economics, a claim that verges on the theological: that intelligence, once it begins to compound, will produce abundance so vast that the current frameworks of scarcity and competition will simply cease to apply. Diseases cured. Fusion achieved. Everyone on earth capable of accomplishing more than the most impactful person can today.
The counterevidence arrives with metronomic regularity. In August 2025, OpenAI released GPT-5. Users reported increases in factual errors, privacy breaches, and misleading responses. Sensitive data were exposed through third-party applications. Regulators in the U.S. and European Union opened inquiries. Altman apologized and temporarily halted sign-ups while fixes were deployed. Meanwhile, the human costs accumulated in smaller, less visible ways — a fourteen-year-old who killed himself after falling in love with a chatbot; a sixteen-year-old named Adam Raine, overtaken by anxiety, who persuaded ChatGPT to answer questions about suicide; a mother who wrote about her daughter confiding in an AI therapist named Harry who "catered to Sophie's impulse to hide the worst."
"This is not all going to be good," Altman said of the teenager's suicide. "There will be problems."
In February 2026, Altman posted on X about building an app with Codex, OpenAI's coding agent. It was "very fun" at first. Then the system suggested feature ideas that were better than his own. "I felt a little useless, and it was sad," he wrote. "I am sure we will figure out much better and more interesting ways to spend our time, but I am feeling nostalgic for the present." The post went viral, less for its vulnerability than for the rage it provoked — food writers whose careers had evaporated, headhunters watching their industry dissolve, developers who felt they were being asked to celebrate the tool that was eliminating them.
Aditya Agarwal, the former CTO of Dropbox, reported a similar experience after a weekend with Anthropic's Claude: "We will never ever write code by hand again. It doesn't make any sense to do so." He described himself as "happy, but disoriented… sad and confused." A veteran Microsoft researcher named Chris Brockett was rushed to the hospital, believing he was having a heart attack, after encountering an AI system that could do much of what he'd spent decades mastering.
The future Altman is building arrives unevenly. For the scientists whose productivity doubles with reasoning models, it is exhilarating. For the food writer watching AI churn out "hollow copies" of her work trained on data taken "without anyone's consent," it is a dispossession. For the parent of a dead child, it is something else entirely.
The Currency of the Future
"I think compute is going to be the currency of the future," Altman told Lex Fridman. "I think it'll be maybe the most precious commodity in the world."
He speaks often in this register — the prophetic mode, declarative and sweeping, the cadence of someone who has thought further ahead than his listeners and is trying, patiently, to bring them along. His friend Paul Buchheit, the creator of Gmail, once told him that someday there would be "human money and machine money" — completely separate currencies, one indifferent to the other. Altman doesn't expect this literally, but he considers it "a very deep insight."
What he does expect is that AI will, over time, transform the entire economy, generate new categories of work humans cannot yet imagine, and do so while the price per unit of intelligence continues to fall roughly tenfold every year. "We've been able to drive the price per unit of intelligence down by roughly a factor of 10 every year," he said. "Can't do that for that much longer. But we've been doing it for a while."
Moore's Law changed the world at 2x every 18 months. This, he argues, is "unbelievably stronger."
The scale of the infrastructure bet reflects this conviction. The Stargate project — $500 billion in data center investment with SoftBank and others. The AMD deal for six gigawatts of computing capacity, with the first gigawatt scheduled for the second half of 2026 and warrants giving OpenAI the option to acquire up to 10% of the chipmaker. The $38 billion AWS agreement. He is building, in effect, the physical substrate for a new civilization — one in which intelligence is abundant, cheap, and everywhere, and in which the chokepoint is not ideas but the hardware to run them.
At the India AI Impact Summit in February 2026, when asked about the electricity required for all of this, Altman offered a characteristically provocative comparison: "It also takes a lot of energy to train a human. It takes, like, 20 years of life, and all of the food you eat during that time before you get smart." Some in the crowd laughed. Others did not.
He is forty years old. In 2025, he and his husband, Oliver Mulherin, a software engineer he married in 2024, welcomed their first child, born via surrogacy. "Probably the same thing every other soon to be dad has ever wanted for his kid," he had said of his hopes for the world his child would inherit. "Abundance was the first word that came to mind."
When Altman is retired on his Napa ranch, watching the plants grow, a little bored, he says he'll think back on how cool it was to do the work he dreamed of since he was a little kid. On one of those Aberdeen or Arundel streets in Hillcrest, a boy who fixed the VCR at three is already receding into myth, replaced by the man who turned a research lab into the most valuable startup in history and, in doing so, kicked off the conversation about what it means to share the planet with something smarter than yourself. Whether that conversation ends well is the only question that matters. Sam Altman is betting everything — his company, his reputation, the structure of the future — that it will.
On the kitchen counter of the Napa farmhouse, a crib chosen with the help of ChatGPT.