The Disease in the Dish
In 2004, the John D. and Catherine T. MacArthur Foundation called Daphne Koller to inform her she had been selected for its fellowship — the so-called "genius grant," an unrestricted $500,000 awarded to individuals of "exceptional creativity." Koller's reaction was not triumph. It was something closer to vertigo. "I'd always had a very aspirational definition of what genius means," she would later confess. "That was Albert Einstein or Leonardo da Vinci. It wasn't me." She was thirty-six, already the holder of a named chair in Stanford's Computer Science Department, already the author of foundational work in probabilistic graphical models that would reshape how machines reason under uncertainty. She had received the Presidential Early Career Award, the Sloan Fellowship, the IJCAI Computers and Thought Award. None of it had dislodged the conviction that she was, at root, an academic — the daughter of academics, destined to retire as a professor emeritus, "like my father." The MacArthur changed something. Not immediately, and not in the way the foundation likely intended. "I felt very humbled and unworthy," Koller has said, "and one might even say that much of my career journey following the MacArthur Award was an attempt to kind of pay it back, to prove myself as having deserved that." This is the through-line of her career: not genius in the Romantic sense, but a relentless, almost anxious compulsion to justify the gifts she has been given — gifts of intellect, of institutional position, of being in precisely the right place when the world's problems became soluble — by directing them at the largest possible canvas. Education. Disease. The machinery of biological life itself.
The canvas she chose last, and perhaps most consequentially, is a company called insitro — founded in 2018, headquartered in South San Francisco, and premised on a deceptively simple idea: that the reason drug development fails so catastrophically, with clinical trial success rates hovering in the single digits and the cost of bringing a single new therapy to market exceeding $2.5 billion, is not that we lack smart scientists or good intentions. It is that we lack the right data. And that the way to generate the right data is to build, at industrial scale, what Koller calls "disease in a dish" — cellular models of human illness, produced by the millions, read by machines at a resolution no human eye could match, and interpreted by the same machine learning techniques that Koller has spent three decades refining.
It is, in other words, an attempt to do for biology what calculus did for physics: provide a mathematical framework that makes predictions possible in a domain that has, for centuries, resisted them.
By the Numbers
Daphne Koller
300+: Refereed publications across Science, Cell, Nature Genetics, NeurIPS, ICML
150+: h-index
$743M+: Total funding raised by insitro, including $400M Series C in 2021
162M+: Learners reached by Coursera worldwide
18: Years on Stanford's Computer Science faculty
1,272: Pages in Probabilistic Graphical Models, the field-defining textbook
4: Elected memberships and fellowships across national academies and societies (NAS, NAE, AAAS, AAAI)
The Third-Generation PhD
To understand the scope of Koller's ambition, you have to understand how improbable it was for someone formed entirely within the cloister of the academy to leave it — not once, but three times, each departure more radical than the last.
She was born on August 27, 1968, in Jerusalem, Israel. She describes herself as a "third-generation PhD," and the phrase is not throwaway biographical color; it is an identity claim, a declaration of the waters she swam in. Her parents were academics. The expectation — which she internalized so completely that it functioned as gravity — was that she would study, publish, teach, accumulate honors, and eventually reach the terminus of an intellectual life well lived: professor emeritus. The trajectory was clear before she could articulate it.
Koller was precocious in the way that, in retrospect, seems foreordained. She completed her Bachelor's degree in Mathematics and Computer Science at the Hebrew University of Jerusalem in 1985. She was seventeen. Her Master's, also from Hebrew University, followed in 1986, at eighteen. Before pursuing her doctorate, she served as a lieutenant in the Israel Defense Forces from 1986 to 1989 — a period she rarely discusses in interviews, though the discipline and compression of those years surely left their mark. She arrived at Stanford in the fall of 1989 to begin her PhD under the supervision of Joseph Halpern, working on the problem that would become her life's obsession in various guises: how to reason rigorously about things you don't know for certain.
Her dissertation, From Knowledge to Belief, won the Arthur L. Samuel Award for best thesis in Stanford's Computer Science Department in 1994. She then decamped to UC Berkeley for a postdoctoral fellowship with Stuart Russell — one of the world's foremost AI researchers, whose own textbook on artificial intelligence would become the field's standard reference. Russell, who had grown up in the United Kingdom before training at Stanford and settling at Berkeley, shared with Koller an appreciation for the foundational: the conviction that if you got the mathematics right, the applications would follow.
In September 1995, Koller returned to Stanford as an assistant professor. She was, by her own account, "the first machine learning hire into Stanford's computer science department." This is a detail worth pausing on. In 1995, you could not say you were "doing AI" without inviting skepticism; the field was in the grip of one of its periodic winters, and machine learning was still viewed by many as, in Koller's memorable phrasing, "fringe" — or, as her teenage daughter would later put it, "sus." The department's orientation was toward logic, formal methods, the symbolic tradition that had dominated AI since the 1960s. Koller was hired to pull it toward something else entirely: toward probability, toward data, toward the messy, uncertain real world.
The Language of Uncertainty
The work that made Koller's academic reputation, and that continues to underlie everything she has built since, is in probabilistic graphical models — a framework for representing and reasoning about complex systems under uncertainty. The core insight is deceptively elegant: most real-world problems involve vast numbers of interrelated variables, and the relationships between them are not deterministic but probabilistic. A patient's genome, their diet, their inflammatory markers, their exposure history — these are not isolated facts but nodes in a dense, tangled graph of conditional dependencies. The question is not whether variable A causes outcome B, but how likely outcome B is, given what we know (and don't know) about A, C, D, and everything else.
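That "graph of conditional dependencies" can be made concrete in a few lines. Below is a toy three-variable chain (an invented structure with invented probabilities, not an example from Koller's work): a gene variant influences inflammation, which influences disease. The graph lets you answer "how likely is disease, given the variant?" by marginalizing over the unobserved middle variable rather than enumerating a full joint table.

```python
# Toy Bayesian network: variant (G) -> inflammation (I) -> disease (D).
# All structure and numbers are hypothetical, chosen only to illustrate
# inference by marginalization in a graphical model.

P_I_given_G = {True: 0.7, False: 0.1}   # P(inflammation | variant present?)
P_D_given_I = {True: 0.4, False: 0.02}  # P(disease | inflammation present?)

def p_disease_given_variant(g: bool) -> float:
    """P(D = true | G = g), summing out the hidden variable I."""
    return sum(
        (P_I_given_G[g] if i else 1 - P_I_given_G[g]) * P_D_given_I[i]
        for i in (True, False)
    )

risk_with = p_disease_given_variant(True)     # 0.286
risk_without = p_disease_given_variant(False) # 0.058
```

With three variables the savings are trivial; with thousands of tangled variables, exploiting the graph structure is the difference between a tractable query and an impossible one, which is precisely what Koller's framework formalized.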
Koller, working with Nir Friedman — an Israeli computer scientist on the faculty of the Hebrew University of Jerusalem — spent years developing the theoretical foundations and practical algorithms for this framework. Friedman, who shared Koller's grounding in Jerusalem's mathematical culture and her appetite for moving between abstraction and application, was the ideal collaborator. Their partnership culminated in Probabilistic Graphical Models: Principles and Techniques, published by MIT Press in July 2009 — a 1,272-page monument to a way of thinking about the world that would prove far more consequential than its sales figures might suggest.
The book is not light reading. It spans Bayesian networks, Markov networks, dynamic models, hybrid models, algorithms for exact and approximate inference, methods for learning structure and parameters from data, and frameworks for causal reasoning and decision-making. It is, in Kevin Murphy's assessment from the University of British Columbia, "likely to become a definitive reference for all who work in this area." It did. But more importantly, it codified an intellectual stance — a way of insisting that uncertainty is not a bug to be patched over but the fundamental condition of interesting problems — that would become the foundation for Koller's later ventures into education and medicine.
The research group she built at Stanford, known with characteristic dry humor as DAGS (Daphne's Approximate Group of Students), pushed these ideas into increasingly ambitious territory: object-oriented Bayesian networks for handling hierarchical structure, probabilistic relational models for reasoning about databases, factored Markov decision processes for sequential planning, multi-agent influence diagrams for incomplete-information games. The applications spread across computer vision, robotics, natural language processing, and — with growing intensity — computational biology and medicine.
With most drugs, we do not understand why they work.
— Daphne Koller, Lex Fridman Podcast #93
Two projects from this period would prove prophetic. Working with a neonatologist, Koller developed models to predict survival rates for premature infants — babies so small they fit in a human hand, weighing only a few hundred grams — using only non-invasive bedside monitor data: heart rate, respiratory rate, oxygen saturation. The models could make useful predictions from just the first few days of a baby's life. Separately, with a PhD student who was also a pathologist, she built machine learning systems to analyze tumor images from breast cancer patients, predicting five-year survival rates. The critical finding: the features most predictive of survival were not in the tumor cells themselves, as pathologists had assumed for a century, but in the surrounding tissue — what would later be recognized as the tumor microenvironment. "Our paper was one of the earliest pieces of evidence supporting that," Koller has noted, with the understated pride of someone who knows the finding has since been validated by an entire subfield.
These were not side projects. They were rehearsals.
The Accidental Entrepreneur
The origin story of Coursera has been told often enough that it has acquired the smooth patina of myth. In 2009 or 2010, Koller began to feel that her Stanford students were not receiving the best learning experiences in increasingly crowded classes. She and her colleague Andrew Ng — a British-born, Hong Kong–raised AI researcher who had joined Stanford's faculty in 2002 and would later lead Google Brain and Baidu's AI division — decided to experiment with a "flipped classroom" approach: putting lecture material online so that in-person time could be devoted to interaction and problem-solving.
Ng, who was simultaneously teaching a machine learning course, offered it for free online. One hundred thousand people enrolled. To match that number at Stanford, he would have had to teach the same class for 250 years. The implication was staggering — and, for Koller, it was also personal. "I am one of the lucky people," she told the audience at TEDGlobal in Edinburgh in June 2012. "Most people, of course, are not." She described a stampede at a South African university earlier that year: thousands of people lined up to secure a place, twenty injured, a woman killed trying to get her son into school. Even in the United States, she noted, tuition was rising at twice the rate of healthcare costs, while only a little over half of recent graduates were working in jobs that required a college degree.
The conviction that crystallized for Koller in those months was one she would frame, characteristically, through someone else's words. Quoting Thomas Friedman's New York Times column: "Big breakthroughs happen when what is suddenly possible meets what is desperately necessary."
In 2012, Koller and Ng founded Coursera. She took a leave of absence from Stanford — intended to last two years. "I was absolutely petrified," she later admitted. "Not only had I never founded a company, my career journey was such that I'd never even been at a company." The confession is striking from someone who, by that point, had received nearly every honor her field could bestow. The MacArthur had expanded her sense of obligation; it had not entirely banished the imposter syndrome. The experience of Coursera would.
The company grew with a velocity that surprised even its founders. By the time Koller gave her TED Talk, Coursera had 640,000 students from 190 countries, who had viewed 14 million videos and submitted 6 million quizzes. She updated her slide deck a week before the talk to reflect 1.4 million enrollments; by the day of the talk, it was already 1.5 million. Within a year, TIME Magazine named Koller and Ng among the 100 Most Influential People in the World. TechCrunch called Coursera the "Best Startup of 2012." The New York Times declared it "the year of the MOOC."
But the hype was built on a misunderstanding. The press assumed MOOCs would destroy universities. Koller had never made that claim. "That isn't the right way to look at it," she told Knowledge@Wharton in 2014. "Our target audience is people who are primarily working adults and are not currently candidates for traditional forms of education." The backlash, when it came, was based on the same misunderstanding. Critics pointed out that most MOOC enrollees already held college degrees, that completion rates were low, that the revenue model was unclear. Robert Meister, president of the Council of University of California Faculty Associations, wrote a barbed open letter proposing that Koller teach a course called "The Implications of Coursera's For-Profit Business Model for Global Public Education."
Koller's response, typically, was to keep building. By 2014, Coursera had over 10 million users. By the time she stepped away from operational leadership, the platform had partnered with more than 100 universities and would eventually reach over 162 million learners. It went public on the New York Stock Exchange in 2021.
What Koller took from the experience was not primarily the lessons of startup management — the importance of hiring experienced leaders earlier, the criticality of deliberate culture-building, the dark side of a company's strengths taken to extremes — though she learned all of those. What she took was a deeper conviction about the relationship between data and learning. Coursera had given her, for the first time, the ability to observe at massive scale how people actually learned — not in theory, not through a controlled experiment with a sample size of forty, but through millions of interactions, each one a data point. She could see where students stumbled, what kind of retrieval practice worked, how peer grading, properly incentivized, correlated with instructor grading. "It caused me to completely reshape the way I thought about teaching," she said. "And to think, 'Why should the learner care?' which is not a question I would ask myself before."
The phrase is revealing. For two decades, Koller had been building mathematical frameworks for reasoning under uncertainty. Now she had built a system that generated the data to make such reasoning practical — at a scale the academy had never imagined. The question was where to apply that insight next.
The Alphabet Detour
In 2016, Koller left Coursera and, in her words, "raised my head up over the trenches for the first time" since the company's founding. What she saw was that AI was transforming the world — but barely touching the life sciences. "I felt like one of the main reasons for that is that there just aren't that many people who spoke both languages," she told the Regeneron Genetics Center's podcast. "And I was in the privileged position having spent a large chunk of my career in each of those two disciplines."
She joined Calico Labs — the Alphabet subsidiary created by Google in 2013 to study aging and age-related diseases — as its first Chief Computing Officer. Calico was run by Art Levinson, the former CEO of Genentech and chairman of Apple's board, a molecular biologist turned executive who had built one of the biotech industry's most successful companies. Koller's role was to bring computational methods to bear on the deep biology Calico was pursuing.
She stayed less than two years. But the experience was formative. At Calico, Koller got her first real exposure to the machinery of drug discovery and development — and to its staggering inefficiency. She saw firsthand the chasm between the computationalists, who believed their models could solve anything, and the biologists, who knew how little was actually understood about the systems they were trying to manipulate. She learned the vocabulary, the timelines, the regulatory constraints. And she learned what frustrated her: wasted effort. The pharmaceutical industry, she came to believe, was running experiment after experiment without the computational infrastructure to learn systematically from failure — each drug program essentially starting from scratch, with little knowledge carrying over from the success or failure of the one before.
The personal dimension was also present, though Koller is characteristically restrained in discussing it. Her father suffered from an autoimmune condition. "The experience of seeing him go through that — and seeing the limitations of what medicine could offer — that definitely informed my interest," she told the HLTH Matters podcast. The motivation was not grief but exasperation: the sense that the tools to do better existed, or nearly did, if only someone would assemble them correctly.
A Company Built on the Right Data
Insitro — the name a play on "in silico" and "in vitro," computation and biology — was founded in 2018 with a thesis that was, by Silicon Valley standards, almost perversely patient. The problem with AI in drug discovery, Koller argued, was not that the algorithms were insufficiently powerful. It was that the data they were trained on was garbage. Or, more precisely, it was data that had been generated for other purposes — clinical records, academic studies, genomic databases — and was therefore full of confounders, biases, missing values, and irrelevant noise. "A related assumption is that the data we have already collected — text and images from the web — contain all the answers that we need," Koller told the Observer in 2025. "But the data that we need to disentangle biology and derive truly novel insights mostly does not exist yet. We need to generate the right data — data that is fit-for-purpose for machine learning."
This was the founding insight of insitro: to build, from the ground up, a "bio-data factory" — an automated laboratory capable of generating massive, high-quality datasets specifically designed to be interpretable by machine learning. The company takes human stem cells, reprograms them into disease-relevant cell types (neurons, liver cells, cardiac cells), subjects them to genetic perturbations or chemical treatments, and then reads the results at high throughput using imaging, gene expression profiling, and other modalities. The output is not a handful of data points from a low-powered academic experiment. It is millions of data points, structured and annotated at a resolution that allows machine learning models to discern patterns no human pathologist could see.
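The phrase "structured and annotated" is doing real work in that description. A sketch of the idea, with an entirely invented schema (these field names and values are illustrative, not insitro's actual data model): every readout is born with its experimental context attached, so a perturbation's effect can be computed as a clean contrast against matched controls rather than mined out of confounded observational data.

```python
# Hypothetical record type for a perturbation experiment; the schema is
# invented for illustration. The point: data generated *for* ML carries
# its own controls and annotations, so contrasts are well-defined.
from dataclasses import dataclass

@dataclass(frozen=True)
class PerturbationReadout:
    cell_type: str      # e.g. "iPSC-derived hepatocyte"
    perturbation: str   # genetic edit or compound, or "control"
    features: tuple     # fixed-length imaging / expression features

readouts = [
    PerturbationReadout("iPSC-derived hepatocyte", "KO:GENE_A", (0.8, 1.2)),
    PerturbationReadout("iPSC-derived hepatocyte", "control",   (0.1, 0.9)),
]

def effect(readouts, perturbation):
    """Mean feature shift of a perturbation versus matched controls."""
    treated = [r.features for r in readouts if r.perturbation == perturbation]
    control = [r.features for r in readouts if r.perturbation == "control"]
    return tuple(
        sum(t[i] for t in treated) / len(treated)
        - sum(c[i] for c in control) / len(control)
        for i in range(len(treated[0]))
    )
```

Scale the same pattern to millions of records across cell types and perturbations, and the contrasts become training signal rather than statistical archaeology.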
The company raised $100 million in Series A funding from investors including GV (formerly Google Ventures), Andreessen Horowitz, and Bezos Expeditions. By 2021, it had raised $400 million in a Series C round, bringing total funding to over $743 million — making it one of the most well-funded AI biotech companies in the world.
What I really wanted to build was a company that rethought drug discovery and development from the ground up, using machine learning as a foundational tool.
— Daphne Koller, HLTH Matters Podcast
The strategic partnerships followed. A collaboration with Gilead Sciences targeted nonalcoholic steatohepatitis (NASH, since renamed MASH), a liver disease that affects tens of millions worldwide and for which no approved therapy existed. The deal included up to $1 billion in milestones. A $25 million collaboration with Bristol Myers Squibb focused on genetic targets for ALS — amyotrophic lateral sclerosis, the neurodegenerative disease that, as Fortune's Diane Brady noted, is "heartbreaking to see." A partnership with Moorfields Eye Hospital in London aimed to build AI foundation models for neurodegenerative eye diseases.
What distinguishes insitro from the dozens of other "AI for drug discovery" companies that have proliferated since 2018 is not the algorithms — the machine learning techniques it uses are largely known — but the insistence on generating its own data. Most competitors take existing biological datasets and try to extract signal from noise. Koller's argument is that this approach is fundamentally limited, because the signal isn't in the noise. It's in experiments that haven't been run yet, on systems that haven't been built yet, at scales that haven't been attempted yet.
"This is not a niche technology," Koller has said. "It's going to be like computers — you're going to use it in every place, and the value of the technology will be limited primarily by your imagination of where it can be deployed."
The Reclassification of Disease
One of Koller's most provocative arguments — and the one that, if validated, could have the most sweeping implications — is that the fundamental categories of human disease are wrong. Not slightly wrong. Ontologically wrong.
"Fifteen years ago, you had breast cancer," she told the Possible podcast. "Now you no longer have breast cancer because it's not one disease. You might have a BRCA positive breast cancer, or you might have a HER2+ breast cancer, or a triple-negative. And each of those has a different therapeutic intervention that has much greater efficacy because it ties back to some underlying core biology — which is different in different people."
The implication, she argues, extends far beyond oncology. NASH is not one disease. ALS is not one disease. Alzheimer's is not one disease. What we call "diseases" are often crude symptom clusters — the medical equivalent of calling every vehicle that moves on four wheels a "car." The reason clinical trials fail at such catastrophic rates is not that the drugs don't work. Some of them probably do work — but only for a specific biological subtype of the disease, a subtype that gets diluted and lost when averaged across an entire trial population.
Insitro's approach is to use machine learning to discover these subtypes — to identify, from the high-dimensional data generated by its bio-data factory, the underlying biological signatures that distinguish different forms of what we currently treat as a single disease. If successful, this could reshape not just drug development but the very taxonomy of human illness.
"You can only do that with a lot of very granular data," Koller argues. "And the only way to interpret that granular data is with the tools of AI because the human mind just simply cannot encompass that complexity."
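The subtype-discovery idea can be caricatured in a few lines of code. In this toy (entirely synthetic data; real phenotypic readouts are vastly higher-dimensional), two hidden "subtypes" of one nominally single disease sit in a 2-D feature space, and a plain k-means clustering recovers them from the measurements alone, with no prior labels. This is a generic illustration of unsupervised subtyping, not insitro's method.

```python
import random

random.seed(0)
# Two synthetic subtypes of one "disease", well separated in feature space.
subtype_a = [(random.gauss(0.0, 0.5), random.gauss(0.0, 0.5)) for _ in range(50)]
subtype_b = [(random.gauss(5.0, 0.5), random.gauss(5.0, 0.5)) for _ in range(50)]
points = subtype_a + subtype_b  # the pooled "trial population"

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, iters=20):
    # Deterministic init: first and last point (one lands in each blob here).
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            clusters[0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1].append(p)
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in clusters
        ]
    return centers, clusters

centers, clusters = kmeans(points)
# The recovered centers sit near (0, 0) and (5, 5): the hidden subtypes
# fall out of the data without ever having been labeled.
```

Averaged across the pooled population, a treatment that works only on one subtype looks like a failure; split by recovered cluster, the signal reappears. That, in miniature, is Koller's argument about clinical trials.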
The claim is bold, possibly hubristic, and — given her track record — impossible to dismiss.
Exponential Curves and AI Winters
Koller has lived through multiple cycles of AI hype and despair, and this history has made her simultaneously more optimistic and more cautious than many of her peers. She graduated with her PhD in 1993, during one of AI's periodic winters. "When I graduated with a PhD in a field that wasn't really AI at the time, but is now all of AI — which is the field of machine learning — you weren't allowed to say you were doing AI," she recalled. "It was kind of like fringe." At Berkeley, where she did her postdoc, the preferred euphemisms were "cognitive computing" or "statistical learning theory."
She watched the field's fortunes reverse, slowly and then all at once. The deep learning revolution of the 2010s, powered by GPUs and massive datasets, vindicated the probabilistic and data-driven approaches she had championed for decades. But Koller is careful to distinguish between domains where AI has achieved transformative results — natural language processing, image recognition, game-playing — and domains where the data infrastructure does not yet exist.
"I think we've been living on an exponential curve for multiple decades," she told Eric Topol in 2024, "and the thing about exponential curves is they are very misleading things. In the early stages people basically take the line between whatever we were last year, and this year and they interpolate linearly, and they say, God, things are moving so slowly. Then as the exponential curve starts to pick up... people realize that even with the linear interpolation where we'll be next year is just mind blowing."
Her concern — "the thing that keeps me up at night" — is not the distant specter of superintelligence. It is something more immediate and more insidious: "the erosion of rigor from the seductive plausibility of generative AI." In a scientific setting, she warns, "an AI hallucination isn't just an error; it's a convincing falsehood that can launch a multimillion-dollar research program down the wrong path. These models are optimized for fluency, not factual accuracy, creating a powerful 'illusion of causality.'"
This is vintage Koller: the person who built her entire career on the mathematical formalization of uncertainty, now worried that the most powerful tools ever created for generating plausible-sounding answers might seduce us into forgetting that plausibility and truth are not the same thing.
The Bridge Builder
If there is a single image that captures Koller's essential quality, it is that of a bridge — not the metaphorical kind beloved of corporate brochures, but the structural kind: a physical object that must bear weight, withstand stress, and connect two sides of a chasm that are, by definition, separated.
On one side: the world of machine learning, with its mathematical elegance, its hunger for data, its breathtaking speed. On the other: the world of biology and medicine, with its irreducible messiness, its regulatory caution, its human stakes. Koller stands at the midpoint, and the strain is visible if you know where to look. She insists, over and over, that AI is not a "magic wand." That the physical world, where "bits meet atoms," is slower and more complex than the virtual one. That the data we need "mostly does not exist yet." That "with most drugs, we do not understand why they work." These are not the statements of a techno-utopian. They are the statements of someone who has spent enough time in both worlds to know that the chasm is real, and that bridging it will require not just cleverness but patience, institutional design, and an almost monastic commitment to generating ground truth.
The team she has built at insitro reflects this conviction. It spans functional genomics, lab automation, medicinal chemistry, and machine learning — deliberately diverse in discipline, united by a shared bet that the right data, interpreted by the right models, can change the most important metric in the pharmaceutical industry: the probability of success.
Koller has described insitro's most impactful recent moment as seeing "the strong impact of multiple AI-discovered targets in both ALS and MASH on functional endpoints in truly disease-relevant model systems." The language is technical, almost antiseptic. Translated: the machine learning models identified biological targets that, when tested in laboratory systems that actually mimic human disease, moved the needle on outcomes. This is not a theoretical exercise. It is evidence that the approach works — that the compass, as Koller has called ML's role in drug discovery, actually points somewhere useful.
The Parallel Founding
In 2020, even as insitro was scaling its operations and closing major pharma deals, Koller co-founded a second company: Engageli, a digital learning platform designed to improve the quality of online education. The timing — the early months of the COVID-19 pandemic, when every university on the planet was scrambling to move online — was coincidental but resonant.
Engageli is smaller and less celebrated than Coursera or insitro. It has raised over $47 million. Its premise is that the virtual classroom, as most institutions experienced it during the pandemic, was a degraded facsimile of in-person learning — all broadcast, no interaction. Engageli's platform is designed for active learning: small-group breakouts, real-time collaboration, instructor feedback loops that mimic the spontaneous dynamics of a physical classroom.
The founding of Engageli reveals something about Koller that the drug-discovery narrative can obscure: she never stopped caring about education. The thread that runs from her Stanford teaching experiments in 2009, through Coursera, through Engageli, is the same conviction that learning is not a passive activity — that it requires engagement, feedback, and community. "We need to create systems that are trained using reinforcement learning on not just any people, but on students who are in the process of learning," she has said. The sentence is revealing in its conflation of pedagogy and machine learning: for Koller, they are the same problem, expressed in different domains. How do you help a system — whether human or artificial — learn from its own experience, in the presence of uncertainty, with limited data and high stakes?
The Superpower She Almost Didn't Claim
In her 2025 interview with Fortune's Diane Brady, Koller offered a rare moment of personal reflection: "I don't know what genius is, but I can tell you that one of the things that I consider to be my superpower, trying to avoid that female imposter syndrome, is that ability to connect the dots across different disciplines and see connections that are oftentimes maybe obvious in retrospect, but weren't obvious at the time."
The qualifier — "trying to avoid that female imposter syndrome" — is doing a lot of work. Koller has spoken publicly about the challenges of being a woman in computer science, about the importance of men recognizing women's contributions in meetings, about the narrowness of the demographic that controls AI's development. "It's a problem when you have a set of technologies that are going to be so impactful on society and so much of the decision-making is in the hands of a very small, homogenous group of people," she has said. "We're going to miss out on so many opportunities and be vulnerable to so many potential pitfalls if we don't have a diverse group of people shaping these technologies."
But the more revealing statement is the one about connecting dots. This is, in fact, what Koller does — not as a metaphor but as a method. Probabilistic graphical models are, at their core, a formalism for connecting dots: for representing the relationships between variables, for tracing how evidence in one part of a network propagates to inform beliefs in another. Coursera connected the dots between elite institutional knowledge and global learners who had no access to it. Insitro connects the dots between high-throughput biology and computational prediction. Engageli connects the dots between the pedagogy of physical classrooms and the infrastructure of virtual ones.
The pattern is not an accident. It is structure. And the woman who sees it most clearly is also the one most reluctant to call it genius.
The Human in the Loop
When Lex Fridman asked Koller whether most people are good, she answered without hesitation: yes. "I think most people are fundamentally good," she said. "They want to do the right thing. They want to be good parents. They want to be good neighbors." When he asked about the meaning of life, she spoke about her children, about the experience of seeing them grow, about the "privilege of being able to do work that you think matters."
These are not the answers of a techno-utopian or a Silicon Valley disruptor. They are the answers of someone who builds machines to help humans — and who never loses sight of which side of that equation matters more. "I think that the future is in a partnership between the human and the machine," she told Fortune. "I think for every technology that we've constructed in the past, people are like, 'Oh, my God, this is going to take away my job.' And in fact, it did take away a lot of jobs.... But I think human creativity, human innovation, is still something that is an important partner."
The word she reaches for is not "tool" or "instrument" or "platform." It is partner. This is the vision that unifies everything Koller has built: not machines that replace human judgment, but machines that extend it — that hold up a mirror to the complexity of the world and say, here, look, there's a pattern you couldn't see. The breast cancer microenvironment. The premature infant's respiratory signal. The biological subtypes hiding inside what we call a single disease.
Whether insitro's approach will ultimately deliver the transformative medicines Koller envisions remains uncertain. Drug development timelines are long — a decade or more from target identification to approved therapy — and the history of the pharmaceutical industry is littered with promising approaches that failed at the last, most expensive hurdle. The company's partnerships with Gilead, Bristol Myers Squibb, and Moorfields suggest that the industry's most sophisticated players find the bet worth taking. But a bet is what it remains.
Koller knows this. She has built her entire intellectual career on the formal representation of uncertainty. She is not the kind of person who confuses a promising result with a proven one. What she is — what the MacArthur committee perhaps saw two decades ago, what her students and collaborators have seen up close, what the 162 million Coursera learners have benefited from whether they know her name or not — is someone who responds to uncertainty not with paralysis or false certainty, but with a very specific kind of action: the design of systems that learn.
In Portola Valley, California — her listed hometown, in the hills above Stanford — Koller continues to build. The bio-data factory runs. The machine learning models train. The disease-in-a-dish models accumulate, millions of cells on millions of plates, each one a tiny experiment in the probabilistic structure of human illness. Somewhere in that data, she believes, are the patterns that will tell us not just what a disease looks like, but why — and what to do about it. The calculus of biology, awaiting its Newton.
Or perhaps its Koller.