Business & Strategy
Section 1
The Core Idea
The minimum viable product is the smallest version of a new product that can generate real learning about customers. Not the smallest thing you can build. Not the cheapest thing you can ship. The smallest thing that tests your riskiest assumption — the one that, if wrong, makes everything else irrelevant.
Frank Robinson coined the term in 2001 while consulting with startups in Silicon Valley on synchronising product development with customer demand. Steve Blank refined the concept in The Four Steps to the Epiphany (2005), embedding it in his customer development methodology: stop executing business plans and start searching for business models, using the lightest possible product to validate each hypothesis along the way. Eric Ries brought the idea to a mass audience in The Lean Startup (2011), defining the MVP as "that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort." That definition is precise and worth parsing. The unit of output is not revenue, not users, not features. It is validated learning. The MVP exists to produce knowledge, and it is judged by how efficiently it does so.
The most instructive MVPs in startup history barely resemble products at all.
Dropbox in 2007 faced a problem: the technology worked, but explaining it didn't. Drew Houston couldn't articulate the value of seamless file synchronisation in a way that made non-technical people care. So he made a three-minute screencast demonstrating the product — dragging files between folders and watching them sync across computers. That video, posted to Hacker News and Digg, was the MVP. Not the software. The video. Houston wasn't testing whether the technology functioned. He already knew it did. He was testing whether anyone cared enough to want it. Overnight, the Dropbox beta waiting list went from 5,000 to 75,000 signups. The riskiest assumption — that people would switch from USB drives and email attachments to a new syncing paradigm — was validated before a public beta existed.
Zappos tested an even more fundamental assumption. In 1999, Nick Swinmurn wanted to know whether people would buy shoes online without trying them on first. Rather than build an e-commerce platform, lease warehouse space, and negotiate wholesale deals, he walked into shoe stores in the San Francisco Bay Area, photographed the inventory, and posted the images on a basic website. When someone placed an order, he drove to the store, bought the shoes at retail price, and shipped them. The economics were upside down — he lost money on every sale. That was irrelevant. The MVP wasn't a business. It was an experiment. The question was binary: will people type their credit card number into a website to buy shoes they haven't touched? They did. Swinmurn had his answer, and the answer justified building the real thing. Amazon acquired Zappos in 2009 for $1.2 billion.
Buffer's Joel Gascoigne took the principle one step further in 2010. Before writing a single line of application code, he created a two-page website. The first page described a tool for scheduling social media posts and included a single call-to-action button: "Plans and Pricing." Users who clicked landed on a pricing page with three tiers. Users who clicked a pricing tier landed on a page that said the product wasn't built yet and asked for their email address. The MVP was a pricing page for a product that didn't exist. Gascoigne wasn't testing whether people wanted a social media scheduler. He was testing whether they'd pay for one. The distinction matters. Willingness to use is cheap. Willingness to pay is the assumption that kills most startups, and Buffer validated it before investing a month of engineering time.
The pattern across these cases is identical: the founders identified the riskiest assumption — the belief that, if false, invalidated the entire venture — and designed the cheapest possible experiment to test it. Dropbox tested demand. Zappos tested purchase behaviour. Buffer tested willingness to pay. None of these experiments required a finished product. None required significant capital. Each produced a definitive answer within days or weeks.
The most common misunderstanding of MVP is that "minimum" means low quality. It does not. "Minimum" refers to scope — the smallest set of functionality that can produce a valid test. "Viable" is the constraint that prevents the minimum from becoming an excuse for shipping garbage. The product must actually deliver enough value to generate honest feedback from real users. A landing page that promises a product and collects email addresses is a viable MVP for testing demand. A half-built app that crashes on login is not a viable MVP for anything — it tests only the user's patience.
The second misunderstanding is treating the MVP as a product strategy rather than a learning strategy. The MVP is not Version 1.0 released early. It is an experiment designed to invalidate or validate a specific hypothesis. When the experiment concludes, the MVP has served its purpose. What comes next — iteration, pivot, or abandonment — depends on what the experiment revealed. The MVP is a tool for making decisions, not a tool for acquiring users.