How consultants misuse "MVP" and how to fix it
Everyone loves to say "MVP." Clients love it. Product managers love it. Even your engineering team loves it, right up until they realize what's actually being shipped.
In theory, the Minimum Viable Product is supposed to be a vehicle for learning. A way to cheaply test a hypothesis, validate core assumptions, and decide whether to keep investing. It’s not supposed to be a phase-one launch, a milestone in a roadmap, or a shippable slice of the full vision. It’s supposed to answer a question: Should we keep going?
But in practice, MVP has become a euphemism. Teams slap the label on any undercooked release and call it innovation. Clients ask for an MVP when they really want version one. Consultants pitch MVPs as safe bets, scoped-down versions of the Big Build. And everyone walks away pretending they’re doing lean product development when really they’re just burning budget on half-finished software.
When MVPs are treated as low-cost versions of the end-state product, they fail to deliver the one thing they’re supposed to provide: clarity. Real MVPs validate or invalidate. They surface the unknowns. They let teams pivot, kill ideas, or go all-in with confidence. Fake MVPs just prolong the inevitable.
The MVP concept comes from the Lean Startup movement, where it was introduced as a way to avoid building the wrong thing. The idea: don't ship a product, ship a test. The goal isn't to delight users, it's to learn something useful about what users want. Dropbox's MVP was a demo video. Zappos' was a shoe store with no inventory. Both were structured experiments to determine whether the idea had legs.
In that context, the word "viable" doesn’t mean usable or shippable. It means sufficient to produce evidence. The moment you forget that, you’re not building an MVP anymore, you’re building something else.
In consulting, MVPs get weaponized. The word is used to lower client expectations without lowering ambition. It's a rhetorical trick: we’re not cutting scope, we’re building an MVP. But clients don’t want MVP outcomes. They want results. So instead of reframing expectations, teams stretch the term to fit the deliverable. Eventually, “MVP” just means “whatever we can finish in this sprint/funding round/quarter.”
It’s easy to see how this happens. Sales teams want to close deals, delivery teams want to ship, and clients want momentum. But when everyone agrees to call the first deliverable an MVP, regardless of its purpose or structure, they set themselves up for a mismatch: the team thinks it’s learning, the client thinks it’s launching.
This confusion is rooted in a basic category error. MVPs and phase-one launches are not the same thing. One is an experiment; the other is a milestone. One exists to reduce uncertainty; the other to establish a foundation. They can both be minimal, but they’re minimal in different ways. MVPs minimize effort. Phase ones minimize scope.
Calling a phase-one release an MVP doesn’t make it lean, it makes it dishonest. And it usually means you’ll skip the hard work of defining what you’re actually trying to learn. That’s the real loss: not the technical debt or the bloated backlog, but the opportunity cost of building without clarity.
When MVPs are misused, teams miss the chance to learn. They mistake progress for traction. They overfit to early feedback and underinvest in real discovery. Worse, they create a credibility gap with stakeholders. If you sell an MVP as a usable product, and it’s not usable, you’ve created a trust problem.
There’s also the architectural toll. Fake MVPs often skip instrumentation, skimp on observability, and ignore operational realities. That’s fine if you’re testing a hypothesis, but if you’re building something that’s meant to scale, you’ve just painted yourself into a corner.
If you want to build real MVPs, start by framing the right question. Not "what can we ship?" but "what do we need to know?"
Draft a hypothesis, design the smallest thing that could prove or disprove it, and then be explicit with clients and stakeholders: this is not a prototype, it’s not a soft launch, it’s an experiment.
Then treat it like one. Instrument the hell out of it. Decide in advance what you’ll measure and what you’ll do with the results. Give yourself permission to walk away if the data says no. The best MVPs are the ones that never turn into products.
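The discipline described above, pre-committing to a single metric and a decision rule before any data comes in, can be made concrete in code. The sketch below is purely illustrative: the `Experiment` class, the metric, and the threshold numbers are assumptions invented for this example, not anything prescribed by the Lean Startup literature.

```python
from dataclasses import dataclass


@dataclass
class Experiment:
    """A minimal MVP experiment: one hypothesis, one metric, one pre-committed decision rule."""

    hypothesis: str           # what we believe, stated falsifiably
    metric: str               # the single number we agreed to measure
    success_threshold: float  # decided BEFORE collecting any data

    def decide(self, observed: float) -> str:
        # The decision is mechanical: because the threshold was fixed in
        # advance, there is no room to rationalize a weak result afterward.
        return "keep investing" if observed >= self.success_threshold else "walk away"


# Example: a Dropbox-style demo-video test (all numbers are invented).
exp = Experiment(
    hypothesis="Developers will join a waitlist for file sync after watching a demo",
    metric="waitlist signups per 1,000 video views",
    success_threshold=50.0,
)
print(exp.decide(72.0))  # keep investing
print(exp.decide(12.0))  # walk away
```

The point of writing the rule down, even this crudely, is that "walk away" becomes a legitimate, pre-agreed outcome rather than a failure to be explained away.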
Above all, hold the line. Don’t let clients conflate MVP with version one. Don’t let your team build for scale before you’ve earned the right. And don’t call something an MVP unless it’s designed to teach you something. Anything else is just wishful thinking in agile clothing.
Leigh Bryant
Editorial Director, Composable.com
Leigh Bryant is a seasoned content and brand strategist. Starting in retail before shifting to the technology space, she has spent the past decade crafting compelling narratives as a writer, editor, and strategist.