Minimum viable product: The problem with MVPs

Is it time to retire the term Minimum Viable Product in favour of experiments and first releases?

Photo: Kelly Sikkema, Unsplash

We need to talk about MVPs.

The term Minimum Viable Product, or ‘MVP’, was coined in 2001 by Frank Robinson, and popularised by Eric Ries in his seminal book The Lean Startup. Twenty years later, the term has become so popular that it’s made its way into the vocabulary of anyone remotely involved in product development.

But if there is one buzzword in modern product development that has caused the most confusion and misalignment within organisations, it’s MVP.

So, what happened?

Why the term MVP has become unhelpful

The trouble with the term MVP is that, in the years since its inception, the concept has been elaborated and expanded to the point where there is no longer one unified definition of what an MVP is. In fact, there are now multiple definitions, varying greatly in both purpose and complexity.

In the beginning, MVP was defined as “the version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” This involved launching a rough-and-ready product with the minimum feature set required to start collecting feedback.

However, as Eric Ries wrote in The Lean Startup, critical hypotheses can be tested without building a product at all. He refined the concept of MVP to be “the smallest possible experiment to test a specific hypothesis.” Using a Lean Startup approach, even a paper prototype is technically an MVP.

There’s also a third use of the term MVP that we hear frequently: the first release of a new product, with the minimum features required for the user to understand the concept and start using it. Here, the purpose of the MVP is to gauge initial user engagement, on the understanding that the product will be iterated on in future.

The issue these multiple definitions create is that, while everyone in an organisation might be using the same terminology, their exact definition of an MVP is likely to be misaligned, often drastically, without anyone even realising that is the case.

Different interpretations of MVP can lead to major confusion and frustration between product teams and stakeholders.


Ditching the buzzwords

The focus of this article is not to preach one particular definition of what an MVP is. If it were, that would leave you – the reader – with the task of trying to get everyone in your organisation aligned to the same definition, which would most likely be impossible.

Instead, I’d like to make the case for retiring the term MVP altogether and promoting a simpler vocabulary.

Next time someone in your organisation discusses building an MVP, try asking them, “Is this an experiment, or a first release?”

Experiments vs. first releases

Experiments are designed to test one or more hypotheses that would need to be true in order for a product idea to be successful. Experiments can be simple, experiments can be complex; either way, they should all (see the sketch after this list):

  • Have a defined start and end time
  • Have an expected, measurable result
  • Be thrown away once they have achieved their purpose
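
To make those three properties concrete, here’s a minimal, purely illustrative sketch in Python of how a team might write an experiment down before running it. The field names and the example ‘fake door’ hypothesis are assumptions made up for this sketch, not part of any framework referenced in this article.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    """A lightweight record of a single product experiment."""
    hypothesis: str         # the specific assumption being tested
    start: date             # defined start time...
    end: date               # ...and end time
    success_metric: str     # the expected, measurable result
    target: float           # the threshold that would validate the hypothesis
    throwaway: bool = True  # retired once it has achieved its purpose

# A hypothetical 'fake door' experiment for a new product idea.
waiting_list_test = Experiment(
    hypothesis="Enough visitors care about the idea to join a waiting list",
    start=date(2021, 6, 1),
    end=date(2021, 6, 14),
    success_metric="waiting-list sign-up rate",
    target=0.05,  # e.g. at least 5% of visitors sign up
)
```

The exact shape doesn’t matter – what matters is that the start, end and expected result are agreed before the experiment begins, and that everyone understands the artefact will be retired once it has served its purpose.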

In product discovery, teams can (and should) use multiple experiments to test different hypotheses about their idea. Once a team has learned all it can through experimentation and is sufficiently confident, it can commit to building its first release.

The first release of a product sets the foundation for further iterations. The ‘right’ feature set for a first release will depend on the product strategy and competitive environment of the organisation. However, the technologies and code that you write for your first release should allow for future scalability and iteration. Therefore, you should only commit to building that first release once you have validated, through experimentation, that this is indeed the right thing to build.

Experiment or first release – simple enough, right?

Let’s put our new vocabulary to the test with some common scenarios that catch teams out when using the term MVP.

Conflicting outcomes

As mentioned previously, definitions of MVP have become so polarised that their very purpose can be unclear within an organisation.

When teams build experiments – especially higher-fidelity experiments – and call them MVPs, they might find stakeholders within the organisation requesting features and setting targets. This can be a tough situation to deal with, and it all originates from different interpretations of MVP. From the team’s point of view, this was an experiment to generate learning. For stakeholders, the product looks and feels real, so their expectation is that this is the first release of the product.

If the product team is then expected to deliver business results instead of throwing the experiment away, they often end up having to manage and iterate on a foundation that was never built to scale. The codebase may be unsustainable, built on inflexible plugins, or propped up by manual processes behind the scenes, and the whole experience becomes incredibly painful.

How could a change in language have avoided this? Instead of ‘building an MVP’, if the team had said they were ‘running an experiment’, with a defined hypothesis and start-end period, then stakeholders should be much better aligned with the fact that this is not a fully-fledged product designed to deliver business value.

For stakeholders not familiar with this approach, it’s key that the team sells the value of validating or disproving an assumption in a low-cost, low-risk way. The alternative is to find out the risky and expensive way, by building and launching the whole product!

Even better, the team could run some low-fidelity experiments first. This gives stakeholders a grounding in experimental product development long before the team starts building anything that looks ‘real’.

"Instead of 'building an MVP', if we use the term 'running an experiment', with a defined hypothesis and start-end period, then stakeholders should be much better aligned with the fact that this is not a fully-fledged product designed to deliver business value"

What happens after an MVP is built?

If a team is lucky enough to have launched a quickly-developed MVP and validated their idea, then they may be ready to retire the MVP and build their first ‘proper’ release.

When teams pitch for funding to replace an MVP with a scalable first release, they’re often met with frustration. Why are you asking for even more money to build the same thing again!?

From the stakeholders’ point of view, the frustration of spending more time and money to replace an MVP with something scalable is understandable: their understanding of the term MVP is aligned with a first release, rather than with the experiment the product team thought it was building.

Given the issues raised, maybe it’s better to simply not use the term MVP in the first place. Sticking to the language of experiments and first releases can help product teams and stakeholders stay aligned on outcomes, time and cost.

The first release of a product needs to be scalable and well thought through, so we need to promote the idea that we only commit to building a first release after a period of validation through experimentation.

Even if stakeholders accept that MVPs are there for learning, an MVP that gathers user interest can become a victim of its own success. If the team has not prepared stakeholders for the MVP to be retired, the excitement around a successful MVP can result in pressure from the organisation to capitalise on it. Again, this comes down to a lack of awareness of how poorly MVPs scale.

Not being clear on the purpose of an MVP can also leave teams unable to let their idea fail. If there are stakeholders who aren’t aligned with learning as the sole outcome, pressure can be applied to ‘make the MVP work’, to the point of painful iteration. Using the language of experiments and first releases allows product teams to separate the idea failing from the team failing.

Do we need to build anything at all?

It’s also important to highlight a common preconception about MVPs – that, because the term contains the word ‘product’, the team needs to build something in order to start learning.

This is another reason why MVP has become a rather unhelpful term. Teams can learn a great deal without having to build an end-to-end product experience – or anything at all for that matter.

A great example of this comes from the innovation team at the British Red Cross. They had an excellent new initiative to launch a social enterprise coffee cart. It was in development when the first lockdown hit the UK and presented a unique set of constraints, essentially preventing any real-life pilot programme from being launched.

Instead of a pilot, the team ran a series of experiments to validate their approach, without ever having to actually ‘build’ a real-life product. You can read more about how they designed their experiments and the insights they uncovered here – although the coffee cart is not a digital product, the approach is definitely applicable to product teams.

Jumping straight to a release

Taking the term MVP out of our vocabulary also helps us avoid another common pitfall – committing straight to a first release.

If you ask a product team looking to build an MVP whether it’s an experiment or a first release, and their response is first release, this presents an opportunity to ask what the team has learned through experimentation. If they haven’t run any experiments, that’s a red flag that the product still has a number of risks associated with it.

Any product idea can and should be de-risked by running experiments aligned to the four types of product risk – value, usability, feasibility and desirability. That process allows product teams to learn more quickly and more cheaply, and to avoid committing to a first release that is fundamentally flawed and likely to result in an expensive failure.
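
As a purely hypothetical illustration of that mapping, here’s a short Python sketch pairing each of the four risk types above with the kind of assumption a team might test first. The one-line questions and example assumptions are invented for this sketch rather than taken from any specific framework.

```python
from enum import Enum

class ProductRisk(Enum):
    """The four types of product risk discussed above."""
    VALUE = "Will customers get enough value from it?"
    USABILITY = "Can users figure out how to use it?"
    FEASIBILITY = "Can we actually build and operate it?"
    DESIRABILITY = "Do customers actually want it?"

# Hypothetical riskiest assumptions for a new product idea, grouped by risk type.
# Each one is a candidate for its own experiment, run before committing to a first release.
riskiest_assumptions = {
    ProductRisk.VALUE: "Users would switch from their current tool for this",
    ProductRisk.USABILITY: "A first-time user can complete the core task unaided",
    ProductRisk.FEASIBILITY: "The data we need is available and accurate enough",
    ProductRisk.DESIRABILITY: "The target audience sees this as a problem worth solving",
}

for risk, assumption in riskiest_assumptions.items():
    print(f"{risk.name}: {assumption}")
```

If any of these assumptions can’t be supported by experiment results, that’s the red flag to address before committing to a first release.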

Read next

How to de-risk new product development ideas – An overview of some product management techniques that allow product teams to stress-test their next big idea and find out whether it’s a rocket ship waiting to launch or a dud in the making.

User testing and validation – We chat to Bríd Brosnan, Innovation Officer at British Red Cross, about user testing, validation and mapping assumptions.

Why big bang product launches fail to take off – On the face of it, a big product launch sounds like a great plan. However, in practice, it can set product teams up for failure.
