Dan Stroot

Dangerous MVPs


Eric Ries, who introduced the concept of the minimum viable product (MVP) as part of his Lean Startup methodology, describes its purpose this way:

It is the version of a new product that allows a team to collect the maximum amount of validated learning about customers with the least amount of effort.

An MVP is a product with enough features to attract early-adopter customers and validate a product idea early in the product development cycle. The MVP helps the product team receive user feedback as quickly as possible to iterate and improve the product.

Because the agile methodology is built on validating and iterating products based on user input, the concept of an MVP plays a central role in agile development. When building a new product, we all know we should remain agile and launch something small and simple to start. Then we can improve the product over time, adding more features as previous features prove their value.

Most importantly, an MVP allows your company to minimize the time and resources (the cost) you might otherwise commit to building a product that won’t succeed.

Today, there are better ways to learn

The reason for implementing an MVP has always been to tackle the main issue: learning. It was the minimum cost necessary to learn. The first danger is that the old(er) ways are no longer the least expensive way to learn.

We simply have better tools for this today. You can learn much earlier, faster, and cheaper — without engineering lifting a finger. Prototyping and design-sharing tools such as InVision, Marvel, Figma, and others allow designers and Product Managers to concept, learn, and iterate with user tests. While early prototype iterations can be minimally viable, a finished product should not be.

Imagine initial product launches as "first dates" with your potential customers: Do you really want their first impression to be — “I had a minimally viable time”?

When an MVP can become dangerous

Unfortunately, the same process wreaks havoc when it comes to modernizing an existing system. At first glance, the reason is not intuitive. In theory, the same principles should apply: start with small and simple use cases and then build upon your success.

The difference between product development and software modernization is the hypothesis you need to prove.

  1. Product development: You’re trying to prove that customers are willing to pay for the product. You are seeking "product/market fit".

  2. System modernization: People are already using the system. The system is "feature complete" and has been tailored and tweaked over many years. Legacy systems are, after all, successful systems. Instead, the hypothesis we’re trying to prove when we’re modernizing a system is that the updated technology will do a better job than the technology we’re leaving behind.

Reasons why that might not be the case

Modernizing systems that are in heavy use is difficult. When designing an MVP for a modernization effort, several issues become apparent:

  1. The use cases that seem the simplest, and therefore the most likely to be chosen as the MVP, are also the parts of the system that no one cares about. We don’t modernize parts of systems that aren’t being used. We should turn them off.

  2. Putting the least important set of functionality on a new technology violates the spirit of an MVP. It does not validate that the modernization effort adds value, nor does it "prove" the new technology being used is better than the old one.

  3. Defining what doing a "better job" than the old software means is surprisingly difficult. Things like "it will be easier/cheaper to maintain" or "we will be able to implement change faster" are hard to measure - and may not be true once the new system reaches the same level of complexity and integration as the old system.

  4. No one really understands all the complexity embodied in a legacy system - no one person knows how it all works. When a process is automated, the highly skilled people who designed and built the system disappear because their knowledge becomes embodied in the software. Long-term maintenance and enhancement mean the system ends up being used in ways never initially considered or designed for, and other processes and systems become intertwined and integrated. This complexity is exactly why existing systems are hard to change. Proving a new system via an MVP doesn't make sense unless it shows exactly how the new system will meet these challenges.

When Validating Migration, Raise the Stakes

The beginning of a modernization effort is a precious period, so don’t waste it. This is the time when the team has the most excitement, the most momentum, the greatest level of stakeholder buy-in and the most money and staff. When people conflate minimum with simple, they squander a critical stage of the effort.

It's far better to begin migrations with the harder parts of the system. It’s a more challenging project, yes — but the hard bits won’t be any easier months or years later. When you successfully migrate a hard part of the system, you’ve proven that the modernization itself can add value in the same way an MVP validates the effort of building something new.

Conquering a hard part of the system also tends to inspire stakeholders, especially if other modernization efforts have failed in the past. Sometimes this enthusiasm takes the form of increased moral support for the team, but it can also lead to more resources and more favorable prioritization.

MVPs also need to prove architectural principles

All too often, MVPs ignore basic architectural requirements in the interest of "low cost" and "speed". If the list below isn't addressed during the MVP development & learning process, you will have issues in the future. Those issues could be large enough to derail the whole effort.

  • Security — basic security requirements need to be taken into account for the MVP. These requirements must be able to support a "production-quality" app in the future. Today, security is a core part of the user experience.
  • Monitoring — every application should provide the ability to monitor performance and system issues. This is table stakes for a production app, but it is also essential for learning about the operating qualities and failure states of the MVP.
  • Platform — the MVP should run on the same platform that would be used in production. It's a mistake to build something "in house" and then try to move it to a commercial cloud platform (or vice versa).
  • Latency and responsiveness — we shouldn't have immediate concerns about latency and responsiveness, since the MVP deployment will be limited to a small user base, but we do need to learn what is acceptable to users. Latency and responsiveness metrics must be included as part of the basic monitoring capabilities (a minimal sketch follows this list).
  • Scalability — not usually a concern at first, but the architecture of the MVP must provide a path to scale while maintaining the latency and responsiveness targets defined above.
  • Data persistence — some MVPs start with a "toy" database or persistence model that must be completely replaced before the "real" application can be developed properly. In some cases this makes sense, but if possible build the MVP on the right persistence platform from the start.
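
To ground the security, monitoring, and latency items above, here is a minimal sketch of what "table stakes" instrumentation could look like for a web-based MVP. It assumes a Node/Express service using helmet for baseline security headers and prom-client for metrics; the routes, histogram buckets, and port are illustrative assumptions, not part of any stack you must adopt.

```ts
// A minimal sketch of "table stakes" security and monitoring for a web MVP.
// Express, helmet, and prom-client are real libraries; the routes, histogram
// buckets, and port below are illustrative assumptions, not requirements.
import express from "express";
import helmet from "helmet";
import client from "prom-client";

const app = express();

// Security: sensible default HTTP headers from day one.
app.use(helmet());

// Monitoring: a latency histogram labelled by method, route, and status code.
const httpLatency = new client.Histogram({
  name: "http_request_duration_seconds",
  help: "HTTP request latency in seconds",
  labelNames: ["method", "route", "status"],
  buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5], // illustrative bucket boundaries
});

// Record the latency of every request as it finishes.
app.use((req, res, next) => {
  const start = process.hrtime.bigint();
  res.on("finish", () => {
    const seconds = Number(process.hrtime.bigint() - start) / 1e9;
    httpLatency
      .labels(req.method, req.route?.path ?? req.path, String(res.statusCode))
      .observe(seconds);
  });
  next();
});

// Expose the metrics so Prometheus (or any scraper) can collect them.
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", client.register.contentType);
  res.send(await client.register.metrics());
});

// A placeholder business route, standing in for the MVP's real functionality.
app.get("/hello", (_req, res) => {
  res.json({ message: "ok" });
});

app.listen(3000);
```

The point is not these particular libraries; it is that security defaults and latency/failure data are in place from day one, on the same platform the production system would use.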

The Takeaway

MVPs are useful because they validate that a project is worth doing before we've wasted too much time and money. A system modernization project hasn't proven its value until it has successfully migrated the important (read: hard) parts. For that reason, the right MVP for a modernization effort is often a complex part of the system.

If you pick an easier part, or don't address basic architectural requirements, you will waste time, money, and maybe political capital doing work that does not prove that this new technology can be trusted with your most critical business functions. This turns deadly if the project fails months or years later, after millions of dollars have been spent, all because the "MVP" didn't prove the right hypothesis at the start.

Image Credit: Sammy Sosa, 1998

Personally, I will argue this one with Cub fans until I die. Yes, Sosa’s team won the Wild Card, and it was a huge story. But the Cardinals finished five games behind them, and it sure as hell wasn’t McGwire’s fault that Jeff Brantley was the Cardinals closer, their bullpen blew an extraordinary number of games, and their starters were not very good either.
