Minimal Viable Architecture and the case for up-front design

It should be possible to test out ideas and establish demand without spending months designing battle-hardened architecture. After all, there is no perfect architecture that can be applied to every scenario. The design of a system that allows you to test new functionality quickly may be very different from one that allows you to scale to meet demand.

This implies that new applications can be optimised for rapid iteration. This often means adopting simple, familiar technologies and minimal infrastructure to begin with. You can let the compromises stack up on the understanding that you can build a more viable production system once the requirements are better understood.

This is where things can get a little more difficult. The problem is that not every organisation is able or willing to dispose of code in this way. Where engineers see a mess of duct tape and chicken wire, other stakeholders may only see working software. Commercial imperatives can steamroller concerns around scalability and resilience, so you never get the chance to reset the implementation.

If you are trying to demonstrate the value of an idea then you should also be demonstrating its viability, i.e. that it can be developed in a sustainable commercial context. Ideally, it should be possible to take an evolutionary approach to architecture, where earlier iterations give way to something more resilient and mature. If we’re not going to throw code away, this implies that a minimum level of architecture should be in place and a certain level of engineering discipline should be observed.

The case for up-front design

“Big up-front design” has become something of a pejorative term, as agile development demands a more evolutionary approach. The challenge for architecture is to support incremental change rather than second-guessing what might be required in the future. Agile teams do not build against specifications, but slowly discover the shape of the system through an iterative process of feedback-driven evolution.

Evolutionary design still requires some level of up-front thinking and forward planning. You cannot respond to emerging challenges unless you have prepared for them in some way. If you will need to scale an application, then you need to think in advance about how this might happen. You don’t necessarily need to do the work to support it, but you do at least need a plan. If that plan is “throw it away and do it differently” then you need to make sure that everybody is on board with that.

This kind of up-front design is generally referred to as “minimal viable architecture”. It supports an evolutionary and iterative approach to development by ensuring a deliberate yet appropriate level of architecture.

What we mean by “appropriate” can vary between applications and domains. George Fairbanks coined the phrase “just enough architecture” in his book Just Enough Software Architecture, which takes a risk-driven approach to defining what this means. Design should be driven by the question “what are my risks and how do I reduce them?”, so that it becomes an ongoing process of risk management.

This doesn’t mean creating anything that resembles an end-state architecture, but there does have to be a foundation. For example, you will probably need to define how you’ll organise your code and interact with any data stores. There is also a minimum set of concerns that should be baked in from the very start, such as deployment automation and monitoring.
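
For instance, a minimal foundation might put a thin seam between the domain code and whatever data store you start with, so the initial choice can be replaced as requirements become clearer. The sketch below is purely illustrative: the repository names and the use of SQLite are assumptions rather than a recommendation.

```python
# A minimal sketch, not a full design. The OrderRepository seam and the use of
# SQLite are illustrative assumptions: the point is that the rest of the code
# depends on a small interface, so the storage choice can evolve later.
from __future__ import annotations

import sqlite3
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    total: float


class OrderRepository(ABC):
    """The contract the domain code is written against."""

    @abstractmethod
    def save(self, order: Order) -> None: ...

    @abstractmethod
    def get(self, order_id: str) -> Order | None: ...


class SqliteOrderRepository(OrderRepository):
    """A deliberately simple first implementation that can give way to a
    managed database once the scaling requirements are better understood."""

    def __init__(self, path: str = "orders.db") -> None:
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)"
        )

    def save(self, order: Order) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO orders VALUES (?, ?)",
            (order.order_id, order.total),
        )
        self._conn.commit()

    def get(self, order_id: str) -> Order | None:
        row = self._conn.execute(
            "SELECT id, total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return Order(*row) if row else None
```

The value is in the seam rather than the pattern: a later move to a managed database or a different storage model becomes a new implementation of the same interface rather than a rewrite.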

Perhaps more importantly, you need to build a good understanding of your requirements, especially the non-functional ones. Evolutionary design is much harder when you have no idea what challenges to expect. No process of requirements gathering is exhaustive and things always change, but you should be able to form reasonable expectations around architectural drivers such as scalability, resilience, security, and performance.

You should also document your architecture and share it. A simple diagram with some commentary can go a long way in communicating your intent and building consensus. A set of architectural requirements can signal that you’ve thought about how the solution might need to evolve in the future. Drawing up a handful of architectural principles can be a useful collaborative exercise that serves as a guide to decision making. A decision log can help to create a “shared memory” that can prevent unnecessary rework and repetition.
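
As an illustration, a decision log entry does not need to be elaborate; something along the lines of the lightweight format below is often enough (the content here is entirely hypothetical):

```text
0007. Use a single managed relational database for the first release
Status: accepted
Context: we need persistence quickly and expect modest load while we test demand.
Decision: start with one managed PostgreSQL instance rather than a polyglot setup.
Consequences: simpler operations for now; revisit if read load outgrows a single node.
```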

It can be surprising how many decisions can be deferred. The idea of the “last responsible moment” suggests that better decisions are made on facts rather than conjecture. Given that facts take time to emerge, it may be better to delay a decision until the point where waiting any longer would eliminate an important alternative. This is a useful thought experiment, but a little risky as a guiding principle. In practice you may only notice the “last responsible moment” once it has passed. Early decisions can also serve to reduce the problem space and bring a little more certainty to proceedings.

Evolutionary design vs planned initiatives

The evolutionary and incremental approach to system design is largely derived from the agile manifesto and its focus on working software, sustainable development, and self-organising teams. It requires an architecture that can be moulded in response to emerging requirements.

The problem is that evolutionary design cannot address every challenge. Some problems are just too big for a single team to solve, or require specialised knowledge and experience. Most modern cloud-native architectures depend on infrastructure that can only be provided on a shared platform basis. There will also be a need for ongoing collaboration with other delivery teams.

This does imply the need for more deliberate architecture to complement the evolutionary approach taken by delivery teams. These are the planned initiatives that ensure the right infrastructure is in place to support teams in building solutions. This includes standards and the “rules of the road” that help teams to collaborate effectively. It’s this backbone of shared assets that allows teams to focus on iterating on new value rather than solving generic problems or duplicating effort.