Monoliths and the microservices backlash

Amazon Prime Video recently blogged about how they optimised their monitoring service by replacing a microservice-based approach with a single service. This has given rise to an avalanche of commentary and bad takes, some of which declare it evidence that microservices are a scam and that we should all revert to building monoliths.

Amazon's approach is based on straightforward refactoring. They stood up the service quickly using Lambdas and Step Functions before they had a clear idea of how traffic might stack up. Once the system had matured a little, they optimised it into a containerised service to better manage scale and costs. It's an interesting case study, but it doesn't amount to a repudiation of microservices or serverless architecture.

It's perfectly reasonable to optimise a serverless application with larger-sized components when dealing with long-running compute jobs or predictable high traffic. It's also reasonable to evolve your architecture in response to a growing understanding of your problem domain. What's not reasonable is maintaining a dogmatic attachment to any single pattern.
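
By way of illustration, here's a minimal sketch of the kind of consolidation involved (the pipeline steps are entirely hypothetical, not Amazon's actual design): several single-purpose handlers that might once have been separate serverless functions are composed inside one long-running process, so intermediate results never cross the network.

```python
# Hypothetical pipeline: three steps that might previously have run as
# separate serverless functions, now composed in one long-running
# process. Intermediate results stay in memory instead of being
# serialised and shipped between function invocations.

def detect(frame: bytes) -> dict:
    return {"frame": frame, "score": 0.97}   # stand-in analysis step

def convert(result: dict) -> dict:
    return {**result, "converted": True}     # stand-in transform step

def publish(result: dict) -> None:
    print("published:", result)              # stand-in output step

def run(frames: list[bytes]) -> None:
    for frame in frames:
        # The whole pipeline runs in-process: no per-step invocation
        # overhead, no state handed off through queues or storage.
        publish(convert(detect(frame)))

run([b"frame-1", b"frame-2"])
```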

“Year zero”

There's nothing new about service-based architectures. The advent of microservices was not a “year zero” in terms of working out that systems can be composed of small, collaborative units.

The term “microservice” was originally coined to describe the emergence of an approach that delegated more responsibilities, and ultimately freedom, to implementation teams. The hope was that teams would be free to build capabilities without being constrained by centralised orchestrators or top-down decision-making.

It didn't always work out this way, mainly because of the sheer complexity involved in deploying small services at scale. Instead, many microservice architectures have coupled delivery teams to complex and centralised internal developer platforms, mostly built around Kubernetes.

The growing realisation that microservice architectures might not be a panacea has led to an inevitable backlash. Some have even suggested that microservices are just the latest in a long line of doomed distributed architectures that includes Enterprise JavaBeans, Distributed COM, and WS-anything: merely condemning another generation of engineers to drown under the weight of remote procedure calls between distributed components.

There's some truth in this. Just ask anybody dealing with estates of hundreds of microservices. What promised to bring flexibility and scalability can often lead to runaway complexity and punishing costs.

Understanding the business value

There is a more nuanced debate to be had about distributed architectures. There is nothing new in the notion that loosely coupled and highly cohesive components might be easier to operate. The problem is that architectural choices are not always aligned to what the business is trying to achieve.

Any decision should be driven by a clear understanding of how an architecture might add value and the trade-offs involved. It's easy for this kind of technology debate to take over at the expense of a clear-eyed understanding of the problems you are trying to solve. You need to know what the desired business outcomes are and how the architecture is aligned with them.

The wider engineering context is important too. Before you go anywhere near a microservice architecture you need to solve problems such as deployment automation, environment provisioning, tracing, and monitoring. You need appropriate governance to define "rules of the road" around how services will collaborate. There should be an agreed approach for establishing service boundaries and managing dependencies.
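
As one concrete example of the tracing prerequisite, here's a minimal, dependency-free sketch (the header name and service names are invented for illustration) of propagating a correlation ID so a single request can be followed across service boundaries.

```python
import uuid

CORRELATION_HEADER = "X-Correlation-ID"  # hypothetical header name

def log(service: str, correlation_id: str, message: str) -> None:
    # In a real estate these lines would be shipped to a central store
    # and indexed on the correlation ID.
    print(f"[{service}] correlation={correlation_id} {message}")

def call_payment_service(headers: dict) -> None:
    # A downstream service reads the same ID from the incoming headers.
    log("payment-service", headers[CORRELATION_HEADER], "taking payment")

def handle_order_request(headers: dict) -> dict:
    # Reuse the caller's correlation ID, or mint one at the edge.
    correlation_id = headers.get(CORRELATION_HEADER) or str(uuid.uuid4())
    log("order-service", correlation_id, "received request")
    # Forward the same ID on every downstream call so logs can be joined.
    call_payment_service({CORRELATION_HEADER: correlation_id})
    return {"status": "ok", CORRELATION_HEADER: correlation_id}

handle_order_request({})  # an edge request arriving with no existing ID
```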

You also need to factor in the capabilities of your engineering teams. If your systems are riven with arcane code that cannot respond to change, then a move to microservices won't magically transform your engineering culture. You may just end up distributing bad code instead, adding a bunch of bad infrastructure while you're at it.

Distributed applications are hard

Microservices are often associated with making life easier for engineers, but the reality can be a mixed bag. They can reduce the cognitive overhead as engineers only have to focus on a narrow set of concerns. Code is easier to organise and less prone to the kind of tangled sprawl that can affect larger code bases.

On the other hand, engineers have to be aware of a much wider and more complex estate of services. Transactions and longer-running processes that span multiple services can be difficult to reason about. Abstractions can be hard to defend at that scale, and you need iron discipline to prevent service bloat and feature duplication.
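
To give a flavour of the difficulty, here's a bare-bones sketch of a saga-style process (the order steps and the deliberate failure are contrived): because no single transaction spans the services, every step needs a compensating action that can unwind it.

```python
# A contrived saga: each step calls a different service, and each step
# has a compensating action, because there is no distributed transaction
# that can roll all of the services back together.

def reserve_stock(order: dict) -> bool:
    print("stock reserved")
    return True

def release_stock(order: dict) -> None:
    print("stock released")

def take_payment(order: dict) -> bool:
    print("payment declined")  # deliberate failure for the example
    return False

def refund_payment(order: dict) -> None:
    print("payment refunded")

def run_saga(order: dict) -> bool:
    steps = [(reserve_stock, release_stock), (take_payment, refund_payment)]
    completed = []  # compensations for every step that succeeded
    for do, undo in steps:
        if do(order):
            completed.append(undo)
        else:
            # Unwind in reverse order. In a real system each compensation
            # must itself survive retries, crashes, and partial failure.
            for compensate in reversed(completed):
                compensate(order)
            return False
    return True

print("order completed:", run_saga({"id": 42}))
```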

The picture becomes murkier still once you start considering serverless architectures. Abstracting away much of the infrastructure might make it easier to get initial prototypes up and running, but it also makes it easier to create an unmanageable web of functions. The model of only paying for what you use can also be a double-edged sword when an application starts to scale up, as Amazon Prime Video discovered.

Much depends on what your engineering teams are comfortable with. As it turns out, distributed applications are hard. Some of the more advanced patterns and practices that underpin microservice architectures are not for the faint-hearted. It can start innocently enough with containers and APIs, but before long you'll find yourself mired in elastic scaling, fault tolerance, rolling deployments, circuit breakers, service discovery, sidecars... and maybe even Kubernetes.
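
To pick just one item from that list, here's roughly what a minimal circuit breaker looks like (the thresholds and timings are arbitrary): after a run of failures, calls fail fast instead of hammering a struggling dependency.

```python
import time

class CircuitBreaker:
    """Bare-bones circuit breaker: after `max_failures` consecutive
    errors the circuit opens and calls fail fast until `reset_after`
    seconds have passed, when a single trial call is allowed through."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.max_failures:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.failures = self.max_failures - 1  # half-open: one trial
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        else:
            self.failures = 0  # any success closes the circuit again
            return result

breaker = CircuitBreaker()
print(breaker.call(lambda: "ok"))  # a healthy call passes straight through
```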

When it comes down to it, a monolith is often just easier.

A false dichotomy

Despite all this, a microservice architecture really can help you to align your technology delivery with business outcomes. It can empower teams to deliver value more quickly and create more resilient, scalable, cost-effective, and flexible solutions. This is all part of an eternal trade-off between scalability, resilience, cost, and complexity that people have been grappling with for generations.

Perhaps there is also an issue of language here. When most people say “monolith” they think of the vast code bases behind many a legacy website or desktop application. Nobody is proposing a return to that. I hope.

Many microservice architectures suffer from premature decomposition, where tiny, single-responsibility services are adopted as a starting point rather than an optimisation. This can quickly escalate into hundreds of services collaborating in real time, creating a “distributed ball of mud” that is hard to row back from.

Much depends on how you define your abstractions and where you put your service boundaries. After all, there are no circumstances where badly designed microservices are an improvement on a carefully designed monolith.

Ideally, a service should be aligned with what Domain-Driven Design calls a “bounded context”, i.e. a cohesive and self-contained collection of data and behaviour. This is the key to ensuring that your services achieve some degree of loose coupling while maintaining that longed-for cohesion. Premature decomposition has undermined many a service-based architecture, so it's usually best to start with something monolithic and modular, decomposing in response to genuine need.
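
As a sketch of what “monolithic and modular” might look like in practice (the contexts and methods are illustrative, not a prescribed design), each bounded context sits behind a narrow facade inside a single deployable, so a context can later be extracted into its own service without untangling call sites.

```python
# One deployable, several bounded contexts. Each context exposes a
# narrow facade and keeps its data and behaviour to itself.
# (The "billing" and "orders" contexts here are illustrative.)

class BillingContext:
    """Facade for the hypothetical 'billing' bounded context."""

    def __init__(self):
        self._invoices = {}  # private state, never reached directly

    def raise_invoice(self, order_id: str, amount: int) -> str:
        invoice_id = f"inv-{order_id}"
        self._invoices[invoice_id] = amount
        return invoice_id

class OrdersContext:
    """Facade for the hypothetical 'orders' bounded context. It depends
    only on billing's public facade, never on its internals."""

    def __init__(self, billing: BillingContext):
        self._billing = billing

    def place_order(self, order_id: str, amount: int) -> str:
        # If billing is ever extracted into its own service, only this
        # call changes (to an HTTP or queue client with the same shape).
        return self._billing.raise_invoice(order_id, amount)

orders = OrdersContext(BillingContext())
print(orders.place_order("123", 4999))
```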

Ultimately, the “microservices vs monoliths” debate is a false dichotomy. An architecture can contain a mixture of containerised services and serverless functions, and there may even be a place for more monolithic applications in the mix. In general, it's good practice to be wary of any “one size fits all” pattern. You should take a critical view of the problems ahead of you and select an approach that enables a reasonable flow of delivery while accommodating ongoing change.