Data meshes and microservices - the similarities are uncanny
The data mesh seems to be doing for data warehousing what microservices did for solution architecture. By that I mean embracing distributed architecture and decentralised governance while remaining a little ambiguous about how this might work in practice.
There are many similarities between the two. For starters, both "microservices" and "data mesh" were terms originally coined by Thoughtworks consultants and introduced to the world on Martin Fowler's blog. Just as Sam Newman's "Building Microservices" provided a more detailed exploration of microservices, Zhamak Dehghani's forthcoming O'Reilly book on data mesh may help to flesh out some of its lingering uncertainties.
The term "data mesh" is already starting to take on a life of its own and is creeping into the marketing materials from vendors of data catalogues, pipelines, and query tools. This can serve to dilute the original idea as the term becomes used in contexts that are very different from the author's original intentions. A similar process happened with microservices as vendors were keen to demonstrate how their platforms could assist in the brave new world of decentralised development.
Ignoring the hype cycle
It's easy to be cynical about the hype cycle aspects of data mesh, but it is trying to address some very real problems in centralised data warehouse implementations. They tend to give rise to a monolithic and inflexible architecture that takes a long time to deliver value and struggles to meet the evolving demands of the business. Data is locked up in a centralised infrastructure where dedicated data engineering teams become the gatekeepers and bottlenecks for any data applications.
A close reading of the data mesh implies that it is more of a process than a technology stack or implementation style. If it is possible to state it simply, a data mesh is an architectural approach that delegates responsibility for specific data sets to those areas of the business that have the most expertise in them.
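As a rough illustration of what this delegation might look like when written down, here is a minimal Python sketch in which each data product declares the domain that owns it. The DataProduct structure and its fields are invented for this example; they are not drawn from any data mesh specification.

```python
from dataclasses import dataclass, field

# A hypothetical sketch of delegated ownership expressed as metadata:
# each data product names the domain team that owns it rather than a
# central data engineering function. The field names are illustrative
# assumptions, not part of any data mesh standard.

@dataclass
class DataProduct:
    name: str                  # e.g. "fulfilment.daily-shipments"
    owning_domain: str         # the business area with the most expertise
    description: str
    schema: dict = field(default_factory=dict)  # column name -> type

shipments = DataProduct(
    name="fulfilment.daily-shipments",
    owning_domain="fulfilment",
    description="Shipments dispatched per day, published by the fulfilment team",
    schema={"shipment_id": "string", "dispatched_at": "timestamp"},
)

print(f"{shipments.name} is owned by the {shipments.owning_domain} domain")
```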
This delegation of ownership bears a strong resemblance to the decentralised governance and data management advocated by microservices, where ownership and implementation choices are devolved to teams that are closer to the domain rather than being situated in centralised teams with specialised skill sets.
Given this focus on process and governance, a data mesh is not defined by a warehouse, lake, or lakehouse. Neither is it defined by the tools used to integrate, catalogue, or query this data. It's more of an approach to organising data engineering, though this isn't stopping vendors from casting their own products in a data mesh light.
Like microservices, the data mesh collects ideas that have been in circulation for some time. Applying product thinking to data is a common thread among the many recent data manifestos that attempt to hitch agile practice to data analytics. Data democratisation has long been a theme for organisations that seek to empower their data analytics teams. What's new here is the abandonment of centralised data ownership in favour of a more distributed approach.
The perils of a shared platform
A microservice architecture can empower teams by allowing them to decide how best to implement their services. The data mesh promises similar freedoms, where the implementation details of any pipelines or processes are a secondary concern. It's the data sets and how they are exposed that matters in a data mesh world.
In both cases, teams are expected to leverage shared infrastructure so the organisation can achieve some economies of scale. For the data mesh, this involves a self-service "data platform" supported by a team of data product and platform owners who collaborate to define some common "rules of the road". It remains to be seen what this self-service platform might look like, though it does imply a high level of cohesion between data sets so that consumers can combine and transform them at scale.
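As a sketch of what such "rules of the road" might amount to in practice, the hypothetical check below refuses to register a data product unless it supplies a common set of metadata. The required fields are assumptions made for illustration; a real platform team would define its own conventions.

```python
# Hypothetical platform-level "rules of the road": a registration check
# that a self-service data platform might apply to every data product.
# The required fields below are assumptions for illustration only.

REQUIRED_METADATA = {
    "name", "owning_domain", "schema",
    "refresh_frequency", "pii_classification",
}

def missing_metadata(product: dict) -> list:
    """Return any required fields the product has not supplied."""
    return sorted(REQUIRED_METADATA - product.keys())

candidate = {
    "name": "fulfilment.daily-shipments",
    "owning_domain": "fulfilment",
    "schema": {"shipment_id": "string"},
}

gaps = missing_metadata(candidate)
if gaps:
    print("Cannot register product; missing: " + ", ".join(gaps))
```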
There can be a fine line between self-service infrastructure and centralised implementation. Insisting on a particular technology stack and set of common tools can feel restrictive for engineering teams and undermine their sense of ownership. Maturing a platform and related standards to the point where they can support genuine self-service is quite an undertaking.
This means that the owners of any self-service infrastructure are inevitably drawn into implementation concerns as engineering teams grapple with an unfamiliar or immature platform. This can give rise to a distributed yet highly coupled architecture where teams are chained together by dependencies and bottlenecks. Anybody who has tried to implement microservices on the back of immature infrastructure will be familiar with this outcome.
The mess between the bounded contexts
As with microservices, a data mesh hinges on being able to identify discrete, independent areas of data that can be implemented by separate parts of the business. The concept of the "bounded context" from Domain Driven Design (DDD) is key to understanding these discrete data domains.
DDD recognises that as an organisation gets larger it becomes progressively more difficult to build a unified model of the entire domain. Instead, a larger system can be divided into a set of cohesive and self-contained "bounded contexts", each with separate ownership.
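A contrived Python example of the same idea: two bounded contexts each hold their own model of a "customer", shaped by the questions that context needs to answer. Both models are invented here purely to illustrate why a single unified model becomes hard to sustain.

```python
from dataclasses import dataclass

# Two invented bounded contexts modelling the same real-world customer.
# Each would simply call its class "Customer" within its own context;
# the prefixes just keep this example in one module.

@dataclass
class SalesCustomer:           # sales context: pipeline and account value
    customer_id: str
    account_manager: str
    lifetime_value: float

@dataclass
class FulfilmentCustomer:      # fulfilment context: somewhere to deliver to
    customer_id: str
    delivery_address: str
    preferred_carrier: str

# The shared identifier is the only overlap; everything else differs
# because each context answers different questions about the customer.
```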
In practice it is difficult to define a cleanly separated and stable set of bounded contexts. Although DDD provides a useful theoretical framework to identify ideal boundaries, many implementations tend to be driven by more practical heuristics. Domain design often reflects existing organisational boundaries, but it can also be defined by technical demands, such as data processing or security requirements. More pragmatic concerns such as available budgets and skill sets can also come into play.
This tends to give rise to constantly shifting and overlapping bounded contexts and it's not clear how a data mesh might accommodate this practical reality. Dehghani does talk in terms of a separation between the aligned domain data sets and the source systems that feed into them. This affords the model some flexibility as source systems can feed into multiple domains, giving each domain some latitude in terms of what source data is used and how to represent it.
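A minimal sketch of this separation, assuming an entirely invented mapping: operational source systems feed domain-aligned data sets, and a single source can serve more than one domain.

```python
# Invented mapping from operational source systems to the domains they
# feed. One source (the CRM) feeds two domains, and each domain decides
# for itself how to represent the data it takes from that feed.

SOURCE_TO_DOMAINS = {
    "crm_system": ["sales", "marketing"],
    "order_service": ["sales", "fulfilment"],
    "payment_gateway": ["finance"],
}

def domains_fed_by(source: str) -> list:
    return SOURCE_TO_DOMAINS.get(source, [])

print(domains_fed_by("crm_system"))  # ['sales', 'marketing']
```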
Is data mesh ready for prime time?
If data mesh bears some similarities to microservices, then the various problems encountered by naive microservice implementations should serve as a warning. Monoliths may be unwieldy, but distributed monoliths are probably worse. Local empowerment might seem reasonable, but it comes with the risk of creating an estate of incompatible data silos.
For a while at least, microservices came to be regarded as a panacea that could be used to attack legacy monoliths, bring effortless scalability, and free teams from mutual dependency. The reality turned out to be a little more nuanced than this.
In some cases, all that microservices delivered was increased complexity. Others got their fingers burnt by failing to provide the basic infrastructure needed to support deploying and monitoring services at scale. It has taken time for a genuine understanding to emerge of where microservices really add value - and when you are better off sticking with a monolith.
A similar process of understanding may be required before data mesh finds its niche. It may only make sense for highly federated organisations grappling with genuinely vast quantities of polyglot data. In these circumstances, the data is just too large and unwieldy to be contained in a centralised and consolidated implementation. Data mesh provides a set of organisational principles for managing the realities of this complexity rather than trying to tame it.