iPaaS and the return of the Enterprise Service Bus
Many enterprises struggle with diverse technical estates where building processes that span more than one system can become a quagmire of mismatched protocols, data models, and security contexts.
The “enterprise service bus” (ESB) was an integration architecture that promised to solve this problem by connecting these systems together via a central platform. This platform allowed you to map data between systems, apply business rules, and build process workflows. The “hub and spoke” style of architecture meant that you only needed to build each system integration once, concentrating as much logic as you needed in the centre.
A large vendor ecosystem grew up to serve this centralised style of integration architecture, including platforms such as Informatica PowerCenter, MuleSoft, and Microsoft BizTalk. They all worked on the same principle, offering adaptors to connect to the more common types of data store, a script-based engine for building processing workflows, and facilities to manage execution and monitoring.
The basic idea was an alluring one for anybody grappling with a complex and diverse systems domain. You could solve concerns such as orchestration, mediation and transformation in one convenient solution that could be made to look very elegant on an architecture diagram.
The problem is that these platforms all suffered from the same flaw: centralising integration logic in this way made them significant development bottlenecks. ESB solutions involve a steep learning curve, so in practice only a centralised “integration team” can make any changes. The result is usually a complex repository of untestable and undocumented business logic that everybody is reluctant to change for fear of breaking something.
The reality of managing ESB solutions in production rarely lived up to the simple “hub and spoke” vision offered in architecture diagrams. They became a single point of failure that demanded the kind of resilient, highly available infrastructure that is hard to manage. Adoption was expensive and time-consuming, and the lack of any realistic migration path gave rise to the mother and father of all vendor lock-ins.
The iPaaS rebrand
The emergence of “integration platform as a service” (iPaaS) offerings such as SnapLogic and Workato represented a different style of integration solution in several respects. Firstly, they offered SaaS-based delivery, so you did not have to manage the integration infrastructure yourself. They also offered broader connectivity, with built-in support for a much wider range of protocols, formats, and systems. Crucially, they also offered “low code” tooling designed to make the job of building integrations easier.
The promise here was to abstract away most of the boilerplate and spadework associated with building systems integrations. These platforms offered to empower an army of non-technical “citizen integrators” who could build integrations through drag-and-drop tooling. This would do away with the bottleneck associated with integration teams, allowing for a more distributed approach to building enterprise integrations.
In response to this, many ESB vendors have re-branded as “iPaaS” and layered “low code” interfaces over their script-based engines. They have also begun to spread their wings into related domains such as API management, seeking to become a universal “integration platform” for the entire enterprise. Inevitably, they have been quick to jump on the GenAI bandwagon with tools that offer to save you from the inconvenient hassle of having to understand your data before integrating your systems.
Despite these promises, the fundamental problem remains: these platforms can quickly become a well of Byzantine complexity. The marketing-driven “low code” promise of liberating “citizen integrators” tends not to stand up to the reality of real-world integrations. Attempting to implement anything beyond the most basic transformations in an iPaaS platform inevitably involves engaging with the script engine that is usually lurking just beneath the surface. The end result can be code scattered around an arcane UI that is hard to reason about at any scale.
There can be a sweet spot for iPaaS platforms around batch “lift and shift” import and export use cases. If you have to regularly move large amounts of data between systems with limited requirements for transformation or mapping, then an iPaaS system can save you from having to write a lot of painful boilerplate code.
However, as soon as any logic starts to creep in, you will quickly wish that you were working with code. Support for engineering tooling such as source control, deployment, and test automation is usually pretty shaky, if it exists at all. Once you scale an iPaaS environment beyond the first few dozen pipelines, it can quickly become difficult to organise or reason about. And that vendor lock-in can kick in remarkably quickly.
What’s the alternative?
The phrase “smart endpoints and dumb pipes” was popularised by Martin Fowler and James Lewis to describe a more decentralised approach to microservice collaboration. The idea is to keep business and transformation logic out of the infrastructure, where it becomes difficult to reason about and test. This logic should live in the applications, i.e. in the hands of engineers who understand the domain.
Although this approach has been used to justify many troubled RPC architectures, there’s no reason why it cannot be used with event messages sent over a centralised message broker. The point is that you keep business logic out of the transport and limit it to simple routing. Concerns such as orchestration, transformation, and business rules should live in the endpoints that produce and consume messages.
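The pattern can be sketched with a deliberately minimal in-memory bus. This is an illustrative toy, not a real broker: the `Bus` class and handler names are invented for the example, and in practice the pipe would be a message broker such as RabbitMQ or Kafka. The point it demonstrates is the separation of concerns: the pipe only routes by topic, while all transformation and business rules sit in ordinary, unit-testable endpoint code.

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """A deliberately 'dumb' pipe: it only routes messages by topic name."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # No transformation, no business rules -- just delivery.
        for handler in self._subscribers[topic]:
            handler(message)

# The "smart endpoint": logic lives here, in application code owned by
# the domain team, not in the middleware.
invoices: list[dict] = []

def on_order_placed(event: dict) -> None:
    # A business rule that would be painful to express (and test) in
    # centralised integration tooling.
    if event["total"] > 0:
        invoices.append({"order_id": event["order_id"], "amount": event["total"]})

bus = Bus()
bus.subscribe("orders.placed", on_order_placed)
bus.publish("orders.placed", {"order_id": "A-1", "total": 42.0})
```

Because `on_order_placed` is plain application code, it can be covered by ordinary unit tests and reviewed like any other change, which is precisely what centralised integration logic makes difficult.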
This requires you to establish a common message format and protocol that every integration application uses. This can be an opportunity to define a canonical data model that can serve as an “anti-corruption layer” preventing implementation detail from leaking between systems. Every application would be required to map to this format, but you keep this mapping logic inside the application domain.
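One way to picture the anti-corruption layer is as a mapping function that each application owns. The sketch below assumes a hypothetical Salesforce-style record shape and an invented `CanonicalCustomer` model; the detail that matters is that vendor-specific field names never travel past the mapping boundary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CanonicalCustomer:
    """A canonical customer model shared by every integrating application."""
    customer_id: str
    full_name: str
    email: str

def from_salesforce(record: dict) -> CanonicalCustomer:
    # Mapping logic lives inside the application domain, so CRM-specific
    # field names ("Id", "FirstName"...) never leak into other systems.
    return CanonicalCustomer(
        customer_id=record["Id"],
        full_name=f'{record["FirstName"]} {record["LastName"]}',
        email=record["Email"],
    )

customer = from_salesforce(
    {"Id": "003XX", "FirstName": "Ada", "LastName": "Lovelace", "Email": "ada@example.com"}
)
```

Every producing application publishes `CanonicalCustomer`, and every consumer maps from it, so a change to one system's internal model only ever touches that system's own mapping code.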
This style of de-coupled choreography tends to enable a more fluid and decentralised architecture. It doesn’t eliminate the complexity; it just ensures that it is pushed towards the domain engineering teams who are best equipped to manage it.
There is no magic bullet for integration. It’s always going to be a knotty problem involving complex requirements that are hard to manage at scale. Containing all your complexity in a single box doesn’t make it any easier to manage. In fact, centralised integration middleware tends to make these problems even worse.