Migrating from a monolith to microservices: Lessons learned
Allow me to begin this article with a debatable statement: "Evolution of software architecture is a very misleading term." I say this because there is so much literature that elaborates on this alleged evolution, but in reality, very little has changed.
Perhaps the most impactful change is how software design moved from monolithic applications to collections of small, meaningful, autonomous, and isolated services capable of composing a large-scale, enterprise-wide business solution. This concept is microservices-oriented architecture.
A similar idea, called component-based architecture, was in use a few years ago. Many argue these are two entirely different concepts, and there is some merit to that view, but it also shows that software architecture tends to travel in circles. Perhaps that is why monolithic architecture enjoyed such a long life, and why a trait associated with it is sometimes treated as the standard for emerging architectures. One such situation presented itself to me recently.
[ Related reading: 5 steps to migrate from monolith to microservices architecture ]
A migration anecdote
Recently, I oversaw a migration from the IBM WMQ Java Message Service (JMS) backbone to Red Hat AMQ. The organization also wanted to migrate the JMS backbone from traditional virtual machine (VM) hosting to Red Hat OpenShift.
We began with a deep-dive session reviewing the existing architecture, its pain points, migration expectations, future objectives, and a few other parameters, which resulted in a finalized target architecture. As the session progressed, AMQ continued to exceed expectations, not only meeting the core requirements around JMS but also showing how its modern, flexible, and innovative approaches could achieve the same goals at a much lower maintenance cost. But then came a flipper: in cricket, a spinning delivery that bamboozles the batter by holding its line instead of moving sideways. The customer asked, "Our currently operational JMS layer, based on IBM WMQ, is very stable. We want to achieve a similar level of stability after the migration. How will AMQ ensure that?"
This question is fascinating for several reasons. First, AMQ is designed to operate both in traditional VM environments and in containerized environments, and how it achieves stability differs at the core implementation level depending on where it is hosted. Also, the WMQ version the organization was using was not designed for containerization, making it a monolithic platform. So while the two products are roughly comparable when hosted on VMs, containerization makes AMQ a polar opposite. Both are JMS-compliant platforms, but AMQ runs as a lightweight process that delegates stability, resilience, scalability, and elasticity to the underlying container platform. This is where we needed to place our emphasis: the question about stability is valid, but it originates from experience in an entirely different world.
Historically, monolithic deployments achieve reliability and stability through redundancy: multiple replicas of the same installation base operate behind a load balancer. This design offers high availability and, with the right choice of application platform, fault tolerance. This is how WMQ operates.
Still, there is a limit. The JMS backbone's availability is tightly coupled with the number of nodes available within the cluster. Fewer nodes in the cluster mean greater risk of a service becoming unavailable. If the remedy lies in adding more nodes to the cluster, there is an associated cost regarding hardware, licensing, and operations.
On the other hand, AMQ operates on OpenShift but delegates each node's lifecycle management to the underlying platform. OpenShift ensures the uninterrupted availability of the JMS backbone, the required cluster nodes, and the hardware resources. Additionally, the services are self-healing, whether recovering from a node-level failure by provisioning a replacement or resuming operations from the point of failure, reducing the need for manual intervention by administrators.
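As a concrete illustration of that delegation, here is a minimal sketch of an `ActiveMQArtemis` custom resource as managed by the AMQ Broker Operator on OpenShift. The names and sizing are hypothetical; the point is that the administrator declares the desired state, and the platform keeps that many broker pods running, replacing any that fail:

```yaml
# Hypothetical broker cluster definition for the AMQ Broker Operator.
# The Operator reconciles this spec, and OpenShift restarts or
# reschedules failed broker pods to maintain size: 3.
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: jms-backbone           # hypothetical name
spec:
  deploymentPlan:
    size: 3                    # desired number of broker replicas
    persistenceEnabled: true   # message journal survives pod restarts
    messageMigration: true     # drain messages from pods on scale-down
  acceptors:
    - name: amqp
      protocols: amqp
      port: 5672
```

Nothing here encodes *how* to recover from a failure; recovery is an emergent property of the platform reconciling actual state against this declared state.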
Monolithic deployments are often criticized as costly and laborious, and this is valid for some basic operations, like scaling. The concern is rooted in the static nature of the infrastructure. Scaling vertically means adding more memory or processing power to existing VMs; scaling horizontally means provisioning and preparing new VMs, deploying the intended software, and joining the new nodes to the existing cluster. Either way, scaling usually comes at the end of a process that identifies a requirement and determines that the operational infrastructure is insufficient to meet it.
This comparison is effective in drawing parallels between WMQ's monolithic deployment and AMQ's containerized one. Concerns about WMQ's administration being laborious are valid, because even automated scaling processes don't provide the elasticity that AMQ gains from OpenShift. AMQ can leverage the platform's continuous monitoring to assess performance against thresholds set for memory consumption or CPU utilization. Autoscaling can then shore up the cluster by adding to a node's processing power or by adding more nodes to the cluster. It is also intelligent enough to detect a dip in load and release the additional resources once they are no longer needed.
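The threshold-driven behavior described above maps directly onto a standard Kubernetes `HorizontalPodAutoscaler`, which OpenShift supports. This is a sketch with illustrative names and thresholds, not the configuration from the migration itself; it watches average CPU utilization and grows or shrinks the target workload between declared bounds, shedding pods again when load dips:

```yaml
# Hypothetical autoscaling policy: keep average CPU near 75%,
# scaling the broker workload between 2 and 5 pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jms-backbone-hpa       # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: jms-backbone-ss      # hypothetical workload name
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```

Contrast this with the monolith's path: there, the same adjustment means procuring, provisioning, and licensing new VMs; here, it is a ten-line declaration evaluated continuously by the platform.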
[ Plan your next cloud project based on your current cloud results by asking these 4 essential cloud project questions. ]
Fortunately, our explanation made sense to the organization, and the migration effort was successful. It was also a great takeaway for our team: when software engineering lacks genuinely groundbreaking evolution, a fundamental trait of one architecture is easily misconstrued as a requirement for its successor. That is why, when faced with such a puzzle, it helps to draw parallels between the two worlds and focus on the individual components.
Navigate the shifting technology landscape. Read An architect's guide to multicloud infrastructure.