Allow me to begin this article with a debatable statement: "Evolution of software architecture is a very misleading term." I say this because there is so much literature that elaborates on this alleged evolution, but in reality, very little has changed.
Perhaps the most impactful change is how software design moved from monolithic applications to collections of small, meaningful, autonomous, and isolated services capable of composing a large-scale, enterprise-wide business solution. This concept is microservices-oriented architecture.
A similar idea, called component-based architecture, was in use a few years ago. Many argue that these are two entirely different concepts, and there is some merit to that argument, but the similarity also suggests that software architecture tends to travel in circles. Perhaps that is why monolithic architecture enjoyed such a long life, and it might be why a trait associated with it is treated as the standard for some emerging architectures. One such situation presented itself to me recently.
[ Related reading: 5 steps to migrate from monolith to microservices architecture ]
A migration anecdote
Recently, I oversaw a migration from the IBM WMQ Java Message Service (JMS) backbone to Red Hat AMQ. The organization also wanted to migrate the JMS backbone from traditional virtual machine (VM) hosting to Red Hat OpenShift.
We began with a deep-dive session reviewing the existing architecture, its pain points, migration expectations, future objectives, and a few more parameters, which resulted in a finalized new architecture. As the session progressed, AMQ continued to exceed expectations, not only meeting the core requirements around JMS but also showing how its modern, flexible, and innovative approach could achieve the same goals at a much lower maintenance cost. But then came a flipper: in cricket, a spinning delivery that bamboozles the batter by holding its line instead of turning sideways. The customer asked, "Our currently operational JMS layer, based on IBM WMQ, is very stable. We want to achieve a similar level of stability after the migration. How will AMQ ensure that?"
This question is fascinating for several reasons. First, AMQ is designed to operate both in traditional VM environments and in containerized environments, and how it achieves stability differs fundamentally between the two. Second, the WMQ version the organization was using was not designed for containerization, making it a monolithic platform. So while the two products are somewhat comparable when hosted on VMs, containerization makes them polar opposites. Both are JMS-compliant platforms, but AMQ runs as a lightweight process that relies on the underlying container platform for stability, resilience, scalability, and elasticity. This is where we needed to place our emphasis: the question about stability is valid, but it originates from experience in an entirely different world.
Historically, monolithic deployments achieve reliability and stability through redundancy: multiple replicas of the same installation base operate behind a load balancer, and each node grows its capacity through vertical scaling. This design offers high availability and, with the right choice of application platform, ensures fault tolerance. This is how WMQ operates.
Still, there is a limit. The JMS backbone's availability is tightly coupled with the number of nodes available within the cluster. Fewer nodes in the cluster mean greater risk of a service becoming unavailable. If the remedy lies in adding more nodes to the cluster, there is an associated cost regarding hardware, licensing, and operations.
On the other hand, AMQ operates on OpenShift but delegates each node's lifecycle management to the underlying platform. OpenShift ensures the uninterrupted availability of the JMS backbone, the required cluster nodes, and the hardware resources behind them. Additionally, the services are self-healing, whether recovering from a node-level failure by provisioning a replacement or resuming operations from the point of failure, which reduces the need for administrators to intervene manually.
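To make that self-healing behavior concrete, here is a minimal sketch of a JMS consumer that rides out a broker pod being rescheduled, assuming the AMQ 7 (ActiveMQ Artemis) core JMS client is on the classpath. The service name amq-broker, the queue orders, and the credentials are hypothetical placeholders, not values from this migration.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ResilientConsumer {
    public static void main(String[] args) throws JMSException {
        // Hypothetical OpenShift Service "amq-broker" exposing port 61616.
        // reconnectAttempts=-1 tells the client to retry indefinitely while the
        // platform reschedules a failed broker pod, so the consumer resumes on its own.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://amq-broker:61616?reconnectAttempts=-1&retryInterval=1000&retryIntervalMultiplier=2.0");

        try (Connection connection = factory.createConnection("appUser", "appPassword")) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders");
            MessageConsumer consumer = session.createConsumer(queue);

            // Block for up to 30 seconds waiting for a message.
            Message message = consumer.receive(30_000);
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        }
    }
}
```

The point is the division of labor: OpenShift replaces the failed broker pod, while the client's reconnection settings keep the consumer alive until the broker is reachable again, so neither side needs an administrator in the loop.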
Monolithic deployments are often criticized as costly and laborious, and the criticism holds for even basic operations such as scaling. The concern is rooted in the static nature of the infrastructure. Scaling vertically means adding more memory or processing power to existing VMs; scaling horizontally means provisioning and preparing new VMs, deploying the intended software, and adding those nodes to the existing cluster. Either way, scaling usually happens only after a review process identifies a requirement and determines that the operational infrastructure is insufficient.
This contrast is very effective for drawing parallels between WMQ's monolithic deployment and AMQ's containerized one. Concerns about WMQ's administration being laborious are valid, because even automated scaling processes don't provide the elasticity that AMQ realizes from OpenShift. AMQ can leverage the platform's continuous monitoring to assess performance against thresholds set for memory consumption or CPU utilization. Autoscaling can then shore up the cluster by adding to a node's processing power or by adding more nodes to the cluster. It is also intelligent enough to detect a dip in load and release the additional resources once they are no longer needed.
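As one illustration of how that threshold-based autoscaling could be wired up programmatically, here is a minimal sketch assuming the fabric8 Kubernetes Java client (6.x API). The namespace messaging, the StatefulSet name amq-broker-ss, and the specific limits are hypothetical placeholders; in practice the same policy is usually declared in the cluster rather than written in code, but spelling it out makes the threshold logic explicit.

```java
import io.fabric8.kubernetes.api.model.autoscaling.v2.HorizontalPodAutoscaler;
import io.fabric8.kubernetes.api.model.autoscaling.v2.HorizontalPodAutoscalerBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class BrokerAutoscaling {
    public static void main(String[] args) {
        // A CPU-based horizontal autoscaling policy for a hypothetical broker StatefulSet.
        HorizontalPodAutoscaler hpa = new HorizontalPodAutoscalerBuilder()
                .withNewMetadata()
                    .withName("amq-broker-hpa")
                    .withNamespace("messaging")
                .endMetadata()
                .withNewSpec()
                    .withNewScaleTargetRef()
                        .withApiVersion("apps/v1")
                        .withKind("StatefulSet")
                        .withName("amq-broker-ss")
                    .endScaleTargetRef()
                    .withMinReplicas(2)   // never drop below two broker pods
                    .withMaxReplicas(6)   // cap the cluster size
                    .addNewMetric()
                        .withType("Resource")
                        .withNewResource()
                            .withName("cpu")
                            .withNewTarget()
                                .withType("Utilization")
                                .withAverageUtilization(75) // scale out above 75% average CPU
                            .endTarget()
                        .endResource()
                    .endMetric()
                .endSpec()
                .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Once created, the platform adds pods as utilization crosses the threshold
            // and removes them again when the load dips back below it.
            client.resource(hpa).create();
        }
    }
}
```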
Lessons learned
Fortunately, our explanation made sense to the organization, and the migration effort was successful. It was also a great takeaway for our team: precisely because software architecture has evolved less dramatically than the literature suggests, a fundamental trait of one architecture can easily be misconstrued as the standard a newer one must meet in the same way. That is why, when faced with such a puzzle, drawing parallels between the respective worlds and focusing on the individual components helps a lot.
About the author
Iqbal is a software architecture enthusiast, serving as a senior middleware architect at Red Hat.