A while back, I was hired to create a proof of concept (PoC) application to illustrate how to transform a monolithic enterprise application into one based on the principles of Microservice Oriented Architecture (MOA). The underlying idea for the proof of concept was to apply the Strangler Pattern to gradually move behavior out of existing services in a monolith and into new microservices that replicate the original functionality. I had high hopes.

The demonstration application I created is called Fortune Cookies. Fortune Cookies has a variety of components that allow a user to register with the application and, as part of that registration process, provide the name of a recipient to whom a random fortune will be sent at a set interval over a set delivery channel. For example, John Smith can register to have a fortune sent to Mary Brown once a day to her phone via SMS. A fortune might read, "Today is your lucky day!"

A diagram of the application components is shown below in Figure 1.

Figure 1: The components that make up the demonstration monolithic application to be strangled

Fortune Cookies is not rocket science, but it nonetheless represents a viable use case against which to imagine a transformation strategy using the Strangler Pattern.

Taking the path of least impact

My strategy for implementation was to take the path of least impact. My desire was to avoid changing working code. I’ve been in too many situations in which one little change that was supposed to be trivial turned into a nightmare of side-effects that took days to remedy. Thus, I looked for an alternative, which I was fortunate enough to find.

I decided that the easiest way to move forward was to have the application’s database emit a message to a message broker whenever a CRUD operation was performed against it. Messaging out on a state change in the database meant that we didn’t need to change the existing source code. Interested parties external to the database could consume internal data without the originating application having to know anything about those external consumers. Exporting internal data out of the monolith was an important first step that would allow us to move services within the Fortune Cookies monolith out to new MOA versions, one service at a time. We could “strangle” the monolith; hence the name, Strangler Pattern. The process is shown in Figure 2 below.

Figure 2: The Strangler Pattern removes services from a monolith one at a time
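
To make the idea concrete, here is a minimal sketch of the kind of change event the database could emit and how it might land on a broker. Everything in it is an assumption made for illustration: the broker (RabbitMQ via the pika client), the fortune-events queue name, and the payload fields are all hypothetical, and in the actual PoC the emitting was supposed to happen from inside the database rather than from Python application code.

```python
import json

import pika  # RabbitMQ client; the choice of broker is an assumption

# Hypothetical shape of a change event describing a CRUD operation
# performed against a table in the monolith's database.
event = {
    "table": "registration",
    "operation": "INSERT",          # INSERT | UPDATE | DELETE
    "timestamp": "2021-06-01T12:00:00Z",
    "row": {
        "sender": "John Smith",
        "recipient": "Mary Brown",
        "channel": "SMS",
        "interval": "daily",
    },
}

# Publish the event to a queue that interested external services can read.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="fortune-events", durable=True)
channel.basic_publish(exchange="", routing_key="fortune-events", body=json.dumps(event))
connection.close()
```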

The MOA version of the service would just consume data coming off the message broker in a way that was meaningful to that service. Of course, we’d have to figure out a way to make all the legacy data in the monolith available to the new MOA version of the service when it came online, but that was a problem to solve at another time. The important thing was that I had a strategy for moving forward with a microservice transformation process without having to twiddle with existing code. Needless to say, I was pretty impressed with myself.
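
Sticking with the same illustration, the consuming side of a new MOA service might look something like the sketch below. It assumes the same hypothetical RabbitMQ broker and fortune-events queue as the previous sketch; a real service would upsert the event into its own data store instead of just printing it.

```python
import json

import pika  # again assuming RabbitMQ purely for illustration

def handle_event(ch, method, properties, body):
    """Apply a change event from the monolith to this service's own view of the data."""
    event = json.loads(body)
    if event["table"] == "registration" and event["operation"] == "INSERT":
        # A real microservice would write to its own database here.
        print(f"New registration for recipient {event['row']['recipient']}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="fortune-events", durable=True)
channel.basic_consume(queue="fortune-events", on_message_callback=handle_event)
channel.start_consuming()
```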

The best-laid plans

In order to make my strategy work, all I needed to do was figure out a way to get the database (in this case, a MariaDB database) to emit messages to a message broker. After all, I wasn’t the first person in IT who needed to emit messages out of a database. My thinking was that there had to be a solution out there. So I looked around the Internet.

Also, I knew I would need some help. I don’t have the best database programming skills in the world. So, to address my shortcoming, I enlisted the aid of a colleague better versed in database programming than I am.

Between the two of us, we identified an article on the Internet that described how to create and deploy a User Defined Function (UDF) under MariaDB that sends a request to an HTTP server and accepts a response. Granted, this was not a pure call to a message broker, but at least using the HTTP approach meant that we could get data out of the monolith to a target that we could treat as a proxy. That proxy, in turn, could forward the data on to a downstream destination later on. That downstream destination could very well be a message broker.

This is not to say that using an HTTP receiver proxy wouldn’t have issues. For instance, we might run into network latency issues when a massive amount of CRUD activity took place in the monolith’s database, and yes, it would be nice just to have a UDF that supported streaming. But the deadline was approaching, and we needed to deliver the goods. So, we moved ahead.
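
For context, the receiver proxy itself doesn't have to be elaborate. Below is a minimal sketch of the kind of HTTP target we had in mind, built with Python's standard library. The port, the payload handling, and the shape of the acknowledgment are assumptions for illustration; this is not the actual PoC code.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProxyHandler(BaseHTTPRequestHandler):
    """Accepts data pushed out of the database and acknowledges receipt."""

    def do_POST(self):
        # Read whatever the UDF sent in the request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)

        # Placeholder for forwarding the payload to a downstream
        # destination, such as a message broker.
        print("received:", payload.decode("utf-8", errors="replace"))

        # Return the acknowledgment the caller expects.
        ack = json.dumps({"status": "received", "bytes": length}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(ack)))
        self.end_headers()
        self.wfile.write(ack)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```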

We created the UDF, installed it in the database, and then made an HTTP request as described in the QuickStart of the UDF documentation. It all worked fine when calling example.com.

At this point, we were full of confidence. We figured, “What could possibly go wrong?” which, if you’ve been in IT for any amount of time, is a sure sign that an unimagined catastrophe is on the horizon. But we were intoxicated by our technical acumen, so we kept going.

The UDF worked as expected. The next step was to have the UDF make a call to a custom-made HTTP server I created as the target proxy. Our hopes were high. Then doom reared its ugly head.

The UDF was contacting the custom-made HTTP web server just fine, but no response was coming back. Since the expected response was a detailed acknowledgment that the data sent from the UDF had indeed been received and processed by the target, for all intents and purposes, the custom-made HTTP web server wasn’t working.

So, I ran the unit tests for the custom-made HTTP web server just to make sure. All the unit tests passed.

The clock was ticking. The delivery date was just around the corner, and both my colleague and I were stopped dead in our tracks. I fiddled with my web server code. The unit tests kept passing. The calls from the UDF to the web server kept failing. No responses were coming back. The clock continued to tick, and tick, and tick until finally, the delivery date came and went.

We failed. We were left with lemons, and there was no lemonade to be made whatsoever. We had bet the farm on some code we found on the Internet, and now, despite hours of toil, our efforts were fruitless, no pun intended.

A lesson learned

So, what’s to be learned? When you think about it, we made a mistake that is all too common among developers who have a bent toward open source solutions. We bet the farm on some code we found on the Internet and didn’t take the time to at least ensure that we had the attention and expertise of the person(s) who created that code. We just tried to figure it out on our own.

When it comes to mission-critical code, learn from my mistake. Be prepared to pay for the code and the support that goes with it. Or you can do it on the cheap and proceed at your own peril.

Wrap up

I hope you enjoyed learning from our lesson in architectural mishaps. Share your own lessons learned with us by contacting us at enable-architect@redhat.com.


About the author

Bob Reselman is a nationally known software developer, system architect, industry analyst, and technical writer/journalist. Over a career that spans 30 years, Bob has worked for companies such as Gateway, Cap Gemini, The Los Angeles Weekly, Edmunds.com and the Academy of Recording Arts and Sciences, to name a few. He has held roles with significant responsibility, including but not limited to, Platform Architect (Consumer) at Gateway, Principal Consultant with Cap Gemini and CTO at the international trade finance company, ItFex.
