A while back, I was hired to create a proof of concept (PoC) application to illustrate how to transform a monolithic enterprise application into one based on the principles of Microservice Oriented Architecture (MOA). The underlying idea for the proof of concept was to apply the Strangler Pattern: gradually move behavior out of existing services in the monolith into new microservices that replicate the original functionality. I had high hopes.
The demonstration application I created is called Fortune Cookies. Fortune Cookies has a variety of components that allow a user to register with the application and, as part of that registration process, provide the name of a recipient to whom a random fortune is sent at a set interval to a set target. For example, John Smith can register to have a fortune sent to Mary Brown once a day to her phone via SMS. The content of a fortune might be, “Today is your lucky day!”
A diagram of the application components is shown below in Figure 1.
Figure 1: The components that make up the demonstration monolithic application to be strangled
Fortune Cookies is not rocket science, but nonetheless, it does represent a viable use case against which to imagine a transformation strategy using the Strangler Pattern.
Taking the path of least impact
My strategy for implementation was to take the path of least impact. My desire was to avoid changing working code. I’ve been in too many situations in which one little change that was supposed to be trivial turned into a nightmare of side effects that took days to remedy. Thus, I looked for an alternative, which I was fortunate enough to find.
I decided that the easiest way to move forward was to have the application’s database emit a message to a message broker whenever a CRUD operation was performed against it. Messaging out on a state change in the database meant that we didn’t need to change the existing source code. Interested parties external to the database could consume internal data without the originating application having to know anything about external consumers. Exporting internal data out of the monolith was an important first step that would allow us to move services within the Fortune Cookies monolith out to new MOA versions one by one. We could “strangle” the monolith; hence the name Strangler Pattern. The approach is shown in Figure 2 below.
Figure 2: The Strangler Pattern removes services from a monolith one at a time
The MOA version of the service would just consume data coming off the message broker in a way meaningful to the service. Of course, we’d have to figure out how to make all the legacy data in the monolith available to the new MOA version of the service when it came online, but that was a problem to solve at another time. The important thing was that I had a strategy for moving forward with a microservice transformation process without having to twiddle with existing code. Needless to say, I was pretty impressed with myself.
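To make the consuming side concrete, here is a minimal sketch of how such an MOA service might process CRUD events coming off the broker. This is a hypothetical illustration, not the PoC’s actual code: the broker client is stood in for by an in-memory queue, and the event schema (`table`, `operation`, `row`) is an assumption.

```python
import json
import queue

# Stand-in for a subscription to a real message broker (the broker client
# and the event schema below are assumptions, not the PoC's actual code).
broker = queue.Queue()

def handle_crud_event(raw_message: str) -> dict:
    """Reshape a raw CRUD event emitted by the monolith's database into
    the form this microservice cares about."""
    event = json.loads(raw_message)
    return {
        "table": event["table"],
        "operation": event["operation"],  # INSERT, UPDATE, or DELETE
        "payload": event["row"],
    }

def consume_one(q: queue.Queue) -> dict:
    """Pull a single event off the broker stand-in and process it."""
    return handle_crud_event(q.get())

# Simulate the database emitting a message when a fortune row is inserted.
broker.put(json.dumps({
    "table": "fortunes",
    "operation": "INSERT",
    "row": {"id": 1, "text": "Today is your lucky day!"},
}))
fortune_event = consume_one(broker)
```

The point of the shape above is that the new service only depends on the event contract, not on the monolith’s internals.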
The best-laid plans
In order to make my strategy work, all I needed to do was figure out a way to get the database—in this case, a MariaDB database—to emit messages to a message broker. After all, I wasn’t the first person in IT who had to emit messages out of a database. My thinking was that there had to be a solution out there. So I looked around the Internet.
Also, I knew I would need some help. I don’t have the best database programming skills in the world. So, to address my shortcoming, I enlisted the aid of a colleague better versed in database programming than I am.
Between the two of us, we identified an article on the Internet that described how to create and deploy a User Defined Function (UDF) under MariaDB that sends a request to an HTTP server and accepts a response. Granted, this was not a pure call to a message broker, but using the HTTP approach meant we could get data out of the monolith to a target we could treat as a proxy. That proxy, in turn, could forward the data on to a downstream destination later on. That downstream destination could very well be a message broker.
This is not to say that using an HTTP receiver proxy wouldn’t have issues. For instance, we might run into network latency issues when a massive amount of CRUD activity took place in the monolith’s database, and yes, it would be nice just to have a UDF that supported streaming. But the deadline was approaching, and we needed to deliver the goods. So, we moved ahead.
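For a sense of what such an HTTP receiver proxy might look like, here is a minimal sketch using Python’s standard-library `http.server`. Again, this is hypothetical, not the server from the PoC: it accepts a POST from the UDF, parks the payload for a later forwarder (which could push it to a message broker), and returns an explicit, fully framed acknowledgment.

```python
import json
import queue
from http.server import BaseHTTPRequestHandler, HTTPServer

# Received events are parked here for a separate forwarder to push to the
# eventual downstream destination, such as a message broker.
outbound = queue.Queue()

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        outbound.put(body)

        # Acknowledge receipt with an explicit Content-Length so a simple
        # client, such as a database UDF, can read the complete response.
        ack = json.dumps({"status": "received", "bytes": len(body)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(ack)))
        self.end_headers()
        self.wfile.write(ack)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the example

# To run standalone:
#   HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

Queueing before acknowledging keeps the database-side call short; the latency concern raised above then lives in the forwarder rather than in the CRUD path.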
We created the UDF, installed it in the database, and then made an HTTP request as described in the QuickStart of the UDF documentation. It all worked fine when calling example.com.
At this point, we were full of confidence. We figured, “what could possibly go wrong?” which, if you’ve been in IT for any amount of time, is a sure sign that an unimagined catastrophe is on the horizon. But, we were intoxicated by our technical acumen, so we kept going.
The UDF performed as expected. The next step was to have it call a custom-made HTTP server I had created as the target proxy. Our hopes were high. Then doom reared its ugly head.
The UDF was contacting the custom-made HTTP web server just fine, but no response was coming back. Since the expected response was a detailed acknowledgment that the data sent from the UDF had indeed been received and processed by the target, for all intents and purposes the custom-made HTTP web server wasn’t working.
So, I ran the unit tests for the custom-made HTTP web server just to make sure. All the unit tests passed.
The clock was ticking. The delivery date was just around the corner, and both my colleague and I were stopped dead in our tracks. I fiddled with my web server code. The unit tests kept passing. The calls from the UDF to the web server kept failing. No responses were coming back. The clock continued to tick, and tick, and tick until finally, the delivery date came and went.
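In hindsight, our unit tests exercised the handler logic in isolation but never the actual request/response round trip over a socket. A small end-to-end check along these lines (a hypothetical sketch, not the PoC’s actual test code) would have surfaced a server that accepts requests but never returns a well-formed response:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AckHandler(BaseHTTPRequestHandler):
    """Stand-in for the server under test; echoes the body back as the ack."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def round_trip_ok(handler_cls, payload: bytes, timeout: float = 2.0) -> bool:
    """Start the server on an ephemeral port, POST once, and verify that a
    complete response actually comes back over the wire."""
    srv = HTTPServer(("127.0.0.1", 0), handler_cls)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{srv.server_address[1]}/"
        resp = urllib.request.urlopen(
            urllib.request.Request(url, data=payload), timeout=timeout)
        return resp.status == 200 and resp.read() == payload
    except Exception:
        return False
    finally:
        srv.shutdown()
```

Unlike our unit tests, a check like `round_trip_ok` fails when the response never arrives, which is exactly the symptom we were chasing.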
We failed. We were left with lemons, and there was no lemonade to be made whatsoever. We had bet the farm on some code we found on the Internet, and now, despite hours of toil, our efforts were fruitless, no pun intended.
A lesson learned
So, what’s to be learned? When you think about it, we made a mistake that is all too common among developers who have a bent toward open source solutions. We bet the farm on some code we found on the Internet and didn’t take the time to at least ensure that we had the attention and expertise of the person(s) who created that code. We just tried to figure it out on our own.
When it comes to mission-critical code, learn from my mistake. Be prepared to pay for the code and the support that goes with it. Or, you can go it on the cheap and proceed at your own peril.
I hope you enjoyed learning from our lesson in architectural mishaps. Share your own lessons learned with us by contacting us at firstname.lastname@example.org.