We all want to do the right thing. We all want the best outcome. Which means, we all need to abide by best practice, right? When it comes to a legacy enterprise environment, this might not always be correct. Sometimes even when a team makes a technically sound decision and moves forward, they can run into problems. This is because the best technology fit might not be determined on technical merit alone. Organizational fit must also be considered.
Respecting the environment
When it comes to modernizing existing technology, sometimes a consultant or new product is brought in to “fix everything”. However, any “fixer,” no matter how good, must align with the policies and processes of the enterprise in order to successfully improve the existing state.
When there is no effort to understand the environmental effect on architecture choices, existing application code can be seen as “bad” or “incorrect.” Bad code does exist, but sometimes the code is the way it is because choices were made to deal with an inoperative silo blocking what should have been the correct approach. Any attempt to change the state of an existing application without this understanding could result in a team ending up in exactly the same place.
Getting around difficult silos affects architectures
As discussed previously, deeply entrenched silo culture often results in alienation.
Remember Conway’s Law: organizations design systems that mirror their own communication structures? This is very obvious in the architecture of enterprise applications. In enterprises, silos sometimes guard their best technology—so much so that teams outside the silo will use a less elegant solution rather than deal with the frustration of a difficult team. From a Conway’s Law point of view, a trend might appear where a portfolio of applications solves a particular problem in a very non-standard way.
Here is an example of what I mean. I once worked in an environment where I noticed architectures were using in-memory caches and even GitHub to store records. Seemed odd to me. Why not just use a database? After delivering an application that used a database to store certain records, we reached out to the database team to get non-prod and prod databases provisioned. After a very frustrating and fruitless interaction, we quickly realized why teams were avoiding databases, even when using one made sense.
Sunk costs and saving face
It may come as a shock to anyone studying for their Cloud and Kubernetes certs to know that mainframe computing, which has been in production for a much longer time than Kubernetes, works well and is very reliable under load.
It’s also important to remember that this technology is very expensive. As critical workloads are still being run in these environments, many enterprises are still deep into the costs of maintaining data centres. IT teams do not have infinite budgets. And as the old saying goes, “If it ain’t broke, don’t fix it.”
Compromises sometimes have to be made in favour of building upon an existing investment that seems old or antiquated due to existing contracts and financial commitments.
This can make the task of “improving” the state of an existing application an interesting one. The future state that management wants for the application might itself seem like just another “legacy” option compared to what the industry is doing.
For example, the goal might be to migrate an application from Java 6, a very old version of Java, to Java 11 which, although newer, is still old. This may be because the application server approved for use supports Java 11 as its “most current” JVM version. Changing the application server is out of the question because money was already spent on licensing and a multi-million dollar contract was already signed with a consulting partner to operate it. So the team will need to migrate to Java 11, even though Java 17 is now stable and widely used. To some technologists this is unthinkable, and yet they are routinely assigned to these types of projects.
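Even a “legacy-to-legacy” migration like Java 6 to Java 11 still buys the team real language improvements. Below is a minimal, hypothetical sketch (the class and method names are mine, not from any real project) contrasting the same record-filtering logic written in Java 6 style and rewritten with features that only become available after the migration, such as `var`, `String.isBlank()`, `String.strip()`, and the streams API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical example: extracting IDs from records tagged "ACTIVE:",
// written once in Java 6 style and once in Java 11 style.
public class MigrationSketch {

    // Java 6 style: explicit loop, manual list building, trim-based blank checks.
    static List<String> activeIdsLegacy(List<String> records) {
        List<String> out = new ArrayList<String>();
        for (String r : records) {
            if (r != null && !r.trim().isEmpty() && r.startsWith("ACTIVE:")) {
                out.add(r.substring("ACTIVE:".length()).trim());
            }
        }
        return out;
    }

    // Java 11 style: var, isBlank()/strip(), streams, an unmodifiable result.
    static List<String> activeIdsModern(List<String> records) {
        var prefix = "ACTIVE:";
        return records.stream()
                .filter(r -> r != null && !r.isBlank() && r.startsWith(prefix))
                .map(r -> r.substring(prefix.length()).strip())
                .collect(Collectors.toUnmodifiableList());
    }

    public static void main(String[] args) {
        // Arrays.asList permits the null entry used to exercise the filters.
        var input = Arrays.asList("ACTIVE:42", "  ", null, "INACTIVE:7", "ACTIVE: 99 ");
        System.out.println(activeIdsLegacy(input)); // both print [42, 99]
        System.out.println(activeIdsModern(input));
    }
}
```

The behavior is identical; the point is that the newer idioms are shorter, null-and-whitespace handling is clearer, and the returned list is immutable by construction. None of that requires Java 17, which is exactly why the constrained migration can still be worth doing.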
Regulation and public responsibility
As Spider-Man reminds us (well, Uncle Ben specifically), “With great power there must also come—great responsibility!” Many enterprises have gotten so big and successful over the years that they are now subject to stringent regulations. This can make the highest levels of management less inclined to embrace technology change.
For example, in Canada the Office of the Superintendent of Financial Institutions (OSFI) oversees how financial institutions utilize things like public cloud. OSFI has the authority to take serious action if standards aren’t being upheld. It’s my experience that management takes this sort of thing very seriously.
You might say, “my enterprise has never had those sorts of issues.” However, if your enterprise did, it’s likely only specific teams dealt with the consequences and the average developer, architect or manager within the enterprise would not be aware.
Lessons learned over time
Building on the last point, when problems like this do happen, someone gets tasked with ensuring this never happens again. These become security standards and/or operational policies and are often unique to the enterprise.
In most cases, a separate silo makes sure these standards or policies are enforced (e.g., security, audit). Notice I said “enforced”, not “followed.” The difference is important. In many cases that I have seen, the attitude of these teams is to point out when other teams run afoul, but not to guide them in how to do better.
Selecting products or strategies to improve the state of an existing application without full awareness of these policies and standards can lead to lost time and frustration. In some cases, the standards or policies themselves might not be current with the leading edge of technology (which is often what teams want to use), so even a team making what they believe to be good choices may still run afoul.
Start-up software competitors that are publicly traded often do not pay a dividend, and investors who buy their stock are more likely to accept the ups and downs that come with software innovation. However, many established large enterprises have dividend-paying stocks and attract risk-averse investors and fund managers. Any setbacks that happen as a result of modernization might not be well tolerated in such places. This can make moving existing applications to the public cloud, where the public exposure for error is increased, something that makes management very nervous. No one wants to explain to the board why millions of private customer records were scooped out of an unsecured storage blob that everyone thought had been properly protected and only contained system logs.
In summary, to successfully modernize an application in an enterprise environment, you need to be prepared to make some engineering compromises that fit with the culture, vendor relationships, history, and corporate pressures of the enterprise.
Wrapping it all up
It is important to set the expectation with all stakeholders that setbacks WILL occur. Steve Jobs once said, “People don’t know what they want until you show it to them.” I like to expand this by saying people don’t know what they want until you show them what they don’t want—a bunch of times.
Remember, feedback loops are there for a reason. The team needs to be trusted to correct, move on, and avoid making the same mistake twice. Enabling the right team to work independently—using feedback and the appropriate resources, while an opinionated workflow keeps them in their swim lane—is, in my opinion, the best path toward fruitful modernization in an enterprise environment.
If you made it this far, I sincerely thank you for your time and precious attention. Hopefully some of the content in this series will be helpful to your modernization efforts. Every environment is different, as is every code base. Once the project gets going, you will certainly find different challenges, or issues with what I have proposed here. Please feel free to provide your feedback and comments, I would love to hear them. If you like what you read, please share it with colleagues who might also be considering some modernization projects.
About the author
Luke Shannon has 20+ years of experience getting software running in enterprise environments. He started his IT career creating virtual agents for companies such as Ford Motor Company and Coca-Cola. He has also worked in a variety of software environments—particularly financial enterprises—and has experience that ranges from creating ETL jobs for custom reports with Jaspersoft to advancing PCF and Spring Framework adoption at Pivotal. In 2018, Shannon co-founded Phlyt, a cloud-native software consulting company with the goal of helping enterprises better use cloud platforms. In 2021, Shannon and team Phlyt joined Red Hat.