Red Hat Blog
Hybrid cloud is a new reality for IT. While the benefits of this model are many, I believe that organizations should embrace hybrid cloud because it can give them more choice. Until recently, our choices for delivering solutions were largely mutually exclusive of one another -- will I build the app in house on my own? Will I outsource the infrastructure to a service provider? Will I buy a SaaS-based app? Will I just put everything in the public cloud? While each of these choices has its own merits, orchestrating them can be difficult. The platform we've advanced, Red Hat OpenShift, is designed to allow for more choice in how to leverage the benefits of a hybrid multi-cloud world.
In addition, throughout our IT careers we've made decisions not to source all of our work to one vendor. We do this to maintain competition and some independence. I think of open hybrid multi-cloud as innovation for the CIO. We like to imagine CIOs saying, "I am no longer bound to the things that have been limiting the speed at which I can move, namely infrastructure and operations, and I can focus on the art of software development while leveraging the capabilities of an open hybrid multi-cloud environment."
As a dynamic business environment can require companies to think of themselves as software companies, organizations can look to their IT teams to transform them, and the way IT teams can do that is by writing software. Now, when IT teams have a business problem that they can't fix with existing tools, they can build the solutions themselves. And they can do that on our OpenShift platform. Once the CIO decides to modernize applications in an open hybrid multi-cloud way, the CIO can do so without the monoliths we've come to know in the past. The CIO can therefore move things around more fluidly and leverage what an open hybrid multi-cloud model affords. From the CIO's perspective, open hybrid multi-cloud can provide a combination of greater durability and portability.
2018 has proven to me that choice is still king in enterprise IT. But it's not the same choice that we as an IT vendor have experienced in past years, in no small part due to cloud computing. The selection of software, of hardware, and of where to run it is still very much relevant to IT decision-makers, but the concept of choice has evolved to encompass more existential questions for IT organizations:
Who do I choose to run my infrastructure for me, if not me?
Why would I not just build my stack myself?
Underpinning these two new questions are two concepts: the cloud and open source.
Who’s going to run it? I really don’t want to run it.
Managed services aren’t exactly new, having existed at a basic level for decades with data-center outsourcing, hosted email, and even capabilities like managed ERP and CRM systems. But the managed services of 2018 existed at a new level, designed to abstract away the complexities of running infrastructure and even services like databases, letting IT teams focus on extracting value from their work rather than the minutiae of maintenance.
Given our history of defining choice as “software” and “footprint,” we could have overlooked CIOs’ and IT leaders’ desire to have someone other than their own teams run their infrastructure; after all, this isn’t really a software choice. And given our focus on open source software, meeting that desire requires a class of managed services that can operate in harmony with the underlying open source projects. But we don’t want to be in the business of avoiding challenges. Now, much as Red Hat led on traditional open source models, we are seeking to spearhead approaches for managed services through offerings like Red Hat OpenShift Dedicated and Red Hat OpenShift on Azure.
This lets us bring to bear the expertise that we have in infrastructure software: not only are we building these technologies, but we know how to make them run better, how to enhance their security, and how to support them. As a managed service provider, we essentially become our own customer for our customers. This helps make our products better for the customers who do choose to run our technologies themselves.
Forget running it, why don’t I build it?
The rise of the cloud as a computing backbone and the ready availability of open source software mean that IT leaders can take a do-it-yourself approach to IT and create their own custom stacks. Truthfully, I see more and more CIOs being told that they should now build their own stacks rather than buying specific technologies. This can be beneficial in that they are able to build exactly what they want. But it can also be a liability: given the pace of open source development, these custom-built stacks likely cannot consume new innovation as fast as it occurs, potentially becoming unmaintainable forks.
One of the most challenging parts is that building your own stack can be very effective and efficient in the beginning, which is the lure. But that’s the easy part; the future of the stack requires:
Ongoing maintenance, which can become increasingly complex, especially as a deployment expands and hosts more and more intricate workloads and systems. This can require a new, expanded, and increasingly specialized set of skills to properly oversee -- skills that may not exist internally at a given organization.
Fixes and patches when something breaks (and something will break). The expertise required to understand and solve problems in modern IT is increasing, with layers of APIs spanning the kernel to the orchestration system. When something fails, millions of lines of code across systems may be called into question with subtleties and nuances more complex than in software systems of the past. Failures can be hard to diagnose and even harder to actually fix.
Providing fixes and patches back to the open source community to help limit the chance that an organization will run into these problems again in a future version of the code. This is a critical, and often overlooked, facet of consuming open source projects: fixing an issue once in-house doesn’t mean it gets fixed in the community, especially if you’re not engaged with the upstream.
It is at the intersection of maintenance, talent, and open source influence that I have seen DIY stacks fail. And unfortunately, by the time they fail, they can be hosting mission-critical applications, which makes the stakes even higher. Enterprise CIOs realize how high these stakes can be and should very carefully weigh build vs. buy, even when there are pressures on them to “only” build.
Of course, we’re going to say that Red Hat can help here: our goal is to provide enterprise-grade products that let you consume the open source innovation behind them. We don’t just ship open source components; we staff thousands of engineers on these projects to actually understand the code we ship, and we contribute to these communities to build the influence needed to get fixes included in newer versions.
Given this new environment of “choice” and the pace of change and innovation that’s happening, the ability to use new innovation, from the cloud to containers and beyond, makes CIOs, CTOs, and the IT industry more optimistic. There is optimism that we can solve emerging challenges, that we can deliver new, differentiated services and products to our end users, and that we can be better prepared for a changing business environment. IT leaders shouldn’t feel overwhelmed by this choice, however; this is where trusted partners like Red Hat come in. We’ve been here before, and, despite the evolving and increasingly cloudy landscape, we’re still here to help build your next-generation IT.
Mike Kelly is chief information officer and Matt Hicks is senior vice president of software engineering at Red Hat.
About the authors
As Chief Information Officer, Mike Kelly is responsible for leading the information technology (IT) organization at Red Hat Inc., the world’s leading provider of open source solutions. Since joining the organization in 2016, Kelly has focused on leading the IT team as they provide the tools and technologies that enable Red Hatters every day. Before joining Red Hat, Kelly served in senior leadership roles at McKesson Corporation, including as Senior Vice President of IT Shared Services and Chief Information Officer, McKesson U.S. Pharmaceutical from October 2012 to August 2016 and Senior Vice President, Enterprise Application Services from May 2011 to October 2012. Kelly also served as Chief Information and Chief Technology Officer of McKesson Specialty Health, a division of McKesson Corporation, from October 2007 to May 2011 and as Chief Information Officer of Oncology Therapeutics Network from October 2005 to October 2007.
Matt Hicks was named President and Chief Executive Officer of Red Hat in July 2022. In his previous role, he was Executive Vice President of Products and Technologies where he was responsible for product engineering for much of the company’s portfolio, including Red Hat® OpenShift® and Red Hat Enterprise Linux®. He is one of the founding members of the OpenShift team and has been at the forefront of cloud computing ever since.