Throughout our fictional case study of Davie Street Enterprises (DSE), we have witnessed how embracing modern tech industry best practices has drastically increased the speed at which the company can accomplish its goals while improving overall operational efficiency.
DSE has done this by finding platforms, services, and processes that best align with its goals—made possible by the combination of Red Hat products and our partner ecosystem. In this post, we will focus on those operational efficiencies and how they can be further enhanced through the use of Red Hat Cloud Services, all of which are built on Red Hat OpenShift.
DSE introduced cloud services with Dynatrace
As noted in a previous post on DSE’s digital transformation journey, the company deployed Dynatrace into its OpenShift environment. It selected Dynatrace’s Software-as-a-Service (SaaS) offering as it was the fastest way to get up and running.
Since deploying it, DSE has been extremely happy with the value Dynatrace brings through its AIOps capabilities, which have drastically improved both Mean Time to Detect (MTTD) and Mean Time to Repair (MTTR).
DSE had been a traditional IT organization in that it self-hosted everything. Even 15 years ago, this was still the best way to get things done. The industry has come a long way in a short amount of time—but that constant refrain of "we’ve always done it that way" holds far more power than it should.
With this implementation, DSE's operations team proved to internal stakeholders that high-profile systems can be successfully managed and run externally. Based on that success, several other teams have moved to introduce more cloud services into their own mix.
Big data is a real thing - especially with edge computing
With edge technologies now in place to address maintenance and reliability detection, the program's focus has shifted to other areas of DSE where the digital seed still needs to be planted.
It’s not that Monique Wallace, CIO, doesn’t want to keep helping Stephanie Wilson, Director of Plant Operations, continue her organization's digital journey; DSE simply has only so much bandwidth to support and maintain new infrastructure deployments.
That leaves Wilson with a problem she never thought she would have: a pool of data that, while growing exponentially, is not being mined for potentially revolutionary optimizations. She plans to start small with a number of simple, already-identified changes, but needs help to do so. As just mentioned, there is no capacity to support deploying new products into the corporate datacenters.
Wilson decided to talk to Daniel Mitchell, Chief Architect, to identify a way to get things done. Mitchell reminded Wilson that using the public cloud didn't have to be dismissed as too big an undertaking. They could start small, as long as they got support from Susan Chin, Senior Director of Development, and sign-off from Zachary L. Tureaud, Director of Security Engineering.
Tureaud was OK with using services on the public cloud as long as they had a solid and proven security track record and there was a secured network link to wherever the service was running. He also suggested using existing vendors that had already been formally vetted.
Next up was meeting with Gloria Fenderson, Senior Manager of Network Engineering, to find out whether any of the public clouds already had private links in place. Fenderson told Wilson that IT Operations was migrating some internal services to Google Cloud, eventually including mail and calendar. As a result, private network links were already in place.
Mitchell and Wilson both agreed that Red Hat would be the best candidate for the platform: OpenShift runs on Google Cloud, and Red Hat was an existing vendor. Additionally, Chin’s organization was already familiar with developing for and deploying on top of OpenShift.
Red Hat OpenShift Dedicated was a natural fit. Its add-ons for Open Data Science and Kafka gave Wilson the ability to bring her own data scientists on board to focus on streaming her data into the platform and building models that could be used to find optimizations that her plants needed to stay competitive.
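To make the streaming piece concrete, here is a minimal sketch of how a plant sensor reading might be shaped into a Kafka message before being sent to a managed Kafka endpoint. The topic, field names, and sensor identifiers are illustrative, not part of DSE's actual design; the send itself would use a Kafka client such as kafka-python against the bootstrap URL the managed service provides.

```python
import json
from datetime import datetime, timezone

def to_kafka_message(plant_id: str, sensor_id: str, reading: float):
    """Shape one sensor reading into a (key, value) pair ready for a
    Kafka producer. Keying by plant keeps each plant's readings ordered
    within a partition. All names here are hypothetical examples."""
    key = plant_id.encode("utf-8")
    value = json.dumps({
        "plant_id": plant_id,
        "sensor_id": sensor_id,
        "reading": reading,
        "ts": datetime.now(timezone.utc).isoformat(),
    }).encode("utf-8")
    return key, value

# Against a managed Kafka endpoint, the send would look something like:
#   producer = KafkaProducer(bootstrap_servers="<managed-kafka-bootstrap-url>")
#   producer.send("plant-telemetry", key=key, value=value)
key, value = to_kafka_message("plant-7", "vibration-12", 4.73)
```

From there, the data scientists' models consume the same topic, which is what makes the streaming add-on a natural pairing with the data science tooling.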
Improving time-to-market with self-service
Andres Martinez, Principal Developer, maintains a portfolio of client and partner-facing applications that make up the corporate website.
He has heard other teams say how much OpenShift has improved their development pipelines. He started looking into it, but it seemed like a big lift, with internal support already stretched thin by all the digital transformation work. Plus, finding the capital to buy the required hardware would likely push the effort off for another year or two.
Then, in a Scrum sprint planning ceremony, Dan Johnson, Senior Director of Sales, mentioned that his group was leading the migration of the corporate customer relationship management (CRM) solution to Microsoft Dynamics 365. With this new information, Martinez talked with Mitchell, Chief Architect, and they agreed that moving his group onto OpenShift was in line with the corporate vision of a unified development platform.
In addition, the Microsoft Azure Red Hat OpenShift offering is the ideal deployment model. Martinez's team can reuse much of the tooling built by other teams under Chin, it eliminates the need for upfront capital expenses, and the data the applications use lives primarily in the CRM, which is already being moved to the Microsoft environment.
With the switch to Azure Red Hat OpenShift as the new base, clusters can be spun up and torn down on-demand to support the needs of the development and testing teams. This ability to increase the number of parallel work streams available with no lead time for procurement is crucial.
It allows for faster delivery of new functionality from inception to production, which the business is demanding to keep up with newer competitors in the market. Mitchell will guide the implementation of all this functionality following GitOps principles to ensure hands-off deployments that work consistently.
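In GitOps, the desired state of each environment lives in a Git repository and a controller continuously reconciles the cluster to match it. OpenShift ships this pattern as OpenShift GitOps, which is based on Argo CD. As a sketch only, with a hypothetical repository and application name, a declared deployment might look like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: corporate-website        # illustrative application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/dse/corporate-website-config.git  # hypothetical repo
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: corporate-website
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the declared state
```

Because the manifest, not a person, defines what runs, spinning up a fresh cluster for a new work stream is just a matter of pointing the same repository at it.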
Operational and security benefits of cloud services
Normally, Ranbir Ahuja, Senior Director of IT Operations, would be concerned by the influx of new technologies into DSE outside of formal programs.
This kind of influx usually puts extraordinary stress on his teams: they are rarely brought into the procurement process early enough, and they end up working overtime to fill gaps in the technology to meet internal requirements, or scrambling to find capacity to run the new technology on. This time, it's different.
The pressure is gone, and Ahuja’s team takes an advisory role, making sure the right parties are involved and that service management processes are updated to reflect who to call. Managed services address the concerns operational teams usually run into when new technology platforms are adopted internally.
Here are a few examples:
Developer-first approach. Managed cloud services above the level of pure Infrastructure-as-a-Service (IaaS) are designed with developers in mind. There are application programming interfaces (APIs), command line tools and web interfaces with views specifically designed for how developers want to work.
Creating new clusters, applying their configuration, and deploying applications into those clusters can be easily built on top of and into the suite of products used for DevOps. This isn’t limited to the tools on the selected cloud provider; it extends to anything from Red Hat’s partner ecosystem. A couple of examples are source code management providers like GitLab and GitHub, and continuous integration/continuous delivery (CI/CD) pipeline products like CloudBees Jenkins and JFrog Pipelines.
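As a sketch of what that integration can look like, assuming a GitLab CI pipeline with the cluster API URL and a login token stored as CI variables (all names here are illustrative), a deploy stage might use the standard `oc` client to apply manifests kept in the repository:

```yaml
# .gitlab-ci.yml fragment (illustrative) -- deploy stage targeting an OpenShift cluster
deploy:
  stage: deploy
  image: quay.io/openshift/origin-cli:latest    # image that ships the `oc` client
  script:
    - oc login "$OPENSHIFT_API_URL" --token="$OPENSHIFT_TOKEN"  # hypothetical CI variables
    - oc project corporate-website
    - oc apply -f k8s/                          # manifests version-controlled alongside the code
  only:
    - main
```

The same pattern works from Jenkins or JFrog Pipelines; only the pipeline syntax changes, not the `oc` commands.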
Underlying technology support for cloud services, especially managed ones, is included in the product subscription. Site Reliability Engineering (SRE) teams run these platforms under a defined shared responsibility model. They are the experts in OpenShift and the cloud it runs on; it is what they do all day, every day.
There is no learning curve for internal teams on how to deploy fully operational clusters that are supported and follow best practices. It is a few clicks in a web browser or a single command line to build a new cluster.
There is no reason to build, or retrain existing, support teams to ensure they know how to monitor and maintain the platform that the mission critical systems run on. This allows the development teams at DSE to focus on what they do best: providing the functionality their internal teams need without needing to deal with alerts like failed hardware at 11:59 PM on New Year’s Eve.
Moving budgets from capital to operational expense removes things like depreciation and amortization from annual planning, which allows budgets to be right-sized without baggage from previous years limiting future flexibility. In addition, there is no hardware life-cycle management to plan for, no lease buyouts because a system just can’t be migrated before the end date, and none of the many similar situations.
Security compliance and certification are built into the shared responsibility model, where Red Hat and its partner clouds bring decades of expertise working across government and private industry to deliver the most secure profile they can by default. In the background, work is constantly underway to comply with additional government and industry requirements such as ISO 27001, PCI DSS, HITRUST, FedRAMP High, and SOC 2.
Day-to-day concerns like security patching are also taken care of. Critical security vulnerabilities are patched as fixes are released, with the SRE teams handling the work involved.
Life-cycle management is also taken care of. Not only are older clusters upgraded as required to maintain supportability, but new clusters are automatically deployed using the latest available versions of the managed cloud service. Cloud services usually run n-1 from the generally available product, which allows time for extra quality assurance cycles to ensure the systems are ready to support DSE’s mission-critical workloads.
To follow the same journey that DSE is now on with Red Hat Cloud Services, there are many resources available to help you get started.
The first step is to build on a solid base, which is where managed Red Hat OpenShift comes into play. It is available as a first-party service on Microsoft Azure, AWS, and IBM Cloud, so you can get started using your existing cloud relationships.
If you have a relationship with Red Hat already, then Red Hat OpenShift Dedicated is available to be run on Google Cloud and AWS.
First-party managed Red Hat OpenShift services offered by the largest cloud service providers, with integrated Red Hat support, include Microsoft Azure Red Hat OpenShift, Red Hat OpenShift Service on AWS (ROSA), and Red Hat OpenShift on IBM Cloud.
In addition, Red Hat has a growing list of managed services that add specific functionality to address actual business problems without being tied to a specific cloud provider's offerings.
They include API management (including an API gateway and a developer portal), streamlined development and deployment of applications that work with real-time data streams, and a fully supported sandbox that lets your data scientists rapidly develop, train, and test machine learning (ML) models.
Managed cloud services offered by Red Hat start with Red Hat OpenShift Dedicated as the base. Then, there are add-on offerings such as Red Hat OpenShift API Management, Red Hat OpenShift Streams for Apache Kafka, and Red Hat OpenShift Data Science.
We invite you to explore the offerings listed here to get started on your journey.