At Red Hat, our IT and Engineering functions encounter the same challenges and make the same decisions our customers face every day, from infrastructure optimization and application delivery to automating and enhancing the security of our global business. Right now, almost every organization we talk to is navigating the complexities of an AI journey, and we’re in that same boat. As users of our own products—because we love and believe in the technology we build—we want to pull back the curtain on our internal experience. We hope that the lessons we’ve learned through some foresight and some trial and error might help you navigate your own path.
Standardizing a fragmented infrastructure
Our move toward AI didn't begin with a model. It began with a massive cleanup of our technical debt. A few years ago, Red Hat’s IT department was struggling to manage a fragmented landscape of virtual machines and containers across multiple platforms—including Red Hat Virtualization, Red Hat OpenStack, and the public cloud. This fragmentation meant that we lacked a consistent way to deploy or manage workloads. Simple tasks were slowed down by "it works here, but not there" bottlenecks, creating constant operational friction. We realized that speed and innovation are impossible when you’re fighting your own infrastructure every day.
To solve this challenge, we migrated all our workloads to Red Hat OpenShift, creating a single environment across bare-metal and public cloud environments. We moved virtualized workloads from Red Hat Virtualization and Red Hat OpenStack Platform to Red Hat OpenShift Virtualization, and we now use Red Hat OpenShift AI, part of the Red Hat AI portfolio, for our AI workloads. Our workloads are connected through automation with Red Hat Ansible Automation Platform. And we have standardized all of this on Red Hat Enterprise Linux (RHEL), which we maintain as our core build across all of our platforms. RHEL gives us simplicity along with a consistent security posture across every platform.
This foundation gave us the architectural flexibility to run our own stack alongside third-party tools on one unified hybrid cloud platform. Because we cleaned up our infrastructure first, we had the infrastructure stability we needed when it was time to start experimenting with AI.
Cleaning up our data to find the truth
Even with a stable platform, we recognized that the value of AI is tethered to data quality. Like many large organizations, we suffered from significant data duplication and accumulated complexity. At one point, we discovered we had more dashboards than we had employees (which might sound familiar to some of you). And in one case, we found 73 different “source of truth” spreadsheets in a single department. We realized we couldn’t build a reliable AI strategy on top of unreliable, fragmented data. That’s because the output of any AI tool will only ever be as good as the information the model processes.
We spent the better part of 2 years doing the difficult work of data hygiene. You simply can’t avoid this step. The result is a single source of truth that unifies these disparate streams and runs on top of Red Hat OpenShift and Red Hat Enterprise Linux. This internal data and AI platform orchestrates the flow of information across our business systems to generate analytic insights. It has set us up to massively scale our agentic workloads.
By running this platform on Red Hat OpenShift, we can keep our compute in-house while we’re using external data vendors. We’ve also built a significant amount of security and compliance into the in-house compute, so we can avoid vendor lock-in and reimplementation. By “drinking our own champagne,” we’ve figured out how to put our strengths to use while taking advantage of our vendors’ data systems.
Laying the groundwork for what's next
Cleaning up our infrastructure and data was a fundamental shift in how we operate. We learned that you cannot bypass the "boring" work of standardization if you want to reach the "exciting" work of AI innovation. By establishing a single, consistent environment on Red Hat OpenShift and a governed data layer in our internal data and AI platform, we moved from a reactive state of managing "messes" to a proactive state where we could finally focus on the future of our workforce.
In Part 2, we will explain how we moved from rigid policy to associate empowerment and detail the 3 pillars defining our internal AI strategy.
About the authors
Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.
During his more than 20 years as a software engineer, Wright has worked in the telecommunications industry on high availability and distributed systems, and in the Linux industry on security, virtualization, and networking. He has been a Linux developer for more than 15 years, most of that time spent working deep in the Linux kernel. He is passionate about open source software serving as the foundation for next generation IT systems.
Marco Bill is Senior Vice President and Chief Information Officer at Red Hat. Throughout his career at Red Hat, Marco's mission has been to improve productivity and business outcomes and to create capacity for the company's growth through the modernization of business processes and technology.
His current role includes leading all IT functions as well as Information Security & Risk.
Marco has more than 30 years of experience in IT and support delivery. During his time at Red Hat, he has led Application Transformation, Customer Success & Services, and Customer Experience teams, challenging the standard industry definition of support with a strategic focus on innovation that better serves customers and delivers greater value.
Prior to joining Red Hat, he held various engineering and support roles at Hewlett-Packard, Compaq, and Digital Equipment Corp. in the United States, Europe, and Asia.