Over the past several months, Red Hat has been diving into one of the most significant shifts in our industry: the practical, large-scale adoption of generative AI (gen AI) within a major engineering organization. We are not unique in this journey, but at Red Hat, "in the open" isn't just a development model—it's our culture. We believe it's important to share what we're doing, what we're learning, and how we see this shaping the future of open source collaboration.

To be clear, this isn't a publicity piece about a single, perfect tool or an instant success. It's a story about culture, choice, and how AI—when grounded in solid engineering principles—becomes a powerful accelerator for open source innovation.

The "why:" AI, Red Hat, and the open source imperative

It's impossible to ignore AI's hype, but what's obscured by all that buzz is a fundamental truth: AI is a new layer of the technology stack. It's a capability multiplier, much like compilers, IDEs, and the cloud were before it.

For Red Hat, our "why" is twofold:

  1. Internal acceleration: We have thousands of engineers working on millions of lines of code across thousands of upstream projects. The potential for our engineers to augment their work with AI to reduce toil, accelerate problem-solving, and automate mundane tasks is massive. We owe it to our engineers to provide them with the best tools to do their best work.
  2. The open source future: More importantly, Red Hat’s mission is to be the defining technology company of the 21st century, and we believe open source is the best way to build technology. If AI remains a proprietary "black box" world, it runs counter to everything we stand for. We are pursuing AI to understand it, to "dogfood" it, and ultimately to help build and champion an open, hybrid AI stack—from models to platforms—that our customers and communities can trust.

Our internal adoption is the first step on that journey. We know we have to live it to lead it.

The rollout: Choice and productivity

We learned a couple of lessons early on. The first is that our initial AI policy rollout was sound from a risk and legal perspective, but it was hard for engineering teams to follow and use as practical guidance. We took that opportunity to work as a team and refine our policy, bringing clarity and establishing some of the best practices covered in our previous post.

The next lesson was that a single, one-size-fits-all AI tool would fail. An engineer's time is valuable. Engineers have a wide variety of tool and process preferences, and they generally don't move from their setups unless something fits well into their workflows or provides enough value to make switching worthwhile. Forcing engineers into one tool, or into a sub-par tool, is the fastest way to kill adoption. We experienced this firsthand with our first attempts to introduce code assistants, and even lower-level tools for engineers to build their own assistants. Those tools didn't offer enough utility to drive mass adoption, or were too hard to build with, particularly when better solutions were readily visible in the market. Many of our engineers tried them, but in the end, the tools didn't make them more productive.

Since then, our strategy has pivoted to one of choice within a bounded set of best-in-class tool options.

  • For general productivity: Software engineering often brings to mind IDEs, CI/CD, and hours-long hack sessions, but this isn't just about code. A typical week for an engineer or developer includes meetings, reviews, stand-ups, tickets, and emails. They don't live in an editor. To help them manage some of this work, we looked to our office productivity vendor for options and broadly rolled out Google Gemini to our organization. AI tools like Gemini help our engineers with a range of tasks, from summarizing complex email threads, to drafting documentation, to understanding new concepts. We found that these are the "low-friction" entry points that get everyone speaking the same language.
  • For the developer desktop: We embrace a multi-tool approach. Some of our teams prefer an IDE experience—like VS Code with open source AI assistant plugins, or Cursor—and some prefer terminal assistants, such as Claude Code, Gemini CLI, or aider. Our developers can choose the coding assistant that best integrates with their preferred IDE (VS Code, JetBrains, or even Emacs or Vim). They use these either with open large language models (LLMs) run locally or with frontier models under agreements designed to uphold data privacy. This respect for each engineer's expertise and workflow has been critical for increasing adoption.

The sandbox: Internal experimentation

Providing tools is one thing, but fostering innovation is another. The most exciting developments have come from the bottom up. We've seen an explosion of internal experimentation in which engineers are using AI to solve their own problems.

Here are a few examples:

  • CI/CD triage: One team built a tool that ingests CI failure logs from a complex upstream project and uses an LLM to identify the most likely cause. The tool then points maintainers to the specific failed test or commit that introduced the regression (a rough sketch of this pattern follows the list).
  • AI-assisted backporting: Several teams have either trained small models to aid with project-specific backporting, or are using agentic AI workflows to help backport changes from upstream open source projects to downstream product repositories.
  • Reducing toil: We're seeing AI-powered scripts to refactor boilerplate code, aid our site reliability engineering (SRE) teams, or even suggest optimizations for our performance-sensitive work.
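
To make the CI triage pattern concrete, here is a minimal sketch of the core step. It is not the internal tool itself: the endpoint, model name, log handling, and function names are placeholder assumptions. It simply sends a log excerpt and a recent commit list to an OpenAI-compatible endpoint, which is the kind of interface a vLLM deployment or a proxied frontier model exposes:

```python
# Sketch of an LLM-assisted CI triage step: summarize a failed job log and ask
# the model to point at the most likely culprit test or commit.
# The endpoint, model name, and file path below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.example.internal/v1",  # OpenAI-compatible proxy
    api_key="sk-placeholder",
)

def triage_ci_failure(log_text: str, recent_commits: list[str]) -> str:
    """Ask the model for a likely root cause, citing a failed test or commit."""
    prompt = (
        "You are helping triage a CI failure. Given the log excerpt and the\n"
        "recent commits, identify the most likely failed test and the commit\n"
        "that probably introduced the regression. Be explicit about uncertainty.\n\n"
        f"Recent commits:\n{chr(10).join(recent_commits)}\n\n"
        f"Log excerpt (tail):\n{log_text[-8000:]}"
    )
    response = client.chat.completions.create(
        model="example-open-llm",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.1,  # keep the answer focused rather than creative
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("ci-job-1234.log") as log_file:
        hints = triage_ci_failure(
            log_file.read(),
            ["abc123 fix scheduler race", "def456 bump dependency"],
        )
        print(hints)
```

The plumbing is deliberately boring. The value comes from landing the model's hypothesis next to the failure, where a maintainer would be looking anyway, and from the maintainer treating it as a lead rather than a verdict.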

These small, focused experiments are where an enormous amount of value is unlocked. They create a culture in which AI is just another tool in the engineer's problem-solving toolkit. While this can lead to some duplication, it encourages all our engineers to develop the skills needed to work with these new technologies. And it also allows us to compare and contrast solutions to determine best practices. Essentially, we're using grassroots open source practices internally to develop the best solutions to our problems.

The platform: Running it the Red Hat way

This type of experimentation can't happen in a vacuum—it needs a platform. Here we’ve taken a hybrid approach, leveraging frontier models for heavy lifting and using our own products to host applications and local models. Some of our choices include:

  • Red Hat Developer Hub: We use Red Hat Developer Hub (our internal instance of Backstage.io) as the gathering point for our AI work. It catalogs our internal AI projects, provides learning paths, and links to documentation on responsible use. It turns "AI chaos" into a discoverable, managed set of capabilities.
  • Red Hat OpenShift: We use OpenShift as the foundation for our infrastructure. This gives our teams a scalable, protected, and consistent environment to deploy and manage their AI-powered applications.
  • Model serving: We're using vLLM and Red Hat OpenShift AI to serve a variety of open LLMs. This allows us to serve models efficiently, scaling them up or down as needed across a variety of cloud and on-prem infrastructure. We also proxy some frontier models through these platforms, providing temporary sandbox accounts for proof-of-concept work with a uniform access method and monitoring.
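
For a sense of what the serving layer looks like at its simplest, here is a minimal vLLM sketch using its offline Python API. The model name is a small placeholder; in practice our deployments sit behind OpenShift AI and vLLM's OpenAI-compatible server rather than a single process:

```python
# Minimal vLLM offline-inference sketch. Production deployments run behind
# OpenShift AI as an OpenAI-compatible service; this only shows the core API.
from vllm import LLM, SamplingParams

# Placeholder model; swap in whichever open LLM you actually serve.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = ["Summarize why a robust CI pipeline matters for AI-generated code."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

Running the same models behind vLLM's OpenAI-compatible server is what makes the uniform access method practical: clients talk to one API shape whether the model behind it is local or proxied.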

This approach is critical. Using our own products helps us understand how they operate in real-world scenarios and identify gaps, and ultimately we take Red Hatters' feedback and use it to improve them. Experimentation and iteration remain core to improving our solutions for ourselves and our customers.

The lessons: AI demands better engineering

This is the most critical lesson for our fellow open source contributors: AI does not replace fundamental engineering practices; it makes them more important than ever.

An AI coding assistant is like the world's fastest, most enthusiastic intern. It’s both super-motivated to help and naively self-assured. It has no context, no "taste," and it can be confidently wrong. You would never let an inexperienced contributor merge code directly to main without review. The same applies here.

We are relentlessly emphasizing our existing open source best practices in the context of AI:

  1. Code review is non-negotiable: Human review is the ultimate backstop. Is the AI-generated code idiomatic? Does it handle edge cases? Does it introduce a subtle security flaw? Does it align with the contribution guidelines of the upstream project? Engineers are still responsible for the commit. A follow-on question is whether AI can assist in the review process; we believe it can.
  2. Trust your (CI) pipeline: The continuous integration (CI) pipeline is an unbiased validator. It doesn't care if a human or an AI wrote the code. It checks for style, runs the linters, and executes the tests. A robust CI system is one of the best safety nets you can have when integrating AI-generated code.
  3. Spec-driven development: Coming up with crisp, clear specifications for your changes, with guidelines and guardrails for coding assistants to follow, can be very helpful. This practice is quite common in traditional software development, but somewhat less prevalent in the open source community, due to the organic nature of "scratch your own itch" and collaborative, iterative development. Providing these specs, however, can lead to better alignment in the community, easier review, and eventual documentation.
  4. TDD is your guide: Test-driven development (TDD) has found a new superpower. Writing the test first provides an unambiguous specification for the AI. You can feed the test and the function signature to the model and ask it to "make this pass" (a toy sketch follows this list). This turns AI from a "random guesser" into a "focused problem-solver." One caveat we've found: LLMs tend to be overly eager to please and will often distill code down to do effectively nothing, just to pass the tests. This is again a case where humans need to orchestrate the AI to make sure it actually solves the problem.
  5. Clear, concise changes: It should already be standard practice to keep changes cleanly isolated to facilitate review. The Linux kernel community has described this very well in its patch submission process, and it applies very broadly. It's easy to get carried away with "just one more change" when using AI, so the guideline to keep changes clean and independent becomes doubly important with these tools.
  6. Understand, don't copy-paste: The biggest danger is using AI to write code you don't understand. We are coaching our engineers to use AI as augmentation and as a way to better understand their project, asking it things like, "Explain this function to me," "Why did you choose this approach?" or "What are the security implications of this code?" A developer who understands the why behind the code will always be more effective than one who just copies the answer. Similarly, developers need to be able to explain a change to other developers: why it's important and why it operates the way it does. We encourage our engineers to submit code upstream only after they feel comfortable with this level of understanding. Nobody wants AI slop.
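
To make the test-first idea in point 4 concrete, here is a toy, self-contained sketch. The function and test names are hypothetical, not from a real Red Hat project: a human writes the signature and the tests, hands both to the assistant with "make these pass," and the failing tests are the specification.

```python
# Toy test-first sketch (hypothetical names, not from a real project).
# Step 1: a human writes the signature and the tests below.
# Step 2: both are handed to the assistant with "make these pass".
# Running pytest before the implementation exists fails by design;
# that red state is the unambiguous specification.


def parse_fixes_tag(commit_message: str) -> str | None:
    """Return the commit SHA referenced by a 'Fixes:' tag, or None if absent."""
    raise NotImplementedError  # the assistant fills this in


def test_extracts_short_sha():
    msg = 'net: fix refcount leak\n\nFixes: 1a2b3c4d5e6f ("net: add refcount")\n'
    assert parse_fixes_tag(msg) == "1a2b3c4d5e6f"


def test_returns_none_when_tag_absent():
    # Edge case included deliberately: it stops an over-eager model from
    # "passing" by returning a constant.
    assert parse_fixes_tag("docs: fix typo") is None
```

Tests with deliberate edge cases, like the second one above, are the practical guard against the over-eager-to-please failure mode described in point 4, and a human still reviews the implementation that eventually makes them pass.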

The journey ahead

Red Hat is still in the early days of this transformation. We've learned a lot, but AI technology is evolving faster than anything we've seen in the past. What seemed impossible a year ago is quite common today. What seemed implausible six months ago is starting to prove out. With this pace of innovation, we're thinking about how to integrate AI more holistically into our overall development process: for example, using it to accelerate the request for enhancement (RFE), refinement, requirements, code, and testing stages to go from request to solution in a much shorter time. Importantly, we're also beginning to develop metrics to actively measure if and where AI helps based on actual data, not just anecdotes.

At Red Hat, we're committed to learning in the open, sharing our best practices, and contributing to the open source tools that will power this next generation of software. AI is a new kind of component. Let's build it, test it, and integrate it together, the open source way.

About the authors

Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.

During his more than 20 years as a software engineer, Wright has worked in the telecommunications industry on high availability and distributed systems, and in the Linux industry on security, virtualization, and networking. He has been a Linux developer for more than 15 years, most of that time spent working deep in the Linux kernel. He is passionate about open source software serving as the foundation for next generation IT systems.

Josh is the Technical Advisor for Global Engineering. He joined Red Hat in 2011 and has been part of the Fedora and CentOS communities and the Fedora kernel team, and has served as a RHEL Architect.
