In many open source communities, there’s a fair amount of skepticism around the use of generative AI (gen AI) tools for contribution and development. There are valid reasons for concern. Our goal in this article, and in Red Hat's own practice, is to address those concerns directly and not dismiss them. Our answers aren't just advice for others; they guide our own engineers, most of whom are open source contributors as well.
We'll share with you the guidelines we've established for Red Hat engineers, based on our use of open source principles in practice. But first, we'd like to put the current wave of new tools into context.
A little historical context
For the last 4 decades, we've been regularly implementing new and improved tools and processes for software development. You name it: Compilers, version control systems, IDEs, virtual machines (both kinds), cloud instances, agile development, containers, configuration management, and automated testing. Every set of tools was once new, and many of them triggered heated arguments about authorship, quality, and legitimacy. There was a time when both compiler flags and auto-complete in IDEs were hot-button issues.
AI-based development tooling is no different. Nor should it be. Over time, we’ll find that AI tools improve our development lives tremendously in some areas and not at all in others, and adoption will proceed accordingly. We use tools to solve problems in open source, and the new tools will help us solve old problems while discovering new ones.
If there’s a core problem in the world of open source, it can be expressed as, "too many projects, not enough maintainers." Today's project leader needs to do more than ever: faster releases, quicker security updates, secure software supply chains, CI/CD, regulatory compliance, and large-scale contributor management. These expectations are not sustainable without better tooling that helps maintainers to do more with less effort. Through principled use of AI, Red Hat believes that we can build the next generation of developer tools to meet this challenge.
Principles of AI adoption in open source
In order for the new tools to benefit open source, we need to adhere to the open source ethos that has built Red Hat and our industry. Accordingly, Red Hat has developed guidelines for AI-based open source contribution for our staff that are based on 3 principles:
- Innovate responsibly
- Be transparent
- Respect the community
Innovating responsibly
Regardless of whether they’re using an AI tool, an IDE, a pair programming session's output, or any other method of producing code and docs, each contributor is fully responsible for what they contribute. The individual contributor is the human-in-the-loop who vouches for the quality, security, and compliance of the contribution. Contributors should understand the AI-assisted code just as if they wrote it entirely themselves. They should also be able to explain what it does, how it interacts with other code in the project, and why the change is necessary. We don’t see AI as a replacement for developers. The goal is to automate tedious tasks to free them up for complex, creative problem solving. We believe in a future where developers are amplified, not automated.
Our principle of human accountability reframes AI as a powerful assistant and tutor, not a replacement. A newcomer can use it to understand complex boilerplate and learn best practices, allowing them to focus on the core logic of their contribution while making fewer mistakes. Senior contributors can use new tools to perform more efficient and thorough review and testing. The responsibility remains with the people—senior members must mentor the contributor, not just the code, and junior members must be accountable for what they submit and demonstrate a willingness to learn.
Being transparent
Openness fosters trust. Marking substantial AI-assisted contributions, such as with an "assisted-by" line in the commit, helps communities develop best practices together and allows for auditing if issues arise. This also permits projects to learn, over time, which AI tools are helpful for their project development and which aren't working for them.
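As a rough sketch of what that marking could look like (the trailer wording, the tool placeholder, and the author shown here are all illustrative, and each project defines its own convention), a commit message might read:

```
Fix race condition in scheduler startup

Rework the lock acquisition path and re-run the existing
integration tests to confirm the fix.

Assisted-by: <name of the AI tool used>
Signed-off-by: Jane Developer <jane@example.com>
```

Because Git already treats trailing key-value lines such as Signed-off-by: as trailers, an assisted-by line fits alongside existing tooling and can be searched or audited later with standard Git commands.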
Marking AI-assisted work also helps reviewers evaluate new contributions appropriately. Low-quality AI-generated submissions are a serious problem for projects. Red Hat will continue to work on practices and tools that we'll share with the whole ecosystem as we learn how to better address these challenges.
Respecting the community
Effective collaboration in open source relies on respecting each project's established contribution policies and social norms. Our first responsibility is to understand and engage with a community's chosen process for adopting new technologies like AI—or to help start a discussion about creating such a process where one doesn’t exist. In other words, contributing to the conversation rather than attempting to dictate it.
We know that some projects will welcome new tooling, some will prohibit it, and some will adopt specific policies around marking and acceptable uses. Where we can, Red Hat will help communities develop and adopt policies that help them maintain their community values, health, and quality standards. The key consideration is for projects to be able to use AI tools in a way that works for them.
Innovation in action at Red Hat
Our use of AI-powered automation for maintaining Red Hat Enterprise Linux (RHEL) packages is a real-world example of innovating responsibly. As detailed in this blog post by Laura Barcziová, building a reliable production system required a deep focus on accountability. The engineering team built in critical safeguards, such as dry-run modes and detailed tracing, so that a human can always understand and audit the AI's decisions. This focus on building for reliability and enabling human oversight is key to responsible innovation.
The Fedora Project's AI-Assisted Contributions Policy process is a powerful example of transparency and respect for community governance. Developed through extensive public debate, it requires accountability and disclosure, serving as a model for how open source projects can create their own clear, pragmatic guidelines for AI.
Open source is about principled innovation
Red Hat believes that AI offers tremendous opportunities for open source projects and contributors. We are committed to evolving our ecosystem in a way that preserves key open source principles. This commitment is rooted in a simple truth—our entire product portfolio is built on the innovation happening in upstream open source projects. The health, vibrancy, and productivity of these contributor communities are not just a priority, but are the very foundation of our success.
Our product strategy reflects this commitment—from delivering an enterprise-grade AI platform with Red Hat AI, to embedding AI capabilities across our entire portfolio, to sharing our own process innovations and discoveries that we use to improve quality and security.
This is a collaborative process, and we are approaching it with transparency. We’re tackling longstanding problems in open source that are bigger than Red Hat. We invite you to join us on this journey as we work with upstream communities to build the tools, define the standards, and shape the future of software development together.
About the author
Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat. Wright leads the Office of the CTO, which is responsible for incubating emerging technologies and developing forward-looking perspectives on innovations such as artificial intelligence, cloud computing, distributed storage, software defined networking and network functions virtualization, containers, automation and continuous delivery, and distributed ledger.
During his more than 20 years as a software engineer, Wright has worked in the telecommunications industry on high availability and distributed systems, and in the Linux industry on security, virtualization, and networking. He has been a Linux developer for more than 15 years, most of that time spent working deep in the Linux kernel. He is passionate about open source software serving as the foundation for next generation IT systems.