Artificial intelligence (AI) projects in the open source community are growing at a pace that is both exhilarating and challenging. Stanford University’s 2025 AI Index Report counted a staggering 4.3 million open source AI projects created on GitHub during the previous year, a 40% jump in just 12 months. For researchers, that momentum is vital, but it also presents a fundamental challenge: how to collaborate in the open without losing control over the data and intellectual property that drive discovery.

In a research context, it’s not just about who owns the hardware; it’s about having administrative control over your environment. As researchers and scientists, we live and die by our data, and we need to know that in a shared space, our unique inputs are protected. This ability to independently control and protect a project's digital footprint is what allows for a truly collaborative research environment. Thankfully, a solution has emerged in the research community: a way to bridge the gap through project-level control and a smarter way to manage data gravity.

In my role at Red Hat, I’ve been fortunate to work with the National AI Research Resource (NAIRR) program. Led by the National Science Foundation (NSF), NAIRR was established to evolve US research resources to meet the challenges of AI development, and to help democratize access to the high-end AI compute and data resources typically available to only the largest technology companies. NAIRR, now evolving into a long-term research resource, is creating a scalable national infrastructure that supports researchers and educators from all backgrounds. While this project is a landmark in U.S. research, it also serves as a repeatable example of how projects can maintain independence and transparency in a shared environment.

The architecture of research isolation

NAIRR acts as a watering hole: a model for large-scale, efficient infrastructure that eliminates the need for researchers to build thousands of individual copies of resources. With over 670 projects and participants from across the United States and its territories involved since the project’s pilot, the scale is obvious. But for a watering hole to be effective, every participant also needs their own space, tailored to their specific research needs, in which to thrive. NAIRR uses this approach to help innovators anchor their work in an open source foundation without giving up control. To help maintain these project-level boundaries for researchers in Red Hat’s joint Deep Partnership Pilot with IBM, the AI Alliance and the Mass Open Cloud, Red Hat provides a software layer that orchestrates the underlying hardware. This gives researchers the tools and capabilities needed to maintain autonomous control over their namespace within a multi-tenant cluster, while still allowing the resource provider to manage the environment efficiently. Researchers working on NAIRR have access to our enterprise-grade AI infrastructure, including Red Hat OpenShift, Red Hat OpenShift AI, Red Hat Enterprise Linux, and Red Hat Advanced Cluster Management for Kubernetes.

By providing this consistent stack for the full lifecycle of AI and machine learning experiments, we support a collaborative ecosystem where innovation can thrive without compromising project control. For example, by using Red Hat OpenShift, researchers can assign each project its own Kubernetes namespace and Layer 2 VLAN. They can also use role-based access control (RBAC) to create groups with different levels of resource and data access within their own project or in collaboration with others. This technical configuration ensures that even in a shared datacenter, every project has the isolation of its own “room,” protecting its network traffic and data from unauthorized access. This foundation helps researchers move from initial discovery to validated results while maintaining independence throughout the technology journey.
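The namespace-scoped pattern described above can be pictured with a toy model, a minimal sketch rather than any NAIRR or Red Hat implementation. All names here (users, roles, namespaces) are hypothetical: a role binding grants a set of verbs on resources within exactly one namespace, and anything outside a project's namespace is denied by default.

```python
# Toy model of namespace-scoped, role-based access control (RBAC).
# Users, roles, and namespaces are hypothetical illustrations only.

ROLES = {
    "researcher": {("get", "pods"), ("list", "pods"), ("create", "jobs")},
    "viewer": {("get", "pods"), ("list", "pods")},
}

# Bindings are scoped to one namespace: alice's rights in project-alpha
# say nothing about what she may do in project-beta.
BINDINGS = {
    ("alice", "project-alpha"): "researcher",
    ("bob", "project-alpha"): "viewer",
}

def allowed(user: str, namespace: str, verb: str, resource: str) -> bool:
    """Deny by default; permit only what the user's role in this namespace grants."""
    role = BINDINGS.get((user, namespace))
    if role is None:
        return False
    return (verb, resource) in ROLES[role]

print(allowed("alice", "project-alpha", "create", "jobs"))  # True
print(allowed("alice", "project-beta", "get", "pods"))      # False: no binding there
```

The deny-by-default lookup is the key property: a shared cluster can host many projects, yet each project's "room" is invisible to anyone without an explicit binding inside it.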

Balancing shared scale and sole ownership

The value of NAIRR’s approach is found in its efficiency and its ability to manage data gravity. By providing a shared open source foundation—the software environment that runs on this infrastructure—Red Hat can help organizations maximize the benefits of their infrastructure spend and keep their compute power close to their data. Our contributions are part of a broader, collaborative ecosystem, sitting alongside many other tools and resources that scientists have at their disposal to customize their work.

Since research isn't one-size-fits-all, the level of control required can vary. Sometimes, the isolation researchers need extends beyond software-defined boundaries and to the hardware itself. While many researchers thrive on shared platforms, others need sole ownership of the hardware for specialized measurements, like low-level operating system development or high-precision GPU testing. To support this, NAIRR also provides access to isolated bare metal machines. Whether researchers use containers managed by OpenShift or raw hardware, we provide the stable, secure operational glue that helps them maintain the independence of their work while still participating in this shared community.

Creating space for experimental discovery

A core part of this work is providing a stable environment where we can also explore the future of the intelligence layer. In the era of many AI models, computational power is a scarce global resource, and we need to find ways to use it more efficiently. This is where isolated testbeds become so valuable; they allow us to experiment with new categories like inference routing—intelligently directing simple tasks to cost-effective AI models while reserving massive compute for the most complex problems—without disrupting core research.
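As a rough sketch of the inference-routing idea: classify each incoming request cheaply, then dispatch it to the appropriate model tier. The length-and-keyword heuristic and the model names below are illustrative stand-ins, not how vLLM Semantic Router works; real routers use learned semantic classification rather than keyword matching.

```python
# Minimal sketch of inference routing: send simple requests to a small,
# cost-effective model and reserve the large model for complex problems.
# Model names and the heuristic are hypothetical placeholders.

SMALL_MODEL = "small-8b"    # hypothetical cost-effective endpoint
LARGE_MODEL = "large-70b"   # hypothetical high-capacity endpoint

COMPLEX_HINTS = ("prove", "derive", "multi-step", "analyze", "plan")

def route(prompt: str) -> str:
    """Pick a model tier for a prompt using a cheap heuristic."""
    looks_complex = len(prompt.split()) > 50 or any(
        hint in prompt.lower() for hint in COMPLEX_HINTS
    )
    return LARGE_MODEL if looks_complex else SMALL_MODEL

print(route("What time zone is Boston in?"))                           # small-8b
print(route("Derive the gradient of the loss for this architecture"))  # large-70b
```

The design point is that the classifier must cost far less than the inference it saves; that is what makes routing a net win for a scarce, shared compute resource.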

For example, one of the projects supported by NAIRR, Multi-Modal Semantic Routing for vLLM, is dedicated to extending the Red Hat-founded vLLM Semantic Router project. This effort focuses on speech-text pipelines and vision capabilities, exploring how to maintain transparent and auditable decision-making logic in an open source environment. By providing these sandboxes, we help specific projects push the boundaries of AI research while ensuring the broader community's stability.

This ability to experiment is particularly vital as the industry shifts toward agentic AI, where AI models move beyond simple generation to perform complex, autonomous tasks. For this trend to move from industry hype to scientific breakthrough, researchers need more than just raw power: they need a conduit for innovation that offers professional-grade lifecycle management and standardization. Just as they require control and guardrails for human access, researchers need the same level of oversight for agents accessing data and computing within their projects.
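One way to picture that oversight, as a hedged sketch rather than any specific product's mechanism: give each agent a scoped grant, check every data or compute action against it, and audit every attempt, allowed or not. Agent names and grant strings below are hypothetical.

```python
# Sketch of per-agent guardrails: each agent holds a scoped grant, every
# action is checked against it, and all attempts are audited.
# Agent names and grant strings are hypothetical examples.

AGENT_GRANTS = {
    "lit-review-agent": {"read:papers"},
    "training-agent": {"read:dataset-alpha", "submit:gpu-job"},
}

AUDIT_LOG = []  # (agent, action, allowed) tuples

def perform(agent: str, action: str) -> bool:
    """Allow an agent action only if its grant covers it; audit every attempt."""
    ok = action in AGENT_GRANTS.get(agent, set())
    AUDIT_LOG.append((agent, action, ok))
    return ok

perform("training-agent", "submit:gpu-job")    # allowed by grant
perform("lit-review-agent", "submit:gpu-job")  # denied, but still audited
```

Auditing denials as well as successes mirrors the human-access case: in a shared research environment, knowing what an agent tried to do matters as much as what it was permitted to do.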

As the NAIRR ecosystem matures, it provides exactly that environment. Red Hat facilitates this by providing the operational glue—the integrated stack of Red Hat Enterprise Linux, Red Hat OpenShift, and Red Hat OpenShift AI—that allows for a collection of powerful research tools like PyTorch for optimization and SLURM for workload management to function as a unified, high-performance foundation. This allows researchers to build, train, and deploy autonomous agents with the same level of enterprise-level rigor that high-stakes science requires.

The work we're doing with NAIRR is proof that innovating in the open is the best way to build operationally stable and flexible AI, and that shared resources and individual control aren't at odds. By providing a foundation of open source tools that support project-level boundaries, Red Hat is helping demonstrate that the future of AI can be both collaborative and independent.


About the author

Heidi Picher Dempsey is the Research Director, Northeast US for Red Hat. She works to seek out and grow research and open source projects with academic and commercial partners in areas such as operating systems, hybrid clouds, performance optimization, networking, security, AI and operations. As a network engineer and operations leader, she designed, built, integrated and operated many different nationwide suites of prototype cloud infrastructure for academic, government and industry use, including the National Science Foundation's GENI project clouds. As part of the CTO Research program, she encourages diverse participation in computer science and engineering research, and promotes collaborations with Red Hat researchers.
