Red Hat aims to speed the telecom industry’s shift to open radio access network (RAN) solutions by announcing support for NVIDIA GPUs for 5G vRAN on Red Hat OpenShift, which will enable containerized RAN deployments on industry-standard servers, on any cloud. The solution, powered by NVIDIA GPUs, removes the need for bespoke hardware and runs on Red Hat OpenShift on a standard Linux kernel. This will not only lower the TCO for customers but will also help accelerate network deployments across any cloud while setting the stage for multi-tenancy and RAN-as-a-service offerings.
Red Hat solutions with NVIDIA technology help service providers capitalize on the benefits of edge economics and vRAN. The Red Hat-sponsored ACG report found that using a common horizontal infrastructure for both the 5G core and RAN edge computing equips operators to extend total cost of ownership benefits throughout their end-to-end infrastructure. Integration with GPUs via the Red Hat-certified NVIDIA GPU Operator means that OpenShift can seamlessly meet the high compute resource requirements of running machine learning (ML) jobs and RAN applications on the same OpenShift node.
Our collaboration prepares providers to develop, deliver, and support new applications and services based on location awareness. Our solutions are designed to reduce lag and interruptions in 5G networks. With our vRAN solutions, telcos can achieve scalability and operational efficiency in the distributed cloud.
5G provides communication service providers (CSPs) with additional monetization and growth opportunities through network-as-a-platform. The end-to-end (E2E) 5G network architecture comprises a modern radio access network (NG-RAN, comprising traditional New Radio or disaggregated RAN such as virtual RAN, cloud RAN, or open RAN), multi-access edge computing (MEC), the 5G core service-based architecture (SBA), and an IMS core and application services layer transitioning to the service-based interface (SBI), all built on an innovative, AI/ML-enabled, cloud-native telco cloud.
The transition from traditional RAN with proprietary vendors involves addressing the complex baseband unit (BBU) architecture and decoupling software from hardware by introducing network functions virtualization (NFV). The solutions offered by Red Hat and NVIDIA include a redesigned architecture for the virtual BBU, which requires splitting its functionality into two components: the centralized unit (CU) and the distributed unit (DU). The split places most of the BBU's real-time processing tasks on the DU side, including MAC scheduling, error correction through retransmission, beamforming, segmentation, re-segmentation, and modulation. The CU side covers management and control-related functions. To meet latency requirements, the DU is designed to be placed close to the radio unit (RU); the DU is connected to the RU through the fronthaul network, with the Common Public Radio Interface (CPRI) as the dominant form of sample transport between DU and RU. As the CU can tolerate higher latency, it can be placed farther from the RU and DU, and a group of CUs can be pooled in a centralized edge location. The connection between the CU and DU is called midhaul, where the F1-C and F1-U interfaces terminate at the CU.
Given the complexity of RAN virtualization and the need for vendor interoperability, the O-RAN ALLIANCE has worked on providing a standardized model for cloudifying and orchestrating the RAN components. Orchestration and management of virtualized RAN require a vendor-neutral approach that delivers slice management, scalability, fault tolerance, and uninterrupted upgrades.
The NVIDIA Aerial™ SDK provides a 5G wireless RAN solution with inline L1 GPU acceleration for 5G NR PHY processing. It supports a full-stack framework for gNB integration with L2/L3 (MAC, RLC, PDCP), along with manageability and orchestration. The Aerial SDK also supports non-5G signal processing use cases. It simplifies building a programmable and scalable software-defined 5G RAN PHY with the following two components:
- CUDA Baseband (cuBB): The NVIDIA cuBB SDK provides a GPU-accelerated 5G signal processing pipeline, including cuPHY for the Layer 1 5G PHY, delivering high throughput and efficiency by keeping all physical layer processing within high-performance GPU memory.
- DOCA GPUNetIO: A library that enables GPU-initiated communications, so a CUDA kernel can invoke DOCA GPUNetIO device functions to instruct a GPUDirect-capable network card (NVIDIA ConnectX-6 Dx or A100X converged accelerator) to send or receive packets, as sketched below.
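As a rough, hypothetical sketch of this GPU-initiated I/O pattern (the receive and processing routines below are placeholders, not actual DOCA GPUNetIO symbols), a persistent CUDA kernel can poll the NIC receive queue and handle fronthaul packets without ever returning control to the CPU:

```cpp
// Hypothetical sketch of GPU-initiated packet reception; rx_queue_poll()
// and process_iq_samples() are stand-ins for device-side receive and
// layer 1 processing routines, NOT real DOCA GPUNetIO functions.
#include <cstdint>

struct Packet {
    const uint8_t* data;
    uint32_t       len;
};

// Placeholder: in a real system this would ask the GPUDirect-capable NIC
// for the next fronthaul packet, entirely from device code.
__device__ bool rx_queue_poll(Packet* pkt) { (void)pkt; return false; }

// Placeholder for per-packet layer 1 work (channel estimation, demapping, ...).
__device__ void process_iq_samples(const Packet& pkt) { (void)pkt; }

// Persistent kernel: launched once, it owns the per-packet data path for the
// lifetime of the cell; the CPU only sets *stop_flag at teardown.
__global__ void fronthaul_rx_kernel(volatile int* stop_flag)
{
    Packet pkt;
    while (*stop_flag == 0) {
        if (rx_queue_poll(&pkt)) {
            process_iq_samples(pkt);
        }
    }
}
```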
Figure 1: NVIDIA DOCA GPUNetIO with CUDA drivers and libraries installed on the same platform.
The diagrams below show the NVIDIA cuBB SDK software and hardware components.
Figure 2: cuBB SDK software and hardware components and inline connectivity between NIC and GPU.
cuPHY includes the GPU-accelerated 5G PHY layer software library and SDK examples. It provides GPU-offloaded 5G signal processing.
cuPHY-CP is the cuPHY control-plane software that provides the control plane interface between the layer 1 cuPHY and the upper layer stack.
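To make that L2/L1 boundary concrete, the sketch below shows what a simplified, FAPI-style downlink slot request handed from the MAC scheduler down to layer 1 might look like. This is an illustrative assumption only, not the actual cuPHY-CP message definitions:

```cpp
// Simplified, hypothetical rendering of a slot-level control message from
// the upper layer stack to layer 1; NOT the real cuPHY-CP interface.
#include <cstdint>
#include <vector>

enum class PduType : uint8_t { Pdsch, Pdcch, CsiRs, Ssb };

struct DlPdu {
    PduType  type;
    uint16_t rnti;        // target UE identifier
    uint16_t start_prb;   // first physical resource block of the allocation
    uint16_t num_prb;     // allocation size in PRBs
    uint8_t  mcs;         // modulation and coding scheme index
};

// Everything the PHY needs to build one downlink slot.
struct DlSlotRequest {
    uint16_t           sfn;   // system frame number
    uint8_t            slot;  // slot index within the frame
    std::vector<DlPdu> pdus;  // PDUs scheduled by the MAC for this slot
};
```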
NVIDIA Aerial does a full inline offload of the 5G PHY layer (layer 1). Two innovations, GPUNetIO and the GPUDirect RDMA driver in the Data Plane Development Kit (DPDK), provide optimized, high-performance, real-time layer 1 processing in the GPU. Because layer 1 is fully offloaded to the GPU with no CPU interaction, there is no need for a real-time kernel running on the CPU.
A real-time Linux kernel is needed on the CPU only if some part of layer 1 runs on the CPU, which is not the case with the GPU-optimized Aerial layer 1 solution.
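Continuing the hypothetical sketch above, the host side only launches the persistent layer 1 kernel once and signals teardown; because no per-slot work runs on the CPU, no real-time scheduling is required there. This sketch assumes a 64-bit system with unified addressing, so the mapped host pointer can be passed straight to the kernel:

```cpp
// Host-side sketch: launch the long-lived kernel once, then stay out of the
// per-slot data path. fronthaul_rx_kernel is the hypothetical kernel from
// the earlier sketch.
#include <cuda_runtime.h>

int main()
{
    // Mapped (zero-copy) host memory lets the CPU flip the flag while the
    // kernel is still running, without queuing a copy behind it.
    volatile int* stop_flag = nullptr;
    cudaHostAlloc((void**)&stop_flag, sizeof(int), cudaHostAllocMapped);
    *stop_flag = 0;

    // Dedicated stream so later runtime calls do not wait on the kernel.
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    fronthaul_rx_kernel<<<1, 128, 0, stream>>>(stop_flag);

    // ... cell is live: packets are received and processed on the GPU,
    // with no per-slot CPU involvement and no real-time kernel needed ...

    *stop_flag = 1;                    // request teardown
    cudaStreamSynchronize(stream);     // wait for the kernel to exit
    cudaStreamDestroy(stream);
    cudaFreeHost((void*)stop_flag);
    return 0;
}
```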
A composable infrastructure helps streamline resources and optimize existing IT environments by removing the need for specialized, space-consuming hardware and specialized software. NVIDIA GPUs, DPUs, and converged accelerators can take load off traditional CPUs by disaggregating compute, storage, and networking resources, processing packets faster and with stronger security by isolating infrastructure-heavy tasks.
With Red Hat support of NVIDIA converged accelerators and the NVIDIA Aerial SDK, customers can benefit from:
- Lower total cost of ownership (TCO), reducing overall systems costs associated with deploying and maintaining RAN and AI at scale.
- Greater acceleration of network deployments across the hybrid and multicloud, setting the stage for multi-tenancy and RAN-as-a-service.
- Composable infrastructure, which reduces the need for specialized hardware and, when combined with Red Hat Enterprise Linux, provides low latency and enhanced consistency.
- Connectivity for billions of devices, extending the reach of AI capabilities and applications to all devices at the edge.
- The acceleration of additional use cases, including multi-access edge computing (MEC), autonomous vehicles, and industrial and agricultural applications, by enabling AI and ML at the edge.
For additional information, see the press release and visit www.redhat.com/nvidia.