In the face of changing technology demands, local municipalities and federal governments alike can struggle to keep existing infrastructure operational while striving to support their communities with advanced technologies. These include 5G, artificial intelligence (AI) and machine learning (ML), and the Internet of Things (IoT): critical pieces for meeting constituent demands for better, faster, and more efficient services, but ones that come with steep IT requirements. 5G infrastructure alone necessitates an unprecedented physical footprint at the street and building level to serve the number of IoT devices anticipated to operate on 5G networks, projected to be as high as 1,000,000 devices per square kilometer.

IoT and 5G technologies are key components in creating smart cities, where data from sensors, cameras, and specialized connected devices must be processed in real time to provide insight and assistance with traffic congestion management, crime prevention, and asset and property maintenance. But smart cities are just one example of a growing challenge facing public sector organizations. The bigger question is: how do these organizations address the need for computing outside their core datacenter, at the literal edge of the network? Adding to this complexity is the proliferation of microservices-based, cloud-native applications running on Kubernetes container management platforms, a wholesale sea change in how traditional IT operations are conducted.

With unprecedented user demand driven by new technologies, navigating this unfamiliar landscape can be a significant challenge for public sector organizations that want to be next-generation ready but also want to remain committed to open technology standards that internal IT teams can maintain.  

At Red Hat, we’ve recognized the need to support innovations such as IoT, ML, and other emerging technologies with a set of open and standard platforms, including Red Hat Enterprise Linux (RHEL), Red Hat Ceph Storage and Red Hat OpenShift. These solutions are capable of running a wide variety of applications and serving as a foundation for many of the intelligent solutions required to analyze and address today’s (and tomorrow’s) IT challenges. 

Traditional hardware often struggles to keep pace with these demands. Computer systems now have to offer specialized capabilities to support compute-intensive operations in areas such as data analytics and AI. Our partner NVIDIA helps address this challenge with hardware and software that accelerate workload processing by offloading intense computing tasks to specialized computational resources: graphics processing units (GPUs).

Using these workload-accelerating components may sound complex. In reality, Red Hat and NVIDIA are collaborating to provide platforms that abstract the underlying hardware and simplify deployments. Together, we are working to deliver a standardized, familiar software stack that runs equally well as part of the back-office infrastructure for mission-critical tasks and on devices at the edge of a network.

Red Hat has been helping partners build solutions based on our open source software platforms for years, so what makes our collaboration with NVIDIA different? 

Both companies recognize the need to deliver standardized, accessible infrastructure based on a robust and scalable software stack. In addition to making some of the highest-performing GPU hardware on the market, NVIDIA has been at the forefront of delivering a consistent software stack that enables accelerated computing across industries and verticals. The NVIDIA CUDA parallel computing architecture, CUDA-X libraries, and software tools run on a variety of platforms, ranging from the credit card-sized Jetson Nano to the massive DGX-2 AI supercomputing system. These hardware platforms and the accompanying CUDA software have been embraced by a community of more than 1.2 million developers to accelerate applications across a broad set of domains, from AI to high-performance computing to telecommunications.
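To make the accelerated stack a bit more concrete, here is a minimal sketch, not taken from any Red Hat or NVIDIA material, of how a developer might exercise a GPU from Python using PyTorch, which builds on CUDA and CUDA-X libraries such as cuBLAS and cuDNN under the hood. The matrix sizes and the timing approach are illustrative assumptions only.

    import time

    import torch

    # Pick the GPU if one is visible to the runtime, otherwise fall back to CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Running on: {device}")

    # A large matrix multiplication stands in for the kind of dense linear
    # algebra found in ML training and inference workloads.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    start = time.perf_counter()
    c = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
    elapsed = time.perf_counter() - start

    print(f"matmul result shape {tuple(c.shape)} computed in {elapsed * 1000:.1f} ms")

The same kind of script can run unchanged on a developer workstation, a DGX server, or a GPU-equipped OpenShift node, which is the sort of consistency the joint stack is intended to provide.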

NVIDIA and Red Hat are working with several OEM partners to bring pre-configured hardware and software solutions for accelerated AI deployments to end-users. These solutions enable developers to more effectively design, test, and validate their applications and AI/ML models using programming languages and frameworks of their choice. They also shorten the path to production deployments and empower IT personnel to efficiently manage and maintain computationally demanding AI applications in their infrastructure.

Red Hat and NVIDIA are working to enable RHEL and OpenShift across NVIDIA’s portfolio, including the DGX line of servers, which are designed to deliver high training performance for the most complex AI challenges. Additionally, the companies recently announced support for NVIDIA EGX platforms, which are designed to address the challenges presented by edge computing. These EGX systems run high-value AI-enabled services such as augmented and virtual reality (AR/VR), IoT, and remote healthcare, and can be deployed at the edge of telco networks for 5G implementations. With support for NVIDIA Aerial, a GPU-accelerated 5G radio access network (RAN) software development kit running on top of EGX, Red Hat and NVIDIA enable telecommunications companies to build fully virtualized 5G RANs.

Fundamentally, we are seeking to simplify deployments of AI infrastructure across all domains by bringing accelerated, standardized data science, ML, and deep learning (DL) workflows to datacenters, the edge, and anywhere in between. We believe that the future of AI infrastructure is based on cloud-native open source technologies, and we’re excited to continue collaborating with NVIDIA to make this future a reality.

Red Hat at GTC DC 2019

To learn more about how Red Hat and NVIDIA align on open source solutions to fuel emerging workloads, visit Red Hat (booth #208) at the GPU Technology Conference (GTC) in Washington, D.C., November 5-6, 2019. Our team of experts will be on hand to answer questions and provide additional information about Red Hat’s product portfolio.

At the event, Red Hat will present the following session: Accelerating AI and Machine Learning with Containers and the Kubernetes Platform from Red Hat - Wednesday, November 6, 2:00 PM to 2:25 PM in the Oceanic room.

We also encourage you to stop by our booth to see the following demonstrations:

  • NVIDIA’s Metropolis application framework for smart cities running on OpenShift
  • Real-time object detection running in NVIDIA GPU-accelerated, RHEL-based containers with OpenCV and PyTorch
  • Image inference on flowers using a neural network running on an OpenShift cluster with NVIDIA GPUs (a minimal sketch of this kind of inference follows below)
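
For readers curious about what the flower-inferencing demonstration involves under the hood, the following is a minimal, hypothetical sketch of GPU-backed image classification with PyTorch and torchvision. The ResNet-50 model, the "flower.jpg" input path, and the top-5 output format are illustrative assumptions rather than the actual demo code.

    import torch
    from PIL import Image
    from torchvision import models, transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A generic ImageNet-pretrained classifier stands in for the demo's flower model.
    model = models.resnet50(pretrained=True).to(device).eval()

    # Standard ImageNet preprocessing: resize, crop, convert to tensor, normalize.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("flower.jpg").convert("RGB")  # hypothetical input image
    batch = preprocess(image).unsqueeze(0).to(device)

    with torch.no_grad():
        logits = model(batch)
        probs = torch.nn.functional.softmax(logits[0], dim=0)

    top_prob, top_class = probs.topk(5)
    for p, c in zip(top_prob.tolist(), top_class.tolist()):
        print(f"class {c}: probability {p:.3f}")

Packaging a script like this into a RHEL-based container image is what allows it to be scheduled onto GPU-equipped OpenShift nodes alongside other workloads.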

About the author

Yan Fisher is a global evangelist at Red Hat, where he extends his expertise in enterprise computing to emerging areas that Red Hat is exploring.