
Enterprises and telecommunication providers are looking at the edge as the newest IT footprint, observing the development of intelligent edge applications and monitoring the shift of workloads from traditional datacenters to the outer boundaries of public and private networks. The common realization is that bringing processing power and storage closer to the end user or data source is imperative to delivering high-value services, scaling across geographically distributed locations, and providing a faster, more satisfying service experience.

Although the edge is, from a datacenter point of view, something of an opposite to the cloud, it is much closer to “home” if you are operating outside of traditional enterprise boundaries. In the context of the open hybrid cloud, however, edge computing is fully embraced: a large fleet of physical devices operating at the edge looks a lot like a cloud, especially since those devices have to work and be managed in unison, even if each one is performing its own set of tasks.

Technological innovation at the edge is a critical component of Red Hat’s strategy. Intelligent cloud-native applications are transforming the enterprise edge in every industry and helping bring innovation into new markets. Some estimates put the number of edge devices at 29 billion by 2022, and we, along with the rest of the IT industry, are preparing to deal with the associated complexities. We are working on simplifying at-scale deployment and management of edge devices, and collaborating with our partner ecosystem to take advantage of the enhanced processing capabilities needed to successfully manage computational demand.

Edge evolution through open collaboration

One of our key AI platform partners, NVIDIA, has been developing innovative technologies and capabilities to be deployed at the edge. The NVIDIA EGX Edge AI platform has evolved to include systems varying from traditional servers, built by major OEMs, to supercompact microservers and edge IoT boxes. All of these solutions include powerful GPUs intended to accelerate AI/ML and data processing, and are supported by various application frameworks, like NVIDIA Metropolis and NVIDIA Clara, creating an AI-enabled horizontal platform.

The recently announced NVIDIA A100 Tensor Core GPU, based on the NVIDIA Ampere architecture, aims to bring significant performance improvements and the ability to partition a physical GPU device into several discrete GPU instances. Multi-Instance GPU (MIG) technology can serve up to seven AI/ML or compute-intensive workloads simultaneously on a single NVIDIA A100 GPU device, running anywhere from the cloud to the edge. We believe that for MIG to be truly transformational, support from the underlying hypervisor, operating system or container platform is required, which is where our collaboration with NVIDIA shines.
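To make the partitioning model concrete, here is a minimal sketch of how an A100 might be split into seven instances using NVIDIA's nvidia-smi tooling. This is an illustration based on publicly documented MIG commands rather than a step from this announcement; the GPU index and the 1g.5gb profile are assumptions that vary by system, GPU model and driver version.

```python
# Minimal sketch (assumed setup): partition a single NVIDIA A100 into seven
# MIG instances by driving nvidia-smi from Python. GPU index 0 and the
# "1g.5gb" profile are illustrative; valid profiles depend on the GPU model
# and driver version.
import subprocess

def run(cmd):
    """Echo and run a command, raising an error if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable MIG mode on GPU 0 (a GPU reset or reboot may be needed afterwards).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# Create seven 1g.5gb GPU instances plus their default compute instances (-C).
run(["nvidia-smi", "mig", "-i", "0",
     "-cgi", ",".join(["1g.5gb"] * 7), "-C"])

# List the resulting GPU instances to confirm the partitioning.
run(["nvidia-smi", "mig", "-lgi"])
```

Once partitioned this way, each MIG instance appears to the operating system and container platform as its own schedulable accelerator, which is why awareness at the hypervisor, operating system and Kubernetes layers matters.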

From simplified GPU driver packaging in Red Hat Enterprise Linux to testing MIG with vComputeServer on Red Hat Virtualization to certified GPU Operator support for Red Hat OpenShift, our engineering teams are collaborating to make sure that both software and hardware run optimally across all deployment footprints. By simplifying deployment and management of the infrastructure, we are helping our joint customers focus on addressing the demanding challenges presented by AI, edge and 5G workloads.
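From an application's point of view, the result of that work is that asking for a GPU on OpenShift looks like asking for any other Kubernetes resource once the GPU Operator is installed. The sketch below, written against the upstream Kubernetes Python client, shows the idea; the namespace, pod name and container image are assumptions used only for illustration.

```python
# Minimal sketch (assumed names): schedule a container onto a GPU node of an
# OpenShift/Kubernetes cluster where the NVIDIA GPU Operator exposes GPUs as
# the extended resource "nvidia.com/gpu". Namespace, pod name and image are
# illustrative.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test", namespace="demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda-check",
                image="nvidia/cuda:11.0-base",  # illustrative image tag
                command=["nvidia-smi"],
                # With MIG enabled, a workload could instead request a slice
                # such as "nvidia.com/mig-1g.5gb", depending on how the
                # operator is configured.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)
```

The same request pattern applies whether the node is a datacenter server or an EGX-class edge system, which is what keeps the deployment experience consistent across footprints.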

Enhancing networking at the edge with NVIDIA Mellanox technologies

With its recent acquisition of Mellanox, NVIDIA now brings significant expertise in networking and interconnect technologies, expanding the range of joint projects. One of the first examples is the introduction of the NVIDIA EGX A100 converged accelerator. This product enables edge servers to process massive amounts of data, such as encrypted AI models or real-time streaming data from edge devices, with enhanced protection, while providing direct network access to GPUs. Servers powered by the NVIDIA EGX A100 open up a new range of supported edge choices for our customers, and we are already working with NVIDIA on software support for this new category of accelerator devices.

That enablement effort is focused on creating a Special Resource Operator for the onboard NVIDIA Mellanox ConnectX-6 Dx SmartNIC and is aided by the availability of precompiled GPU driver packages and the Red Hat OpenShift Certified Operator for NVIDIA GPUs. The net result of this work can help customers simplify deployment and management of the new NVIDIA EGX A100 hardware in their infrastructure and accelerate adoption of new workloads, like the low-latency signal processing required for 5G networks or decrypting high-resolution video streams at line rate for AI processing at the edge.
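As a rough sketch of where this is heading, a converged workload on an EGX A100-class server would ask for both a GPU (or a MIG slice) and a SmartNIC-backed network device in the same pod specification. The resource name for the ConnectX-6 Dx virtual function below is purely hypothetical, since SR-IOV resource names are defined by each cluster's network configuration, and the container image is likewise illustrative.

```python
# Minimal sketch (hypothetical resource names): a container that requests both
# a GPU and an SR-IOV virtual function backed by a ConnectX-class SmartNIC.
# "openshift.io/mlx_sriov_netdevice" is a placeholder; real names come from
# the cluster's SR-IOV / network operator configuration.
from kubernetes import client

edge_container = client.V1Container(
    name="edge-inference",
    image="registry.example.com/edge/inference:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(
        limits={
            "nvidia.com/gpu": "1",                    # GPU, or a MIG slice
            "openshift.io/mlx_sriov_netdevice": "1",  # hypothetical VF resource
        }
    ),
)
# edge_container would then be embedded in a pod spec exactly as in the
# previous sketch.
```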

Figure: Hybrid cloud deployment example of NVIDIA Metropolis processing multiple video streams using the EGX platform at the edge, with OpenShift running in the public cloud.

Red Hat and NVIDIA recognize the need to deliver standardized, accessible infrastructure based on robust and scalable software stacks, and to make it easier for a wide range of developers to create AI-enabled applications. We are working together to enable the next wave of edge hardware innovations like NVIDIA EGX A100 and MIG across all of our platforms, from the operating system to virtualization to OpenShift.

At the same time, we are striving to enhance the user experience around datacenter technologies by enabling Red Hat software platforms on NVIDIA DGX systems and collaborating with our OEM partners, like Hewlett Packard Enterprise (HPE), to streamline the adoption of AI-enabled infrastructure. This can help our mutual customers more quickly design and deploy AI-based applications in production.

Learn more about our joint work for the hybrid cloud on "Accelerate AI with NVIDIA and Red Hat."


About the author

Yan Fisher is a global evangelist at Red Hat, where he extends his expertise in enterprise computing to emerging areas that Red Hat is exploring.

Fisher has a deep background in systems design and architecture. He has spent the past 20 years of his career in the computer and telecommunication industries, where he has tackled areas as diverse as sales and operations, systems performance, and benchmarking.

With an eye for innovative approaches, Fisher closely tracks partners' emerging technology strategies as well as customer perspectives on several nascent topics, such as performance-sensitive workloads and accelerators, hardware innovation and alternative architectures, and exascale and edge computing.

