Virtual event

Red Hat at NVIDIA GTC

Watch recording from April 12, 2021

EVENT OVERVIEW

Join Red Hat’s online presentations at NVIDIA GTC to experience the open source solutions that fuel many emerging workloads, including AI/ML/DL. Red Hat’s experts will showcase how scalable software infrastructure from Red Hat can be deployed in a range of scenarios, from virtualized environments in corporate datacenters to massive-scale services on public clouds and all the way to the edge.

RED HAT AND NVIDIA

Red Hat and NVIDIA are seeking new ways to enable the latest technological innovations for our ecosystem. We are collaborating on creating scalable solutions that accelerate a diverse range of workloads, from deep learning training to data analytics. 

We share a vision of IT’s future as one fueled by open source technologies and are working together to enable our customers to run their businesses on any footprint, from bare-metal and virtualized deployments in corporate datacenters, to massive-scale services deployed on public clouds, to distributed applications at the edge, all using familiar infrastructure.

SPEAKING SESSIONS

Connect with Red Hat by joining our speaking sessions at NVIDIA GTC

(SS33031) Composable compute infrastructure for modern hybrid cloud with Kubernetes

Speakers:
Derek Carr, Architect, Red Hat OpenShift, Red Hat
Jon Masters, Computer Architect and Distinguished Engineer, Red Hat

When: Tuesday, April 13
11:00 a.m. PDT / 2:00 p.m. EDT

Abstract: The complexity of modern software architectures is creating demands on system scalability. Expanding the hybrid cloud to the edge of the network while supporting data and AI workloads requires new approaches to hardware. System architectures are transitioning away from the general-purpose computing model: adding more CPUs is not sufficient, and offloading with traditional hardware acceleration approaches cannot meet the demand.

To meet these scalability demands, a composable compute model is needed, in which a system is aggregated from a general-purpose CPU together with specialized hardware subsystems, each with its own CPUs, that provide software-defined device functionality, security isolation, and offloading and acceleration.

Join this session to see how Red Hat is leading system architecture innovation in open source and how Red Hat® OpenShift® and Red Hat Enterprise Linux, with a broad set of partners and communities, are enabling these exciting new capabilities. We will discuss our vision, use cases, architectural approach, and future direction, with concrete examples.

(S32387) Accelerating Kubernetes-powered hybrid cloud for ultimate flexibility and efficiency

Speakers:
Chris Wright, Senior Vice President and CTO, Red Hat
Kevin Deierling, Senior Vice President, NVIDIA Networking

When: Tuesday, April 13
9:00 a.m. PDT / 12:00 p.m. EDT

Abstract: The demand for computational power continues to grow year over year as industry supports an ever-increasing number of applications. Traditional computer architectures can no longer keep pace with this ever-growing demand. The future computing infrastructure needs to be dynamic, accelerated, and secure, just as today’s containerized software is composable and on-demand.

Data processing units (DPUs) deliver powerful computational and acceleration capabilities for hybrid clouds. They run critical networking, management, and security functions at full performance while freeing up server CPU cores to run traditional business applications and cloud-native workloads.

In this session, we will share Red Hat and NVIDIA’s joint vision of a Kubernetes-powered hybrid cloud based on open source software and discuss how software-defined, hardware-accelerated infrastructure managed by Red Hat OpenShift delivers the next generation of data center services, including OVN-based SDN, full security isolation, encryption, firewall, DPI, storage, and much more.

(SS33023) How Red Hat’s hybrid cloud platform and NVIDIA GPUs help accelerate businesses from autonomous driving to digital banking

Speaker:
Abhinav Joshi, Senior Manager, Product Marketing, Cloud Platforms Business Unit, Red Hat

Video on-demand

Abstract: Business leaders want data-driven insights to help improve the customer experience. Data engineers, data scientists, and software developers want a self-service, cloud-like experience for accessing tools/frameworks, data, and compute resources anywhere, so they can rapidly build, scale, and share the results of their projects and accelerate the delivery of AI-powered intelligent applications into production.

This session will provide an overview of how Red Hat® OpenShift®, an open hybrid cloud platform powered by Kubernetes, and NVIDIA GPUs helped BMW Group with its autonomous driving initiatives and helped Royal Bank of Canada (RBC), a top global financial services company, with its machine learning research and development.

(SS33020) Real examples of workloads benefiting from DPU/SmartNIC technology running Red Hat Enterprise Linux

Speakers:
Anita Tragler, Technical Product Manager for Networking and NFV, Red Hat
Andre Beausoleil, Senior Principal Partner Manager, Red Hat 

Video on-demand

Abstract: Learn about today's use cases and solutions using DPU technology from NVIDIA and the capabilities of Red Hat® Enterprise Linux®. Network interface controller (NIC) advancements allow hardware to accelerate network traffic by offering packet processing capabilities and crypto offload, provide NVMe control of high-speed storage blocks over a storage fabric with the ease of bare-metal provisioning, and implement secure endpoints that follow specified network security protocols.

Red Hat Enterprise Linux allows operating environment standardization across open hybrid cloud technology, bringing the capabilities of the cloud on premises. Join us to hear about use cases for DPUs and how you can deploy Red Hat Enterprise Linux as a building block toward a future of composable compute infrastructure.

(SS33018) GPU-Accelerated Federated Learning in energy with Red Hat OpenShift and Red Hat Data Services

Speakers:
John Archer, Chief Architect of Energy, Red Hat
Brian Barran, NVIDIA

Video on-demand 

Abstract: Seismic data processing is central to how oil and gas organizations determine reservoir structures and design wellbore paths to extract hydrocarbons. The influx of data science projects within energy firms has created demand for compute, storage, network, and memory resources with differentiated capabilities.

The ability to run traditional HPC applications and jobs alongside GPU-accelerated data science workloads under the same control plane provides a modernized approach to scaling data science and gets more value out of an energy organization’s HPC assets.

In this session, we’ll discuss how to use Red Hat® OpenShift® Container Platform and Red Hat storage capabilities to build Federated Learning pipelines to process seismic and other critical data science workloads in the field or in-country without having to move the data. We will also review how the seismic and wellbore data could be served from a global distributed hybrid OSDU tenant.
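
The core idea behind federated learning, training where the data lives and aggregating only model updates, can be illustrated with a short sketch. The following is a minimal federated-averaging example in Python using NumPy; the synthetic per-site datasets and the simple linear model are assumptions for illustration only, not the pipeline discussed in this session.

```python
# Minimal federated-averaging sketch (illustrative only): each site trains locally on
# its own data, and only model weights -- never the raw data -- leave the site.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, features, labels, lr=0.01, epochs=5):
    """One site's local update: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        preds = features @ w
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

# Simulated per-site datasets standing in for seismic features that stay in-country.
sites = [
    (rng.normal(size=(200, 8)), rng.normal(size=200)),
    (rng.normal(size=(150, 8)), rng.normal(size=150)),
    (rng.normal(size=(300, 8)), rng.normal(size=300)),
]

global_weights = np.zeros(8)
for round_num in range(10):
    # Each site trains on its local data; only the updated weights are sent back.
    local_updates = [local_train(global_weights, X, y) for X, y in sites]
    # Federated averaging: weight each site's update by its sample count.
    counts = np.array([len(y) for _, y in sites], dtype=float)
    global_weights = np.average(local_updates, axis=0, weights=counts)

print("Aggregated model weights:", global_weights)
```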

(S31309) Speed Red Hat OpenShift Container Platform with high-performance and efficient networking

Speakers: 
Marc Curry, Senior Principal Product Manager OpenShift, Red Hat 
Erez Cohen, Cloud Programs, NVIDIA

Video on-demand

Abstract: Looking to turbocharge the performance of your OpenShift Container Platform networking with a SmartNIC?  In this session, you will learn how an integration with NVIDIA Mellanox Networking turbocharges OpenShift with hardware-accelerated, software-defined cloud-native networking.  Cloud-native applications based on Kubernetes, containers and microservices are rapidly growing in popularity. These modern workloads are distributed, data intensive, and latency sensitive by design. Therein lies the need for fast and super-efficient networking to achieve a predictable and consistent user experience and performance while using cloud-native applications.  NVIDIA and Red Hat work together to boost the performance and efficiency of modern cloud infrastructure, delivering a premium customer experience to enterprises and cloud operators alike.

(S31380) Implementing virtual network offloading using open source tools on BlueField-2

Speakers:
Rashid Khan, Director Software Engineering, Red Hat
Rony Efraim, NVIDIA

Video on-demand

Abstract: The modern traffic of the hybrid cloud era relies heavily on flexible, efficient, reliable, and customizable software-defined networks. These are built using a standard set of routines, such as packet encryption, encapsulation, control, switching, and routing, implemented in software that runs on commodity servers and consumes precious CPU cycles. Encapsulation and switching alone can take up to 40 CPU cores to terminate 100Gb/s of traffic, driving up power consumption in servers and affecting the energy efficiency of the entire data center.
BlueField™ provides an excellent solution to this complicated problem. 

NVIDIA and Red Hat have been working together to provide an elegant and 100% open source solution using BlueField™ SmartNIC Ethernet network adapter cards for hardware offloading of the software-defined networking tasks. With BlueField we can encrypt, encapsulate, switch, and route packets right on the NIC, effectively dedicating the server's processing capacity to running business applications.

During this talk, we will discuss typical use cases and demonstrate the performance advantages of using BlueField’s hardware offload capabilities with Red Hat Enterprise Linux and Red Hat OpenShift Container Platform.
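
For readers who want a feel for what this looks like in practice, here is a minimal sketch that checks whether Open vSwitch hardware offload is active on a host. It assumes a RHEL system with OVS already configured for a BlueField NIC; the commands are standard OVS tooling, not tooling specific to this session.

```python
# Minimal sketch: inspect OVS hardware-offload state on a host (assumes Open vSwitch
# is installed and already configured for a BlueField NIC; environment-specific).
import subprocess

def run(cmd):
    """Run a command and return its stdout, or None if the tool is unavailable."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None

# 1. Check whether OVS has hardware offload enabled.
hw_offload = run(["ovs-vsctl", "get", "Open_vSwitch", ".", "other_config:hw-offload"])
print("hw-offload setting:", hw_offload)

# 2. List datapath flows that OVS reports as offloaded to the NIC.
offloaded = run(["ovs-appctl", "dpctl/dump-flows", "type=offloaded"])
if offloaded:
    print(f"{len(offloaded.splitlines())} flows currently offloaded to hardware")
else:
    print("No offloaded flows reported (or OVS tools not available)")
```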

(S32420) Accelerating enterprise AI with a hybrid architecture and end-to-end automation

Speaker:
Sriram Raghavan, Vice President, IBM Research AI 

Video on-demand

Abstract: As enterprises look to accelerate AI adoption as a central element of their digital transformation, there’s a strong desire for accelerated delivery of AI-powered intelligent applications to production. To address that, a comprehensive approach built on a robust hybrid cloud architecture is needed.

In this session, using real-life success stories from industries such as financial services, travel, and manufacturing, we’ll highlight three fundamental elements critical to accelerating AI in the enterprise:
(i) an open hybrid cloud architecture 
(ii) end-to-end automation across the AI lifecycle
(iii) a robust approach to AI governance

We will also highlight technological innovations NVIDIA, Red Hat and IBM have implemented across the stack to enable AI/ML and other data analytics applications. Finally, we will take a look at the trend of emerging data intensive workloads becoming even more heterogeneous by combining big data processing with machine learning, modeling, and simulation.

(S31353) Virtual GPU compute evolved: Ampere GPU supported in vGPU

Speakers:
Erwan Gallen, Product Manager RHEL, Red Hat
Martin Tessun, Principal Product Manager, Red Hat 
Michael Shen, Principal Product Manager, NVIDIA

Video on-demand

Abstract: Modern data centers require fine-grained, informed resource management, and the NVIDIA A100 GPU introduces Multi-Instance GPU (MIG) in response. You will learn how the A100 GPU works in Red Hat products with newly introduced NVIDIA vGPU features, such as SR-IOV, GPUDirect, and MIG, which greatly simplify GPU deployment in efficient and intelligent virtualized data centers. You will also learn about the next steps for NVIDIA vGPU software in this session.
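
As a small illustration of what MIG looks like from the software side, the sketch below uses the NVIDIA Management Library’s Python bindings (pynvml) to report whether MIG mode is enabled on each GPU. It is an orientation example only, not material from the session.

```python
# Minimal sketch with pynvml: report per-GPU MIG mode (requires an NVIDIA driver).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig_status = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig_status = "not supported on this GPU"
        print(f"GPU {i}: {name}, MIG {mig_status}")
finally:
    pynvml.nvmlShutdown()
```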

(S31790) The Enterprise Neurosystem: The Unified Field Of AI For Multinational Corporations

Speaker:
Bill Wright, Head of AI/ML and Intelligent Edge, Global Industries and Accounts, Red Hat

Video on-demand

Abstract: AI models are being deployed in dizzying fashion across the Fortune 500, both developed in-house and supplied by vendors. It's clear that many enterprise clients are deploying these models for specific functions, but without underlying connections. Large-scale cross-correlation of their findings, particularly in real-time scenarios, is completely missing. We've decided to start a community to build that large-scale connective model, which includes America Movil, Verizon Media, Equinix, Seagate, Ericsson AI, and a host of other firms. It will be a single AI instance spanning the enterprise, like a neurosystem — connecting the related models and datasets, drawing together their findings, and conducting analytics and pattern analysis across all company operations. It'll provide the C-suite with a window into every business function in real time; autonomously identify, predict, and adjust for related challenges; and in turn deliver an array of optimal solutions to management.

(S31604) Fighting Fraud with One App in Many Ways: GPU-Accelerated End-to-End MLOps on Kubernetes

Speakers: 
Sophie Watson, Principal Data Scientist, Red Hat
Will Benton, NVIDIA

Video on-demand 

Abstract: Progress in solving business problems with machine learning is rarely linear: practitioners often have to revisit early decisions in light of later information. Furthermore, using machine learning responsibly and effectively requires truly understanding the interactions between many moving parts. MLOps refers to the practices and tools that make machine learning responsible, repeatable, and robust while increasing practitioner velocity.

Learn how MLOps discipline can make it easier to experiment with new models and techniques, produce reproducible artifacts, and monitor them in production. We’ll build a real payments fraud detection application on Kubernetes using RAPIDS, Spark, and stream processing, and teach you how to understand the computational and predictive performance of a real accelerated machine learning system on any cloud.
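
To give a flavor of the GPU-accelerated tooling involved, here is a minimal fraud-scoring sketch using RAPIDS (cuDF and cuML). The synthetic dataset and column names are assumptions for illustration; the speakers’ actual pipeline also brings in Spark and stream processing.

```python
# Minimal GPU-accelerated fraud-scoring sketch with RAPIDS (cuDF + cuML); the data
# and labeling rule below are synthetic stand-ins, not the session's real pipeline.
import cupy as cp
import cudf
from cuml.ensemble import RandomForestClassifier
from cuml.model_selection import train_test_split
from cuml.metrics import accuracy_score

cp.random.seed(42)
n = 50_000

# Synthetic stand-in for a payments dataset (amount, merchant risk score, hour of day).
transactions = cudf.DataFrame({
    "amount": cp.random.exponential(scale=80.0, size=n).astype("float32"),
    "merchant_risk": cp.random.uniform(0.0, 1.0, size=n).astype("float32"),
    "hour": cp.random.randint(0, 24, size=n).astype("float32"),
})
# Label a small slice of high-amount, high-risk transactions as "fraud".
labels = ((transactions["amount"] > 400) & (transactions["merchant_risk"] > 0.8)).astype("int32")

X_train, X_test, y_train, y_test = train_test_split(transactions, labels, test_size=0.2)

# Train and score entirely on the GPU.
model = RandomForestClassifier(n_estimators=100, max_depth=8)
model.fit(X_train, y_train)
print("Holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```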

(S31912) How NTT and Red Hat have built an edge offering to deliver new cloud-native AI platform services

Speakers: 
Hidetsugu Sugiyama, Chief Architect, Red Hat
Richard Gee, Senior Business Development Director, Red Hat
Joao Kluck Gomes, Telco Edge Global Leader, NVIDIA
Yuuki Hashimoto, New Business Development for IVA MEC Platform, NTT EAST

Video on-demand

Abstract: Drawing on AI edge use cases shared by NTT East, learn about new digital service edge platform technology and how modern cloud-native AI applications can be enabled by users and developers on a multi-access edge computing (MEC) platform. The MEC platform is based on OpenShift’s cloud-native architecture, leveraging GPUs and capabilities like the NVIDIA GPU Operator to deliver new AI application services across multicloud edge environments, including the telco edge, private/local 5G, and the customer’s enterprise edge.
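
As a small, hedged illustration of how the NVIDIA GPU Operator surfaces GPUs to such a platform, the sketch below lists cluster nodes that advertise nvidia.com/gpu as a schedulable resource. It assumes access to an OpenShift or Kubernetes cluster with the operator installed and is not part of NTT East’s actual platform.

```python
# Minimal sketch: list nodes whose allocatable resources include nvidia.com/gpu
# (assumes the NVIDIA GPU Operator, or an equivalent device plugin, is installed).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    allocatable_gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
    if allocatable_gpus != "0":
        print(f"{node.metadata.name}: {allocatable_gpus} GPU(s) allocatable")
```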

(S31982) Delivering high-performance graphics using containers

Speaker:
Dave Montana, Strategic Accounts Director, Red Hat

Video on-demand

Abstract: Discover how a world-leading scientific processing company used NVIDIA vGPU and Red Hat containers to provide a fully featured 3D design, rendering, and HPC environment on demand, migrating to and from the cloud and giving them full flexibility for workflows that were traditionally tied to local workstations.

In 20 minutes, we will follow the journey from inception to today and learn about the trials, successes, and outcomes of their move from traditional to transformational.

RAFFLE

Feeling lucky? Enter our raffle April 12-16 for a chance to win a variety of prizes!

We will be giving out 3 prizes each day of NVIDIA GTC. If you are selected as one of our winners, we will reach out to you via email at the end of each day.

1st place raffle: Oculus Quest 2
2nd place raffle: Apple TV 4K
3rd place raffle: NVIDIA Jetson Nano Developer Kit

*You may enter only once. Entries open April 12 at 12:01 a.m. ET and close April 16 at 8:00 p.m. ET. Any entries outside of that time frame will not be considered.

RED HAT JOBS

Opportunities are open. Visit www.redhat.com/jobs to learn more.