Virtual event

Red Hat at NVIDIA GTC 2020

Watch the recording from October 5, 2020

EVENT OVERVIEW

Red Hat’s participation at NVIDIA GTC 2020

Join Red Hat’s online presentations at NVIDIA GTC 2020 to experience the open source solutions that fuel many emerging workloads, including AI/ML/DL. Red Hat’s experts will showcase how scalable software infrastructure from Red Hat can be deployed in a range of scenarios, from virtualized environments in corporate datacenters to massive-scale services on public clouds, all the way to the edge.

Chat with Red Hatters

NVIDIA GTC attendees: The networking lounge will be staffed with Red Hatters throughout the GTC event to answer your questions. Step inside the Red Hat pop-up experience and join the conversation!

RED HAT AND NVIDIA

Red Hat and NVIDIA are seeking new ways to enable the latest technological innovations for our ecosystem. We are collaborating on creating scalable solutions that accelerate a diverse range of workloads, from deep learning training to data analytics. 

We share a vision of IT’s future as one fueled by open source technologies and are working together to enable our customers to run their businesses on any footprint, from bare-metal and virtualized deployments in corporate datacenters to massive-scale services deployed on public clouds to distributed applications at the edge, using familiar infrastructure.

SPEAKING SESSIONS

Connect with Red Hat by joining our speaking sessions at NVIDIA GTC

Locationless data science with a modern, more secure edge (A22336)

John Archer, Chief Solution Architect, Red Hat

Many energy, manufacturing, and industrial organizations have been building data science models on local machines, in the datacenter, or in the public cloud for predictive maintenance, remote monitoring, reliability improvement, optimization scenarios, and risk reduction through Health, Safety, and Environment (HSE) improvements. Delivering these use cases requires backhauling the collected telemetry, photo, video, and/or acoustic data, not only to train the initial model but also to retrain it and keep it healthy and viable.


In this session, we’ll share strategies and product features of Red Hat Enterprise Linux and Red Hat OpenShift Container Platform that support end-to-end data science workloads at the edge, and we’ll consider risk management from the vantage point of an energy organization using NVIDIA’s and Red Hat’s joint capabilities in any network topology.

Using open source and accelerated AI to successfully modernize the public sector scene (A22304)

Chris Sexsmith, Data Science & Edge Practice Lead, Red Hat

In today’s data-driven world, AI and machine learning (ML) are becoming increasingly important as the volume, variety, and velocity of data exceed the cognitive capabilities of human operators. In this presentation, we will discuss how to successfully leverage open source software and enable AI/ML applications for public sector and government agencies. You’ll learn how modern computing capabilities enable high-performance solutions to be deployed anywhere from the core datacenter to the edge, giving AI/ML and other data analytics applications the resources they need to keep pace with ever-growing demands. We will also share success stories and demonstrate how critical it is for AI/ML applications to be built on a consistent, agile, and open architecture that is flexible enough to support modern workloads.

Machine learning infrastructure visibility dashboard (A22302)

Zak Berrie, Machine Learning Solution Specialist, Red Hat and Yochay Ettun, CEO, cnvrg.io

Machine learning infrastructure, composed of accelerated compute (NVIDIA GPUs), CPU-based servers, storage, and networking, is some of the most expensive and demanding equipment in the IT landscape. Data scientists, responsible for taking ML models from research to production, share this infrastructure, and they often need to collaborate, extract efficiency, and drive high utilization without losing productivity and valuable time. In this talk, cnvrg.io and Red Hat will present an NVIDIA partner solution: a data science platform that runs on Red Hat OpenShift Dedicated, the Red Hat-managed, Kubernetes-based service. cnvrg.io is a code-first platform that provides all the tools data scientists need to take models from research to production. In addition, cnvrg.io has recently released an ML infrastructure dashboard.

All-flash scale-out data platforms for AI/ML workloads (A22305)

Michael St-Jean, Technical Marketing Manager, Red Hat

As organizations accelerate their adoption of AI, infrastructure demands continue to dominate in areas of data preparation and management, model training, and inference. According to 451 Research, top infrastructure influencers for AI production improvement include networking, compute accelerators in the cloud, memory capacity, faster servers, and more scalable, higher performance storage. In collaboration with Micron, AMD, and Supermicro, Red Hat addresses these requirements to deliver data performance at scale for AI/ML workloads with Red Hat Ceph® Storage and Red Hat OpenShift Container Storage. 

Aligned with a Red Hat reference architecture with Supermicro for AI/ML acceleration using NVIDIA GPUs, this session will highlight a complete solution for AI/ML workloads running on Red Hat OpenShift. We’ll discuss the work of designing, deploying, tuning, and performance testing all-flash reference platforms for OpenShift Container Storage using Red Hat Ceph Storage, NVMe, and Rook.io. Data architects will gain a better understanding of price-performance options and how to tune for optimal performance.
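For readers who want to experiment with the storage layer described above, here is a minimal, hypothetical sketch of requesting a block volume from OpenShift Container Storage with the Kubernetes Python client. The "ocs-storagecluster-ceph-rbd" storage class name, the namespace, and the size are assumptions for illustration, not details from the session.

from kubernetes import client, config

# Minimal sketch (assumptions noted above): request a Ceph RBD-backed volume
# from OpenShift Container Storage for an AI/ML workload.
config.load_kube_config()
core_v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="training-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ocs-storagecluster-ceph-rbd",  # assumed OCS RBD storage class
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)
core_v1.create_namespaced_persistent_volume_claim(namespace="ml-workloads", body=pvc)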

Deploying Video Analytics in the Cloud - NVIDIA Metropolis on Red Hat OpenShift (A21710)

John Senegal, Global Partners Principal Solution Architect, Red Hat and Sujit Biswas, Principal Engineer and Data Scientist, NVIDIA

Learn about the collaboration between NVIDIA and Red Hat and the integration of the NVIDIA Metropolis platform running on NVIDIA EGX and Red Hat OpenShift (Kubernetes) in the public cloud.

Leverage Data Processing at the Edge for Mission-critical Success (A22443)

Chris Sexsmith, Data Science and Edge Practice Lead, Red Hat and Jeffrey Winterich, DoD Account Chief Technologist, Hewlett Packard Enterprise 

Join AI/ML experts from Red Hat & HPE as they discuss how their new collaborative solution with NVIDIA enables AI/ML as an effective real-time mission partner and makes AI-driven edge processing a reality.

Core-to-edge AI for Vertical Industries with OpenShift, NVIDIA NGC, and IBM (A22359)

Jered Floyd, Technology Strategist, Red Hat Inc. and Akhil Docca, Senior Product Marketing Manager for NGC, NVIDIA and Jerry Liu, IBM

Creating an end-to-end solution that allows businesses to span their operations from the data center all the way to the edge requires an integrated, full-stack implementation that can deliver secure, latency-aware applications. Learn how the combination of GPU-optimized software available from the NVIDIA NGC catalog, Red Hat’s software platforms with enterprise-grade Kubernetes support, and IBM’s vertical industry expertise help bring AI-enabled applications to thousands of autonomous, smart edge servers capable of managing myriad devices.
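As a rough illustration of the pattern this session describes, the sketch below schedules a GPU-optimized container image from the NGC catalog onto an OpenShift cluster using the Kubernetes Python client. The image tag, namespace, and pod name are placeholders, not part of the session.

from kubernetes import client, config

# Minimal sketch: run an NGC container image on one GPU. Image tag, namespace,
# and pod name are placeholders for illustration.
config.load_kube_config()
core_v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ngc-tensorflow-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="tensorflow",
                image="nvcr.io/nvidia/tensorflow:20.09-tf2-py3",  # placeholder NGC image
                command=["python", "-c",
                         "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"],
                resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
            )
        ],
    ),
)
core_v1.create_namespaced_pod(namespace="ai-demo", body=pod)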

OPENSHIFT COMMONS GATHERING

Join us on October 5-9

Where users, partners, customers, contributors, and upstream project leads come together to collaborate across the OpenShift Cloud Native ecosystem.

This OpenShift Commons Gathering on AI and Machine Learning is co-located with NVIDIA's GTC virtual event and available on demand from October 5-9!

OPENSHIFT COMMONS GATHERING SESSIONS

The Enterprise Neurosystem Initiative: The Connective Intelligence of Enterprise AI

A22354

Bill Wright, Head of AI/ML and Intelligent Edge, Global Industries And Accounts, Red Hat

DevOps vs. MLOps vs. AIOps

A22354

Zak Berrie, Machine Learning Solution Specialist, Red Hat

Can the introduction of data science, artificial intelligence, and machine learning techniques into the discipline of information technology be as big as earlier transitions like the move from big iron to commodity hardware? Or the implementation of virtualization? Or the move towards DevOps and Agile? In this discussion, Zak Berrie will posit that we are on the cusp of a major change. Everyone in IT, from leadership to developers to operators, should be thinking about how these new tools and techniques will change the way we do our job. What solutions to challenges that we have previously thought unsolvable might now be within our reach?

GPU-Accelerated Machine Learning with OpenShift

A22353

Michael Bennett, AI Technologist, Dell Technologies

Diane Feddema, Principal Software Engineer, Red Hat

Using MPI operator to run GPU-accelerated scientific workloads on Red Hat OpenShift with Lustre FS

A22353

David Gray, Performance Engineer, Red Hat

High-performance computing (HPC) workloads increasingly rely on containers, which make applications easier to manage, preserve their dependencies, and add portability across different environments. Red Hat OpenShift Container Platform is an enterprise-ready Kubernetes-based platform for deploying containerized applications on shared compute resources. An Operator is a method of packaging, deploying, and managing a Kubernetes-native application that can make it easier to run complex workloads. In this talk, we will demonstrate how a GPU-accelerated scientific application can be deployed on OpenShift using the Message Passing Interface (MPI) and backed by the Lustre file system for data storage.
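To make the idea concrete, here is a minimal, hypothetical sketch of submitting an MPIJob custom resource with the Kubernetes Python client, assuming the Kubeflow MPI Operator is installed on the cluster. The API version, container image, PVC name, and namespace are illustrative assumptions rather than the configuration used in the talk.

from kubernetes import client, config

# Minimal sketch (assumptions noted above): an MPIJob with two GPU workers that
# mount a Lustre-backed persistent volume claim for shared data.
config.load_kube_config()
custom_api = client.CustomObjectsApi()

worker_container = {
    "name": "worker",
    "image": "example.com/mpi-app:latest",            # hypothetical application image
    "resources": {"limits": {"nvidia.com/gpu": 1}},    # one GPU per worker
    "volumeMounts": [{"name": "scratch", "mountPath": "/lustre"}],
}

mpijob = {
    "apiVersion": "kubeflow.org/v1",  # assumed MPI Operator API version
    "kind": "MPIJob",
    "metadata": {"name": "gpu-mpi-demo"},
    "spec": {
        "slotsPerWorker": 1,
        "mpiReplicaSpecs": {
            "Launcher": {
                "replicas": 1,
                "template": {"spec": {"containers": [{
                    "name": "launcher",
                    "image": "example.com/mpi-app:latest",
                    "command": ["mpirun", "-np", "2", "/opt/app/run"],
                }]}},
            },
            "Worker": {
                "replicas": 2,
                "template": {"spec": {
                    "containers": [worker_container],
                    "volumes": [{"name": "scratch",
                                 "persistentVolumeClaim": {"claimName": "lustre-scratch"}}],
                }},
            },
        },
    },
}

custom_api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="hpc-demo", plural="mpijobs", body=mpijob
)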

Using GPUs for Data Science & Optimization Containers in OpenShift

A22352

Cory Latschkowski, Cloud Architect - Data Science Enablement Team Lead, ExxonMobil

Today, GPUs are no longer used just to render 2D and 3D graphics. Data scientists are finding that in-depth analytics processing can be completed more quickly on GPUs than on CPUs, and making those GPUs available to teams at scale is the challenge we are addressing with Kubernetes. We have built an AI platform on Kubernetes (OpenShift) that provides GPUs and other resources to many of our scientists. In this talk, you can expect to learn about our GPU experience, challenges, use cases, and the road ahead.
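As a small illustration of the kind of platform plumbing involved, the sketch below lists allocatable GPUs per node with the Kubernetes Python client. It assumes the NVIDIA device plugin exposes GPUs under the standard nvidia.com/gpu resource name and is not taken from ExxonMobil's implementation.

from kubernetes import client, config

# Minimal sketch: report how many GPUs each node can schedule, so platform teams
# can see what is available to data science workloads.
config.load_kube_config()
core_v1 = client.CoreV1Api()

for node in core_v1.list_node().items:
    gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")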

Applying AIOps to Kubernetes Telemetry Data with Open Data Hub on OpenShift

A22352

Alex Corvin, Associate Software Engineering Manager, ODH

Ivan Necas, Principal Software Engineer, CCX

By applying artificial intelligence to the vast amounts of data that systems generate, we can revolutionize the software operations field, freeing up engineering resources to innovate. In this session we explore Open Data Hub, an open source project based on Kubeflow that provides tools for running large, distributed AI workloads on OpenShift Container Platform. We’ll share how the OpenShift operations team at Red Hat is applying AIOps via Open Data Hub to OpenShift telemetry data in order to automate traditional operations tasks.
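To give a flavor of what applying AI to telemetry can mean in its simplest form, here is a hypothetical sketch that pulls a metric series from a Prometheus endpoint and flags 3-sigma outliers. The URL and query are placeholders, and Open Data Hub's actual pipelines are considerably more sophisticated.

import statistics
import requests

# Minimal sketch (placeholders noted above): fetch a telemetry series and flag
# points more than three standard deviations from the mean.
PROM_URL = "http://prometheus.example.com/api/v1/query_range"  # placeholder endpoint
params = {
    "query": "sum(rate(apiserver_request_total[5m]))",
    "start": "2020-10-05T00:00:00Z",
    "end": "2020-10-05T06:00:00Z",
    "step": "60s",
}

resp = requests.get(PROM_URL, params=params, timeout=30)
resp.raise_for_status()
series = resp.json()["data"]["result"][0]["values"]  # [[timestamp, "value"], ...]
values = [float(v) for _, v in series]

mean, stdev = statistics.mean(values), statistics.stdev(values)
for (ts, _), v in zip(series, values):
    if stdev and abs(v - mean) > 3 * stdev:
        print(f"anomaly at {ts}: {v:.2f}")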

Accelerating AI on the Edge

A22351

Nick Barcet, Senior Director Technology Strategy, Red Hat

Kevin Jones, Principal Product Manager, NVIDIA EGX

This session will explain how GPU acceleration can be used in a manufacturing edge use case. From training to inferencing, machine learning may or may not require acceleration depending on the types and volumes of data being handled. The session will show some concrete examples and provide some rules to correctly pick and size GPUs in deployments.

Data driven insights with SQL Server Big Data Clusters and OpenShift

A22351

Buck Woody, Applied Data Scientist, Microsoft

This session will cover Big Data Clusters (BDC), a new set of capabilities introduced in SQL Server 2019 to help achieve data-driven business insights from high-value relational data and high-volume big data. With BDC, organizations can run containerized Apache Spark and the Hadoop Distributed File System (HDFS) natively as part of SQL Server 2019, in addition to running relational databases, Microsoft Machine Learning capabilities, and PolyBase data virtualization. BDC requires Linux containers and Kubernetes, and Red Hat OpenShift was recently added as a commercially supported Kubernetes platform for BDC.
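For orientation, here is a hypothetical sketch of querying a Big Data Cluster's SQL Server master instance from Python with pyodbc. The endpoint, credentials, and external table name are placeholders; the point is that PolyBase external tables over HDFS are queried with ordinary T-SQL.

import pyodbc

# Minimal sketch (placeholders noted above): connect to the BDC SQL endpoint and
# read from an external table that PolyBase maps onto HDFS data.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=bdc-master.example.com,31433;"   # placeholder endpoint and port
    "DATABASE=sales;UID=admin;PWD=example-password"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM dbo.web_clickstreams_hdfs")  # hypothetical external table
for row in cursor.fetchall():
    print(row)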

RED HAT POP-UP EXPERIENCE

A taste of enterprise open source at your fingertips. Step inside the Red Hat pop-up experience.

Learn: Watch demos; download e-books, guides, datasheets, and overviews to learn more about hybrid cloud, automation, cloud-native app development, our commitment to open source, and open technologies for every industry.

Play: Test your command line skills, protect the planet from the dangers of space, or help a pod escape from a disappearing digital landscape in our Open Source Arcade filled with games built with open source software. Learn more about the tools the developers used to create them. 

Network: Chat with each other and Red Hatters in our networking lounge.

Get swag: Request Red Hat swag to be sent directly to your door.

CONTACT US

Do you have questions about Red Hat’s featured solutions? Do you want to be connected to a Red Hat subject matter expert? Email us below.

GTC20@redhat.com