To experience Red Hat’s open source solutions that fuel many emerging workloads, including AI/ML/DL, join the online presentations at NVIDIA’s GTC 2020 event. Red Hat’s experts will showcase how scalable software infrastructure from Red Hat can be deployed in a range of scenarios, from virtualized environments in corporate datacenters to massive-scale services on public clouds and all the way to the edge.
Red Hat and NVIDIA are seeking new ways to enable the latest technological innovations for our ecosystem. We are collaborating on creating scalable solutions that accelerate a diverse range of workloads, from deep learning training to data analytics.
We share a vision of IT’s future as one fueled by open source technologies and are working together to enable our customers to run their businesses on any footprint, from bare-metal and virtualized deployments in corporate datacenters to massive-scale services deployed on public clouds to distributed applications at the edge, using familiar infrastructure.
John Archer, Chief Solution Architect, Red Hat
Many energy firms and manufacturing and industrial organizations have been building data science models on their local machines, in datacenters, or on public clouds for predictive maintenance, remote monitoring, reliability improvement, optimization scenarios, and risk reduction through health, safety, and environment (HSE)-style improvements. Delivering these use cases requires backhauling the collected telemetry, photo, video, and/or acoustic data, not only to train the initial model but also to retrain it and keep it healthy and viable.
In this session, we’ll share strategies and product features of Red Hat Enterprise Linux and Red Hat OpenShift Container Platform that support end-to-end data science workloads at the edge. We’ll also consider risk management from the vantage point of an energy organization that uses NVIDIA’s and Red Hat’s joint capabilities in any network topology.
Chris Sexsmith, Data Science & Edge Practice Lead, Red Hat
In today’s data-driven world, AI and machine learning (ML) are becoming increasingly important as the volume, variety, and velocity of data exceed the cognitive capabilities of human operators. In this presentation, we will discuss how to successfully leverage open source software and enable AI/ML applications for public sector and government agencies. You'll learn how modern computing capabilities can enable high-performance solutions to be deployed anywhere from the core datacenter to the edge, giving AI/ML and other data analytics applications the resources they need to keep pace with ever-growing demands. We will also share success stories and demonstrate how critical it is for AI/ML applications to be built on a consistent, agile, and open architecture that is flexible enough to support modern workloads.
Zak Berrie, Machine Learning Solution Specialist, Red Hat and Yochay Ettun, CEO, cnvrg.io
Machine learning infrastructure—composed of accelerated compute (NVIDIA GPUs), CPU-based servers, storage, and networking—is among the most expensive and demanding equipment in the IT landscape. Data scientists, responsible for taking ML models from research to production, share this infrastructure. They often seek to collaborate optimally and extract efficiency and high utilization without losing productivity or valuable time. In this talk, cnvrg.io and Red Hat will present an NVIDIA partner solution: a data science platform that operates with Red Hat OpenShift Dedicated, the Red Hat-managed Kubernetes-based service. cnvrg.io is a code-first platform, providing all the tools data scientists need to take models from research to production. In addition, cnvrg.io has recently released an ML infrastructure dashboard.
Michael St-Jean, Technical Marketing Manager, Red Hat
As organizations accelerate their adoption of AI, infrastructure demands continue to dominate in areas of data preparation and management, model training, and inference. According to 451 Research, top infrastructure influencers for AI production improvement include networking, compute accelerators in the cloud, memory capacity, faster servers, and more scalable, higher performance storage. In collaboration with Micron, AMD, and Supermicro, Red Hat addresses these requirements to deliver data performance at scale for AI/ML workloads with Red Hat Ceph® Storage and Red Hat OpenShift Container Storage.
Aligned with a Red Hat reference architecture with Supermicro for AI/ML Acceleration using NVIDIA GPUs, this session will highlight a complete solution for AI/ML workloads running on Red Hat OpenShift. We’ll discuss the work designing, deploying, tuning, and performance testing all-flash reference platforms for OpenShift Container Storage using Red Hat Ceph Storage, NVMe, and Rook.io. Data architects will achieve a better understanding of price-performance options and how to tune for optimal performance.
John Senegal, Global Partners Principal Solution Architect, Red Hat and Sujit Biswas, Principal Engineer and Data Scientist, NVIDIA
Learn about the collaboration between NVIDIA and Red Hat and the integration of the NVIDIA Metropolis platform running on NVIDIA EGX and Red Hat OpenShift (Kubernetes) in the public cloud.
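As a rough illustration of the kind of deployment the session covers (this sketch is not from the session materials), an application on OpenShift requests NVIDIA GPUs through the Kubernetes extended resource `nvidia.com/gpu`, which the NVIDIA device plugin exposes on GPU nodes. The pod name and image below are placeholders:

```yaml
# Hypothetical pod spec: schedules a container onto a GPU node in an
# OpenShift/Kubernetes cluster where the NVIDIA device plugin is installed.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo            # placeholder name
spec:
  containers:
  - name: gpu-demo
    image: nvcr.io/nvidia/cuda:11.0-base   # base CUDA image from NGC; a real app image would go here
    command: ["nvidia-smi"]                # just prints GPU info to verify scheduling
    resources:
      limits:
        nvidia.com/gpu: 1   # request one NVIDIA GPU via the device plugin
  restartPolicy: Never
```

Applying this with `oc create -f gpu-demo.yaml` (or `kubectl create -f`) would run the container on a GPU-capable node; the same spec works in a public cloud cluster or at the edge.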
Chris Sexsmith, Data Science and Edge Practice Lead, Red Hat and Jeffrey Winterich, DoD Account Chief Technologist, Hewlett Packard Enterprise
Join AI/ML experts from Red Hat & HPE as they discuss how their new collaborative solution with NVIDIA enables AI/ML as an effective real-time mission partner and makes AI-driven edge processing a reality.
Jered Floyd, Technology Strategist, Red Hat Inc. and Akhil Docca, Senior Product Marketing Manager for NGC, NVIDIA and Jerry Liu, IBM
Creating an end-to-end solution that allows businesses to span their operations from the data center all the way to the edge requires an integrated, full-stack implementation that can deliver secure, latency-aware applications. Learn how the combination of GPU-optimized software available from the NVIDIA NGC catalog, Red Hat’s software platforms with enterprise-grade Kubernetes support, and IBM’s vertical industry expertise help bring AI-enabled applications to thousands of autonomous, smart edge servers capable of managing myriad devices.
This OpenShift Commons Gathering on AI and Machine Learning is co-located with NVIDIA's GTC virtual event, available on demand from October 5-9!
A taste of enterprise open source at your fingertips. Step inside the Red Hat pop-up experience.
Learn: Watch demos; download e-books, guides, datasheets, and overviews to learn more about hybrid cloud, automation, cloud-native app development, our commitment to open source, and open technologies for every industry.
Play: Test your command line skills, protect the planet from the dangers of space, or help a pod escape from a disappearing digital landscape in our Open Source Arcade filled with games built with open source software. Learn more about the tools the developers used to create them.
Network: Chat with each other and Red Hatters in our Networking lounge.
Get swag: Request Red Hat swag to be sent directly to your door.
NVIDIA GTC attendees: The networking lounge will be staffed with Red Hatters throughout the GTC event to answer your questions. Step inside the Red Hat pop-up experience and join the conversation!