
Virtual event

Supercomputing 2020

Watch the recording from November 17, 2020

EVENT OVERVIEW

Red Hat’s participation at Supercomputing 2020

Attending Supercomputing 2020 this November? Connect with Red Hat to learn how our products can provide a foundation for even the most demanding computing environments.

Supercomputing is no longer the domain of custom-built hardware and software, and Red Hat is leading the change. From the world’s leading enterprise Linux platform tailored for HPC workloads to massively scalable, open hybrid cloud infrastructure powered by the industry-leading OpenShift Kubernetes platform, OpenStack IaaS, and Ceph storage, along with Ansible-based management and automation technologies, these technologies run seamlessly on a variety of architectures. They underpin leading supercomputers and play an important part in driving HPC into new markets and use cases, including AI, enterprise computing, quantum computing, and cloud computing.

Talk with our experts and learn how our product portfolio can help you run a variety of workloads at many different scales and increase the pace of innovation for your organization.

SPEAKING SESSIONS

Kevin Jones

Principal Product Manager, NVIDIA

NVIDIA GPU and Network Operator News Flash

Tuesday, Nov. 17, 2020 @ 11:30 AM EST 

 

In this session, hear from Kevin Jones, Technical Product Manager for NVIDIA EGX, on the latest updates to the NVIDIA GPU Operator and Network Operator. These operators let you provision and use both GPUs and Mellanox SmartNICs in a simple, automated fashion on top of Kubernetes and OpenShift. The update will include the roadmap for vGPU support, the new A100 GPU, Mellanox ConnectX-6 SmartNICs, and more.
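
For context, once the GPU Operator’s driver and device-plugin components are running, GPUs appear as a schedulable resource named nvidia.com/gpu on each node. The following minimal sketch is an illustration rather than material from the session; it assumes the kubernetes Python client is installed and a working kubeconfig points at the cluster.

    # Sketch: list the allocatable GPUs the GPU Operator's device plugin
    # advertises on each node. Requires: pip install kubernetes
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
        print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")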

 

Ryan Kraus

Staff Data Science Solutions Architect, Red Hat

Pushing Data Science to the Edge

On demand

 

There has been a long debate over the best system architectures for supporting new machine learning workflows. The incumbent distributed batch-style HPC systems are appealing because scientists often already have access to these resources. However, Kubernetes clusters have been grabbing mindshare for data-centric compute workloads through projects like Kubeflow and Open Data Hub. There are many reasons for this, but chief among them is the need for data scientists to push their models into a production application. To best illustrate the challenges of running code in production, we will analyze one of the more operationally challenging environments: the edge. Red Hat, HPE, and NVIDIA have partnered to create KubeFrame, which solves these production issues for data scientists and lets them stay focused on the science.

 

David Gray and Kevin Pouget

OpenShift Performance Engineering Team, Red Hat

Deploying scientific workloads on Red Hat OpenShift with the MPI Operator

Wednesday, Nov. 18, 2020 @ 11:45 AM EST

 

High Performance Computing (HPC) workloads increasingly rely on containers, which make applications easier to manage, preserve their dependencies, and add portability across different environments. Red Hat OpenShift Container Platform is an enterprise-ready Kubernetes-based platform for deploying containerized applications on shared compute resources. In this talk, we will show how to effectively deploy two scientific applications, GROMACS and SPECFEM3D Globe, on OpenShift using the MPI Operator from the Kubeflow project across two different distributed shared filesystems, Lustre and CephFS.
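
As a rough illustration of the approach, and not the presenters’ actual configuration, the sketch below submits a small MPIJob custom resource through the kubernetes Python client. The image name, namespace, and GROMACS command line are placeholders, and the kubeflow.org API version may differ between mpi-operator releases.

    # Sketch: submit a two-worker MPIJob for a containerized GROMACS run
    # via the Kubeflow MPI Operator. Requires: pip install kubernetes
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    mpijob = {
        "apiVersion": "kubeflow.org/v1",  # version may vary by operator release
        "kind": "MPIJob",
        "metadata": {"name": "gromacs-demo"},
        "spec": {
            "slotsPerWorker": 1,
            "mpiReplicaSpecs": {
                "Launcher": {"replicas": 1, "template": {"spec": {"containers": [
                    {"name": "launcher",
                     "image": "example/gromacs:latest",  # placeholder image
                     "command": ["mpirun", "-np", "2",
                                 "gmx_mpi", "mdrun", "-s", "bench.tpr"]}]}}},
                "Worker": {"replicas": 2, "template": {"spec": {"containers": [
                    {"name": "worker",
                     "image": "example/gromacs:latest"}]}}},  # placeholder image
            },
        },
    }

    api.create_namespaced_custom_object(
        group="kubeflow.org", version="v1",
        namespace="hpc-demo",  # placeholder namespace
        plural="mpijobs", body=mpijob)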

 

CJ Newburn

Principal Architect, NVIDIA

Magnum IO: IO Subsystem for the Modern Data Center

On Demand

 

This talk describes NVIDIA’s Magnum IO, the IO subsystem of the modern accelerated data center. The symmetry between the architectures of Magnum IO and CUDA is examined, the major HPC-related features are detailed, and performance highlights are offered. We invite you to join us in harnessing current features and developing future technologies, including the BlueField DPU line.

 

Pete Brey

Senior Principal Product Marketing Manager, Red Hat

Achieve massive scalability for data-intensive Kubernetes projects

On demand

 

With the increasing adoption of Kubernetes orchestration, applications and microservices are scaling massively while also creating, ingesting, and consuming massive amounts of data. Solutions must be able to scale simply and rapidly to support truly enormous amounts of data with predictable performance, and they must be able to keep vital applications and services running despite both planned and unforeseen failure events. Traditional storage is not built to handle the demands of tens of thousands of applications and microservices running simultaneously, nor to support new cloud-native pipelines. And while many new "container storage" solutions claim to deliver the needed data services, many lack the maturity to deliver proven performance at scale for applications in production. Red Hat Ceph Storage solves this problem.

 

Guillaume Moutier

Senior Principal Technical Evangelist, Red Hat

Implementing an automated X-Ray image data pipeline, the cloud-native way

Tuesday, Nov. 17, 2020 @ 2:45 PM EST

 

Data is becoming the bread and butter of many organizations, or at least something most couldn’t live without. Many applications and solutions, from storage to platforms and middleware, can help support the whole lifecycle of data through its acquisition, transformation, storage, and consumption. In this session, you’ll see how to create a data pipeline that automatically ingests chest X-ray images, classifies them according to pneumonia risk using AI inferencing, anonymizes them, retrains the model using machine learning, and finally redeploys the new model automatically. All of this using various open source tools like Rook-Ceph, Knative, Kafka, and Grafana.
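
As a hedged sketch of just the ingest step (the topic name, endpoint, and event shape are assumptions, not the session’s actual code), a producer might publish a "new image" event to a Kafka topic so that a downstream Knative service can pick it up, classify the image, and anonymize it.

    # Sketch: publish an ingest event to Kafka when an X-ray image lands in
    # object storage. Requires: pip install kafka-python
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="my-cluster-kafka-bootstrap:9092",  # placeholder endpoint
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Hypothetical event: bucket/key of an image that just arrived
    event = {"bucket": "xray-incoming", "key": "scan-0001.jpg"}
    producer.send("xray-images", event)  # placeholder topic name
    producer.flush()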

 

Erik Jacobs

Senior Principal Technical Marketing Manager, Red Hat

AI/ML with OpenShift

On demand

 

Data scientist workflows look a lot like DevOps workflows. Since containers and Red Hat OpenShift provide the foundation for DevOps, OpenShift is an excellent platform for data scientists and data engineers to fast-track their AI/ML and HPC projects from pilot to production. In this session you’ll see how Red Hat OpenShift, the Kubernetes platform for hybrid cloud, makes it convenient for data scientists both to get started and to operationalize AI/ML projects at scale, anywhere, expanding DevOps into MLOps.

 

Abhinav Joshi

Senior Manager, Product Marketing, Red Hat

Fast Track AI With Hybrid Cloud Powered by Kubernetes

On demand

 

Business leaders want data-driven insights to help improve customer experience. Data engineers, data scientists, and software developers want a self-service, cloud-like experience to access tools and frameworks, data, and compute resources anywhere, so they can rapidly build, scale, and share the results of their projects and accelerate delivery of AI-powered intelligent applications into production. This keynote will provide a brief overview of AI/ML use cases, required capabilities, and execution challenges. Next, we will discuss the value of hybrid cloud powered by containers, Kubernetes, and DevOps in helping fast-track AI/ML projects from pilot to production and accelerating delivery of intelligent applications. Finally, the session will share real-world success stories from various industries globally.

 

Karl W. Schulz

Research Associate Professor, The University of Texas at Austin

OpenHPC 2.0: The latest community-driven technology stack for HPC

On demand

 

Over the last several years, OpenHPC has emerged as a community-driven stack providing a variety of common, pre-built ingredients to deploy and manage an HPC Linux cluster, including provisioning tools, resource management, I/O clients, runtimes, development tools, containers, and a variety of scientific libraries. The 2.0 release adds support for containers, support for the Arm compiler on aarch64, and several other updates. Get the details during this informative talk.

 

Andrew Younge

Lead Investigator, SuperContainers project, Sandia National Laboratories

Modern container runtimes for exascale computing era

Thursday, Nov. 19, 2020 @ 12:45 PM EST

 

The Supercomputing Containers project (aka SuperContainers) is part of the Exascale Computing Project at the Department of Energy (DOE). It was launched to evaluate several container technologies in use at different DOE labs in anticipation of the arrival of exascale systems. That evaluation concluded that a more robust, production-quality container solution is required, one based on an open source software development model, supporting key standards, and able to run across multiple hardware architectures. In this talk we will share which container runtimes are best suited for exascale supercomputers.

 

Carlos Eduardo Arango Gutierrez

Software Engineer, Red Hat

Using Containers to Accelerate HPC

Tuesday, Nov. 10, 2020 @ 2:30 PM EST

 

Within just the past few years, the use of containers has revolutionized the way industries and enterprises develop and deploy computational software and distributed systems. The containerization model has gained traction within the HPC community as well, with the promise of improved reliability, reproducibility, portability, and levels of customization that were previously not possible on supercomputers. This adoption has been enabled by a number of HPC container runtimes that have emerged, including Singularity, Shifter, Enroot, Charliecloud, and others.

This tutorial will provide more advanced information on how to run MPI-based and GPU-enabled HPC applications, how to optimize I/O-intensive workflows, and how to set up GUI-enabled interactive sessions. Cutting-edge examples will include machine learning and bioinformatics. Users will leave with a solid foundational understanding of how to use containers on HPC resources through Shifter and Singularity, as well as the in-depth knowledge needed to deploy custom containers.
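
For orientation, a common pattern these runtimes support is the hybrid launch model, where the host’s MPI runtime starts the ranks and each rank executes inside a container. Below is a minimal sketch of that pattern, not the tutorial’s actual exercises; it assumes mpirun and Singularity are on the PATH, and the image and binary names are placeholders.

    # Sketch: host mpirun launches 4 ranks, each running inside a Singularity
    # container; --nv exposes the host's NVIDIA GPUs to the container.
    import subprocess

    subprocess.run(
        ["mpirun", "-np", "4",
         "singularity", "exec", "--nv",
         "myapp.sif",    # placeholder container image
         "./mpi_app"],   # placeholder MPI binary inside the image
        check=True,
    )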

 

archspec: A Library for Detecting, Labeling, and Reasoning About Microarchitectures

Thursday, Nov. 12, 2020 @ 3:30 PM EST

 

Optimizing scientific code for specific microarchitectures is critical for performance, as each new processor generation supports new, specialized vector instructions. Package managers and container ecosystems offer little support for this, however, and users often settle for generic, less optimized binaries because they run on a wide range of systems and are easy to install. This comes at a considerable cost in performance. In this paper we introduce archspec, a library for reasoning about processor microarchitectures. We present the design and capabilities of archspec, which include detecting and labeling microarchitectures, reasoning about them and comparing them for compatibility, and determining the compiler flags that should be used to compile software for a specific microarchitecture. We demonstrate the benefits archspec brings by discussing several use cases, including package management, optimized software stacks, and multi-architecture container orchestration.
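
archspec ships as a Python library, so its core capabilities can be shown in a few lines. The sketch below assumes archspec is installed (pip install archspec) and follows the API as documented upstream; the compiler version passed to optimization_flags is arbitrary.

    # Sketch: detect the host microarchitecture, query features, compare
    # targets for compatibility, and look up compiler optimization flags.
    import archspec.cpu

    host = archspec.cpu.host()                    # e.g. the 'skylake' target
    print(host.name, host.vendor)
    print("avx2" in host)                         # feature queries use `in`
    print(host > archspec.cpu.TARGETS["x86_64"])  # ancestry-based ordering (x86 hosts)
    print(host.optimization_flags("gcc", "10.2.0"))  # e.g. '-march=skylake -mtune=skylake'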

 

LIVE CHAT

Red Hat is available for live chat during event exhibit hours.

Tuesday, Nov. 17, 10:00 AM - 5:00 PM EST
Wednesday, Nov. 18, 10:00 AM - 5:00 PM EST
Thursday, Nov. 19, 10:00 AM - 3:00 PM EST

RED HAT POP-UP EXPERIENCE

A taste of enterprise open source at your fingertips. Step inside the Red Hat pop-up experience.

  • Learn: Watch demos; download e-books, guides, datasheets, and overviews to learn more about hybrid cloud, automation, cloud-native app development, our commitment to open source, and open technologies for every industry.
  • Play: Test your command line skills, protect the planet from the dangers of space, or help a pod escape from a disappearing digital landscape in our Open Source Arcade filled with games built with open source software. Learn more about the tools the developers used to create them. 
  • Get swag: Request Red Hat swag to be sent directly to your door.
