Twice a year, the high-performance computing (HPC) community anxiously awaits the announcement of the latest edition of the Top500 list, which catalogs the most powerful computers on the planet. The excitement of a supercomputer breaking the coveted exascale barrier and moving into the top position typically overshadows the question of which country holds the record. As it turns out, the top 10 systems on the November 2019 Top500 list are unchanged from the previous revision, with Summit and Sierra still holding the #1 and #2 positions, respectively. Despite the natural uncertainty around the composition of the Top500 list, there is little doubt about the software technologies helping to reshape the HPC landscape. One of the technologies leading this charge, visible since the International Supercomputing Conference earlier this year, is containerization, lending further credence to how traditional enterprise technologies are influencing the next generation of supercomputing applications.

Containers are born of Linux, the operating system underpinning Top500 systems. Because of that, the adoption of container technologies has gained momentum, and many supercomputing sites already have some portion of their workflows containerized. As more supercomputers are used to run artificial intelligence (AI) and machine learning (ML) applications to solve complex problems in science, in disciplines including astrophysics, materials science, systems biology, weather modeling and cancer research, the focus of research is transitioning from purely computational methods to AI-accelerated approaches. This often requires repackaging applications and restaging data for easier consumption, which is where containerized deployments are becoming more and more important.

So what happens when you combine thousands of servers capable of running tens of thousands of these containers in an environment where multiple users need to run multiple jobs in parallel? You need an orchestration platform to run these containerized applications in a way that makes sense, and you need intelligent data storage that can help you marshal the data in just the right way. These problems are being addressed, respectively, by Kubernetes and by distributed software-defined storage, technologies that have already been adopted by enterprises and are now converging on HPC.
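To make the orchestration side concrete, below is a minimal sketch of how a containerized batch job might be submitted to a Kubernetes cluster with the official Python client. The namespace, image name, and resource sizes are illustrative assumptions, not details from any particular supercomputing site.

```python
# A minimal sketch of programmatic batch-job submission using the official
# Kubernetes Python client (pip install kubernetes). All names (namespace,
# image, resource sizes) are hypothetical.
from kubernetes import client, config

def submit_batch_job(name, image, command):
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod

    container = client.V1Container(
        name=name,
        image=image,
        command=command,
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "8Gi"},  # hypothetical per-task sizing
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(template=template, backoff_limit=2),
    )
    # The Job controller schedules the pod and retries failures up to backoff_limit.
    client.BatchV1Api().create_namespaced_job(namespace="hpc-jobs", body=job)

submit_batch_job("lattice-sim-001", "registry.example.com/sim:latest", ["./run_simulation"])
```

Because the scheduler, not the user, decides where each pod lands, the same submission code works whether the cluster spans ten nodes or ten thousand.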

For the past two decades, Red Hat Enterprise Linux (RHEL) has served as the foundation for building software stacks for many supercomputers. We are looking to continue this trend with the next generation of systems using our Red Hat OpenShift Kubernetes container platform, augmented by Ceph-based container-native storage.

Open source is more than software - it's about community collaboration based on open standards and a commitment to continuous innovation. This isn't an achievable solo feat, which is why Red Hat has built a robust ecosystem of partners to help deliver tangible benefits to our joint customers. As an example, we are excited to see NVIDIA GPUs now working seamlessly with Arm-based servers. Through collaboration with NVIDIA, we are making sure that our customers at leading supercomputing sites, like the DOE national laboratories, have access to the latest development tools for accelerated computing running on top of RHEL.

You can learn a lot more about Red Hat's involvement in HPC at the Supercomputing 2019 (SC19) conference in Denver, Colorado, from November 18 through November 21. If you are attending SC19, we encourage you to stop by our booth (#1635) and see how Red Hat is accelerating the ongoing convergence of HPC and enterprise computing. At the booth you will be able to discover inventive solutions and demonstrations, gain a deeper understanding of the underlying technologies, and get first-hand access to our experts, who can speak with you about:

Proven, trusted infrastructure

Red Hat Enterprise Linux provides the foundation for several top supercomputers, is available across multiple hardware architectures, like IBM Power and 64-bit Arm, and enables specialized computational devices, like GPUs and network accelerators. It is also at the core of Red Hat OpenStack Platform and Red Hat OpenShift, both of which are part of many HPC environments. Learn about RHEL 8, which includes container management tools and support for the universal base image (UBI).

Storage for the hybrid cloud

Scientists can quickly create projects using containers, but they often rely on large repositories of object data that have to be well integrated, portable, and persistent. Learn how Red Hat Ceph Storage, the persistent storage layer powering OpenShift Container Storage, delivers file, block, and object storage protocols across the hybrid cloud.
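Because Ceph's RADOS Gateway exposes an S3-compatible object API, the same client code can move data between an on-premise cluster and a public cloud. Here is a minimal sketch using boto3; the endpoint, credentials, bucket, and key names are placeholder assumptions.

```python
# A minimal sketch of object I/O against Ceph's S3-compatible RADOS Gateway
# using boto3 (pip install boto3). Endpoint, credentials, bucket, and keys
# are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com:8080",  # assumed RADOS Gateway endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a result file and fetch an input dataset. The identical calls work
# against AWS S3, which is what makes the storage layer hybrid-cloud portable.
s3.upload_file("results.h5", "experiment-data", "runs/001/results.h5")
obj = s3.get_object(Bucket="experiment-data", Key="inputs/training-set.csv")
data = obj["Body"].read()
```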

Emerging open technologies

See how Red Hat leverages different hardware architectures, enables various acceleration technologies and network interconnects, and helps drive open innovation and standardization in high-performance computing through collaborative community efforts like the OpenHPC project.

The Kubernetes platform for big ideas

HPC workloads no longer need to run on bare-metal hardware; they can often be deployed in private, public, or hybrid clouds using containers that are managed and orchestrated by Kubernetes. Learn how data scientists can run their HPC/AI/ML workloads with much-needed scalability, flexibility, and portability on the Kubernetes-based Red Hat OpenShift Container Platform.
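As one illustration, GPU-accelerated ML workloads are placed on suitable nodes through the Kubernetes extended-resource mechanism, which OpenShift inherits. The sketch below, again using the Python client, shows a pod requesting a GPU; everything except the standard nvidia.com/gpu resource name is an assumption.

```python
# A minimal sketch of a pod that requests a GPU. The NVIDIA device plugin
# advertises GPUs to Kubernetes as the extended resource "nvidia.com/gpu";
# image, pod, and namespace names here are hypothetical.
from kubernetes import client, config

config.load_kube_config()

training_container = client.V1Container(
    name="train",
    image="registry.example.com/ml-train:latest",
    command=["python", "train.py"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"},  # scheduler places this pod on a GPU node
    ),
)
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="gpu-train-001"),
    spec=client.V1PodSpec(restart_policy="Never", containers=[training_container]),
)
client.CoreV1Api().create_namespaced_pod(namespace="ml-experiments", body=pod)
```

The same request syntax extends to other accelerators and interconnect resources a device plugin advertises, which is how one scheduling model covers mixed HPC/AI hardware.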

You can also see exciting demonstrations of our technologies being used in HPC solutions.

We also encourage our customers and partners to attend one of the many mini-theater sessions we will be hosting in the Red Hat booth. Visit this page for the latest updates to the mini-theater schedule, access to presentations and additional resources, and a list of auxiliary activities happening during the conference. We hope to see you in Denver!


About the author

Yan Fisher is a global evangelist at Red Hat, where he extends his expertise in enterprise computing to emerging areas that Red Hat is exploring.

Fisher has a deep background in systems design and architecture. He has spent the past 20 years of his career in the computer and telecommunications industries, where he tackled areas as diverse as sales and operations, systems performance, and benchmarking.

With an eye for innovative approaches, Fisher closely tracks partners' emerging technology strategies, as well as customer perspectives on several nascent topics such as performance-sensitive workloads and accelerators, hardware innovation and alternative architectures, and exascale and edge computing.
