Overview
Red Hat OpenShift is the hybrid cloud application platform built to handle the data flowing through modern organizations at ever-increasing volume and speed. The demands of high performance computing (HPC) are no exception. Processing this data at a single location or destination is no longer affordable, which means acting on the data as close to its source as possible: high performance computing is moving to the edge.
Many industries are already using HPC. Oil & gas exploration, complex financial modeling, and DNA mapping and sequencing are just a few modern workloads that have massive data requirements and rely on HPC to drive breakthrough discoveries.
Scale HPC
Because of HPC’s scale, using it for advanced simulations and computations can produce outcomes and insights that would be difficult to achieve with traditional infrastructure. The fundamental architecture of HPC has gone largely unchanged over the past 20 years, but new applications demand new architectures. For example, modern applications often incorporate artificial intelligence (AI), which depends on high performance data analytics (HPDA) and requires staging massive data samples for easier consumption, as well as pulling in external frameworks. These requirements are much easier to satisfy when an application and its dependencies are packaged in containers. Existing HPC environments weren’t designed with containers in mind, so we must reexamine these architectures and find ways to bring them closer to today’s flexible cloud-native environments.
Red Hat is a leader in driving cloud-native innovation across hybrid multicloud environments. We are bringing this experience to massive-scale HPC deployments. Red Hat understands the collective needs and changing demands of the HPC landscape and wants to make Linux containers, Kubernetes and other building blocks of cloud-native computing more readily accessible to supercomputing sites.
Standards are critical to expanding the reach of HPC, from the edge to exascale. Common, accepted standards and practices, like those defined by the Open Container Initiative (OCI), are necessary for HPC to make the most of container technologies, from container security to scaling containerized workloads. To help containers meet the unique needs of the exascale computing world, Red Hat is working to enhance Podman and its associated container tooling for the intensive demands of containerized workloads on HPC systems.
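As a minimal sketch of what this looks like in practice (the image name, mount paths, and solver command below are hypothetical illustrations, not taken from Red Hat documentation), an HPC user could build an OCI image and run it rootless on a compute node with Podman:

```shell
# Build an OCI image from a Containerfile that bundles the application
# and its dependencies (MPI stack, libraries, runtime).
podman build -t localhost/hpc-solver:latest .

# Run it rootless, keeping the invoking user's UID inside the container
# and bind-mounting a scratch directory that holds the input data.
podman run --rm --userns=keep-id \
  -v /scratch/input:/data \
  localhost/hpc-solver:latest \
  mpirun -np 8 /opt/solver/run --input /data
```

Because the image follows the OCI format, the same artifact can move between a laptop, a supercomputing site, and a Kubernetes cluster without modification.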
A major challenge comes when the number of containers starts to increase exponentially. Large-scale simulations and other demanding workloads require a robust container orchestration platform. Kubernetes is the de facto standard for orchestrating containerized workloads across hybrid multicloud environments. Red Hat is a leading contributor to the Kubernetes community project and offers the industry’s leading enterprise Kubernetes platform, Red Hat OpenShift.
Red Hat OpenShift is emerging as a backbone for running containers at massive scale. With Red Hat OpenShift already established across datacenters, public clouds, and the edge, the standard components and practices of the platform can also aid HPC environments. Red Hat is exploring deployment scenarios for Kubernetes-based infrastructure at extreme scale, providing easier, well-defined mechanisms for delivering containerized workloads to HPC users.
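To make this concrete, a containerized batch workload can be described declaratively and handed to the orchestrator. The sketch below is a generic Kubernetes Job manifest of the kind OpenShift accepts; the job name, image, and resource figures are illustrative assumptions, not a prescribed configuration:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: lattice-sim              # hypothetical simulation job
spec:
  parallelism: 4                 # run 4 worker pods concurrently
  completions: 4                 # the job succeeds once 4 pods finish
  template:
    spec:
      restartPolicy: Never       # batch semantics: don't restart failed pods in place
      containers:
        - name: worker
          image: quay.io/example/lattice-sim:latest  # hypothetical image
          resources:
            requests:
              cpu: "8"           # reserve 8 cores per pod
              memory: 32Gi
            limits:
              nvidia.com/gpu: 1  # one GPU per pod, via the cluster's GPU device plugin
```

Because the workload is expressed this way, the same manifest can be applied in the datacenter, in a public cloud, or at an edge site.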
The transition from traditional HPC architecture and its massively parallel workloads to AI-enabled applications running in containers is not quick or easy, but it marks a step toward reducing the complexity, cost, and customization needed to run traditional HPC infrastructure. It also presents a chance to bring in modern application development techniques, increase portability, and add new capabilities more rapidly.
Who uses OpenShift for HPC?
Several organizations across industry verticals have already pioneered the transformation of their traditional HPC workflows to more modern, container-based intelligent applications using Red Hat OpenShift:
- At the Royal Bank of Canada, Red Hat OpenShift enables better collaboration among data scientists, data engineers, and software developers to speed up deployment of machine learning (ML) and deep learning (DL) models into production environments that use GPU-accelerated high performance infrastructure.
- With Red Hat OpenShift, Public Health England improves data and code portability and reusability, data sharing, and team collaboration across HPC and multicloud operations.
- Lawrence Livermore National Laboratory turned to OpenShift to develop best practices for interfacing HPC schedulers and cloud orchestrators, allowing more traditional HPC jobs to use modern container technologies.
Today, many organizations seek to link their HPC and cloud computing footprints with a standardized container toolset, helping to create common technology practices between cloud-native and HPC deployments. The customers above have demonstrated that it is possible to make massive improvements to traditional HPC workloads with AI/ML-driven applications powered by a hybrid cloud platform like Red Hat OpenShift. Additionally, by adopting modern technology infrastructure and relying on containers, HPC sites can benefit from greater consistency, speed, and efficiency.
These newfound capabilities can create competitive advantages and accelerate discoveries while organizations gain the flexibility and scale of cloud-native technologies. This, in turn, lets HPC workloads run at the edge where the data is generated or collected, on the most powerful exascale supercomputers, or anywhere in between.