Overview
Red Hat OpenShift is a Kubernetes platform built to handle the data flowing through modern organizations at ever-increasing volume and speed, and the demands of high performance computing (HPC) are no exception. Processing this data at a single, central location is no longer practical, which means acting on the data as close to its source as possible: high performance computing is moving to the edge.
Many industries are already using HPC. Oil & gas exploration, complex financial modeling, and DNA mapping and sequencing are just a few modern workstreams that have massive data requirements and rely on HPC to drive breakthrough discoveries.
Who uses OpenShift for HPC?
Several organizations across industry verticals have already pioneered the transformation of their traditional HPC workflows to more modern, container-based intelligent applications using Red Hat OpenShift:
- At the Royal Bank of Canada, Red Hat OpenShift enables better collaboration between data scientists, data engineers, and software developers to speed up deployment of ML and DL models into production environments that use GPU-accelerated high performance infrastructure.
- With Red Hat OpenShift, Public Health England improves data and code portability and reusability, data sharing, and team collaboration across HPC and multicloud operations.
- Lawrence Livermore National Laboratory turned to OpenShift to develop best practices for interfacing HPC schedulers and cloud orchestrators, allowing more traditional HPC jobs to use modern container technologies.
Today, many organizations seek to link HPC and cloud computing footprints with a standardized container toolset, helping to create common technology practices between cloud-native and HPC deployments. These customers have demonstrated that it is possible to make substantial improvements to traditional HPC workloads with AI/ML-driven applications running on containers and Kubernetes, all powered by a hybrid cloud platform like Red Hat OpenShift. By working with modern technology infrastructure and relying on containers, HPC sites also gain a single, consistent Kubernetes interface to their systems and software.
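To make that consistent Kubernetes interface concrete, a containerized batch workload of the kind HPC sites run can be expressed as a standard Kubernetes Job. The sketch below is illustrative only: the image name, command, and resource values are assumptions, not a Red Hat-documented configuration.

```yaml
# Minimal Kubernetes Job for an HPC-style batch task (illustrative).
apiVersion: batch/v1
kind: Job
metadata:
  name: hpc-sim-batch
spec:
  completions: 4      # run four instances of the simulation task in total
  parallelism: 2      # schedule at most two pods at a time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: simulation
        # Hypothetical image and command for the sketch
        image: registry.example.com/hpc/simulation:latest
        command: ["./run-simulation", "--steps", "1000"]
        resources:
          limits:
            nvidia.com/gpu: 1   # request one GPU via the device plugin
```

Submitted with `oc create -f job.yaml`, a task like this gets the same lifecycle, scheduling, and logging interface as any other Kubernetes workload, which is the consistency described above.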
These newfound capabilities can create competitive advantages and accelerate discoveries while gaining the flexibility and scale of cloud-native technologies. This, in turn, enables HPC workloads to run at the edge, where the data is generated or collected, on the most powerful exascale supercomputers, and anywhere in between.