High performance computing with Red Hat
Organizations are increasingly using high performance computing (HPC) to solve their most pressing problems with data-driven decisions. The desire to use artificial intelligence (AI) at scale places new demands on existing HPC architectures. These new workloads require the capabilities of HPC to expand further, shifting away from traditional bare metal deployments toward container-based, Kubernetes-orchestrated hybrid cloud platforms.
High performance computing (HPC) refers to processing complex calculations at high speeds across multiple servers in parallel. Those groups of servers are known as clusters and are composed of hundreds or even thousands of compute servers that have been connected through a network.
Linux® is the dominant operating system for high performance computing, according to the TOP500 list, which tracks the world’s most powerful computer systems. All TOP500 supercomputers run Linux, and Red Hat® Enterprise Linux is the most popular operating system across HPC sites, including some of the top supercomputers in the world.
Researchers in a wide variety of fields, including fundamental science research, weather and climate research, and defense research, use high performance computing to evaluate and draw conclusions from large amounts of data. HPC clusters aid in modeling and simulation as well as advanced analytics.
Historically located in government and university research facilities, high performance computing is moving away from traditional on-premise-only data center architectures and toward the hybrid cloud.
High performance computing provides advantages to industrial and enterprise users across a variety of industries. Companies are using high performance computing clusters to support technical computing efforts, such as design and predictive modeling, in fields ranging from science to finance.
- An OS that delivers a consistent, flexible foundation built to run high performance workloads. Red Hat Enterprise Linux runs on the top 3 supercomputers in the world.
- An enterprise container orchestration platform that extends Kubernetes capabilities and provides consistent operations and application life cycle management at scale, using flexible topology options to support low-latency workloads anywhere.
- An open, massively scalable, simplified storage solution for modern data pipelines.
- Red Hat Joins Forces with U.S. Department of Energy Laboratories to Bridge the Gap Between High Performance Computing and Cloud Environments
- Red Hat OpenShift extends High Performance Computing (HPC) infrastructure from edge to exascale
- Red Hat Powers the Future of Supercomputing with Red Hat Enterprise Linux
- Build a dependable foundation for HPC
- Expanding Podman capabilities to deploy SIF-formatted containers
- Podman for running containerized HPC apps on exascale supercomputers
- Performance capabilities of OpenShift for scientific HPC workloads
- Guide for running Specfem scientific HPC workload on OpenShift
Explore what is possible with HPC and Red Hat