Massive amounts of data are racing toward us at unheard-of velocity, but processing this data quickly at a centralized location is no longer possible for most organizations. How might we better act on this data to preserve its relevance? The answer lies in acting on the data as close to the source as possible. This means making data-driven decisions and getting answers to the most pressing questions in real time, across all of your computing environments, from the edge to exascale.
If you’re processing massive amounts of data at scale with multiple tasks running simultaneously, you are likely already using high-performance computing (HPC). Oil & gas exploration, complex financial modeling and DNA mapping and sequencing are just a few modern workstreams that have massive data requirements and rely on HPC to drive breakthrough discoveries.
With HPC, running advanced computational problems and simulations in parallel on highly optimized hardware and very fast networks helps deliver answers and create outcomes more quickly. Because of HPC’s sheer scale, traditional datacenter infrastructure would struggle to deliver similar results. And because that massive scale “just works,” HPC has largely gone unchanged over the past 20 years. Today, however, HPC is undergoing a transformation as it faces increased demand from the applications running on it.
For example, modern applications often use artificial intelligence (AI) that depends on high-performance data analytics (HPDA) and requires staging massive data samples for easier consumption, along with the inclusion of external frameworks. These requirements are much more easily met when an application and its dependencies are packaged in containers. Existing HPC workflows, however, aren’t exactly container-friendly, which means examining these architectures and finding ways to bring them closer to today’s flexible cloud-native environments.
Red Hat is a leader in driving cloud-native innovation across hybrid and multicloud environments, and we are taking that knowledge to massive-scale HPC deployments. We understand the collective needs and changing demands of the transforming HPC landscape and want to make Linux containers, Kubernetes and other building blocks of cloud-native computing more readily accessible to supercomputing sites.
Standards are a critical component in enabling computing innovation, especially when technologies must span from the edge to exascale. From container security to scaling containerized workloads, common, accepted standards and practices, like those defined by the Open Container Initiative (OCI), are necessary for the HPC world to get the most from container technologies. To help containers meet the unique needs of exascale computing, Red Hat is working to enhance Podman and the associated container tooling for the intensive demands of containerized workloads on HPC systems.
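To make this concrete, here is a minimal sketch of what launching a single containerized analysis step with Podman could look like from Python. It assumes the podman-py bindings and a rootless Podman socket; the socket path, image name and command are hypothetical placeholders, not anything prescribed by Red Hat or this article.

```python
# A minimal sketch, assuming the podman Python bindings (podman-py) and a
# rootless Podman socket; image name, socket path and command are hypothetical.
from podman import PodmanClient

# Rootless socket path for the current user (adjust the UID as needed).
uri = "unix:///run/user/1000/podman/podman.sock"

with PodmanClient(base_url=uri) as client:
    # Pull an OCI image from a registry, as any OCI-compliant tool would.
    client.images.pull("quay.io/example/hpda-analysis:latest")

    # Run one containerized analysis step and capture its output.
    output = client.containers.run(
        "quay.io/example/hpda-analysis:latest",
        command=["python", "analyze.py", "--input", "/data/sample.h5"],
    )
    print(output)
```

Because Podman speaks the same OCI formats used across cloud-native environments, the same image could later move unchanged onto a Kubernetes cluster or an HPC login node.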
But the real challenge comes when the number of containers starts to increase exponentially. A robust container orchestration platform is required to help HPC sites run large-scale simulations and other demanding workloads. Kubernetes is the de facto standard for orchestrating containerized workloads across hybrid multicloud environments, and Red Hat is both a leading contributor to the Kubernetes community project and the provider of the industry’s leading enterprise Kubernetes platform, Red Hat OpenShift.
We would like to see Kubernetes more widely adopted in HPC as a backbone for running containers at massive scale. With Red Hat OpenShift already established across the datacenter, public clouds and even the edge, the standard components and practices of the platform also show promise for HPC environments. This is where Red Hat is focusing next: targeting deployment scenarios for Kubernetes-based infrastructure at extreme scale and providing well-defined, easier-to-use mechanisms for delivering containerized workloads to HPC users.
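As an illustration of what delivering a containerized workload through Kubernetes can look like, the sketch below uses the official Kubernetes Python client to submit a batch Job that runs many identical worker pods in parallel. The image, namespace and command are hypothetical placeholders and nothing here is specific to OpenShift or to the deployments described above.

```python
# A minimal sketch, assuming the official Kubernetes Python client and an
# existing kubeconfig; image, namespace and command are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a cluster
batch_v1 = client.BatchV1Api()

container = client.V1Container(
    name="sim-worker",
    image="registry.example.com/hpc/simulation:latest",
    command=["python", "run_simulation.py"],
)

template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "parallel-sim"}),
    spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
)

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="parallel-sim"),
    spec=client.V1JobSpec(
        parallelism=64,   # how many worker pods run at once
        completions=64,   # how many successful pods finish the job
        backoff_limit=4,  # retries before the job is marked failed
        template=template,
    ),
)

batch_v1.create_namespaced_job(namespace="hpc-jobs", body=job)
```

Once the Job is submitted, Kubernetes handles scheduling, retries and cleanup of the worker pods, the kind of orchestration burden that traditional HPC schedulers or hand-rolled scripts otherwise carry.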
This transition from traditional HPC architecture and its massively parallel workloads to AI-enabled applications running in containers is not a quick or easy one, but it does mark a step toward reducing the complexity, cost and customization needed to run traditional HPC infrastructure. The transition also presents a chance to bring in modern application development techniques, increase portability and add new capabilities more rapidly.
Several organizations across industry verticals have already pioneered the transformation of their traditional HPC workflows to more modern, container-based intelligent applications on Red Hat OpenShift:
- At the Royal Bank of Canada, OpenShift enables better collaboration between data scientists, data engineers and software developers, speeding up the deployment of machine learning (ML) and deep learning (DL) models into production environments that use GPU-accelerated high-performance infrastructure.
- With Red Hat OpenShift, Public Health England improves data and code portability and reusability, data sharing and team collaboration across high-performance computing (HPC) and multicloud operations.
- Lawrence Livermore National Laboratory turned to OpenShift to develop best practices for interfacing HPC schedulers and cloud orchestrators, allowing more traditional HPC jobs to use modern container technologies.
Today, many organizations seek to link HPC and cloud computing footprints with a standardized container toolset, helping to create common technology practices between cloud-native and HPC deployments. These customers demonstrated that it is possible to make massive improvements to traditional HPC workloads with AI/ML-driven applications running on containers and Kubernetes, all powered by a hybrid cloud platform like Red Hat OpenShift. Additionally, by adopting modern technology infrastructure and relying on containers, HPC sites benefit from a consistent, Kubernetes-based interface into their systems and software.
These newfound capabilities can help create competitive advantages and accelerate discoveries while gaining the flexibility and scale of cloud-native technologies. This, in turn, enables HPC workloads to run at the edge, where the data is being generated or collected, on the most powerful exascale supercomputers, and anywhere in between.
About the author
Yan Fisher is a global evangelist at Red Hat, where he extends his expertise in enterprise computing to emerging areas that Red Hat is exploring.
Fisher has a deep background in systems design and architecture. He has spent the past 20 years of his career in the computer and telecommunications industries, where he has tackled areas as diverse as sales and operations, systems performance and benchmarking.
With an eye for innovative approaches, Fisher closely tracks partners' emerging technology strategies as well as customer perspectives on several nascent topics, such as performance-sensitive workloads and accelerators, hardware innovation and alternative architectures, and exascale and edge computing.