Kueue is a community-driven project with the goal of developing a resource management system for Kubernetes that efficiently manages batch workloads. It enables sharing a Kubernetes cluster among teams and users in a fair, cost-effective, and efficient way.

This blog post delves into what Kueue is and what its main use cases are, and introduces the Red Hat build of Kueue, a new operator from the Red Hat OpenShift team.

What is Kueue?

Diagram: the major components that make up Kueue. The batch workload contains the jobs; the Workload defines the jobs to be processed and their requirements; the LocalQueue is what you submit jobs to; and the ClusterQueue defines pools of resources and their availability and limits.

Kueue operates by acting as a central controller that queues jobs and decides when and where they should run. It is designed to manage batch jobs effectively by providing advanced queuing, quota management, and resource sharing capabilities. Kueue helps in optimizing cluster utilization and ensuring that resources are allocated in a fair and efficient manner.
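
To make this concrete, here is a minimal sketch of the core queuing objects, loosely following the upstream Kueue examples. The flavor, queue, and namespace names and the quota figures are illustrative assumptions, not values from any particular installation:

    # A ResourceFlavor describes a category of nodes; an empty one matches any node.
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: ResourceFlavor
    metadata:
      name: default-flavor
    ---
    # A ClusterQueue defines a pool of resources and the quota available from it.
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: ClusterQueue
    metadata:
      name: team-a-cq
    spec:
      namespaceSelector: {}   # admit workloads from any namespace
      resourceGroups:
      - coveredResources: ["cpu", "memory"]
        flavors:
        - name: default-flavor
          resources:
          - name: "cpu"
            nominalQuota: 16
          - name: "memory"
            nominalQuota: 64Gi
    ---
    # A LocalQueue is the namespaced entry point that users submit their jobs to.
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: LocalQueue
    metadata:
      name: team-a-queue
      namespace: team-a
    spec:
      clusterQueue: team-a-cq

Jobs submitted to the LocalQueue stay queued until the ClusterQueue has enough free quota, at which point Kueue admits them and lets them run.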

Kueue is particularly useful in the AI space, but there are other use cases too.

AI use cases for Kueue

You can't examine Kueue use cases without considering AI. While AI seems to insert itself into every topic and conversation, Kueue really does have a highly relevant part to play: it is key to managing and optimizing AI workloads within Kubernetes environments. To put it another way, if you are doing AI-related work, then you should be using Kueue. Here are some of the main AI use cases:

  • Job scheduling and prioritization: Kueue allows for the efficient scheduling and prioritization of AI training and inference jobs. This ensures that critical tasks are completed first and resources are allocated based on priority.
  • Resource management: AI workloads often require substantial computational resources, including GPUs and specialized hardware. Kueue manages these resources, ensuring that they are available when needed and are effectively utilized.
  • Quota management: For organizations with multiple teams running AI workloads, Kueue helps manage resource quotas, preventing any single team from monopolizing resources and ensuring fair access for everyone; see the sketch after this list.
  • Cost optimization: By efficiently managing and scheduling jobs, Kueue helps reduce idle time and optimize resource utilization, leading to significant cost savings.
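
As a rough illustration of the quota and prioritization points above, the sketch below gives a ClusterQueue an explicit GPU quota and defines a WorkloadPriorityClass that important training jobs can reference. The flavor name, quota figures, and class name are assumptions made up for this example:

    # A ClusterQueue with a GPU quota, so no single team can monopolize the accelerators.
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: ClusterQueue
    metadata:
      name: ai-training-cq
    spec:
      namespaceSelector: {}
      resourceGroups:
      - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
        flavors:
        - name: gpu-flavor        # assumed ResourceFlavor pointing at GPU nodes
          resources:
          - name: "cpu"
            nominalQuota: 64
          - name: "memory"
            nominalQuota: 256Gi
          - name: "nvidia.com/gpu"
            nominalQuota: 8
    ---
    # Workloads referencing a higher-value class are admitted ahead of lower-value ones.
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: WorkloadPriorityClass
    metadata:
      name: production-training
    value: 10000
    description: "Critical training runs that should be admitted first"

A job opts into the priority class by setting the kueue.x-k8s.io/priority-class label on its metadata.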

Other use cases for Kueue

Of course, it doesn't have to be all about AI. There are many other use cases that benefit from having Kueue in the stack. So while Kueue is great for AI workloads, there is life beyond AI, and Kueue is here to help.

  • Batch processing: Kueue can efficiently manage various batch processing jobs, such as data processing, simulations, and ETL (extract, transform, load) tasks; see the job sketch after this list.
  • High-performance computing (HPC): For HPC applications that require large amounts of computational resources, Kueue can manage job scheduling and resource allocation to optimize performance.
  • CI/CD pipelines: Kueue can be integrated into CI/CD pipelines to manage the execution of build, test, and deployment jobs, ensuring efficient resource utilization.
  • Financial modeling: In finance, complex models and simulations often require significant computational resources. Kueue can manage these jobs efficiently, ensuring timely completion and optimal resource use.
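
For the batch-processing case, queuing an ordinary Kubernetes Job only requires pointing it at a LocalQueue and letting Kueue decide when to unsuspend it. Here is a minimal sketch, assuming a LocalQueue named team-a-queue already exists in the team-a namespace (both names are assumptions carried over from the earlier example):

    apiVersion: batch/v1
    kind: Job
    metadata:
      generateName: etl-run-
      namespace: team-a
      labels:
        # Tells Kueue which LocalQueue should admit this job.
        kueue.x-k8s.io/queue-name: team-a-queue
    spec:
      suspend: true            # Kueue unsuspends the job once quota is available
      parallelism: 3
      completions: 3
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: registry.access.redhat.com/ubi9/ubi
            command: ["sleep", "60"]   # placeholder for the real ETL step
            resources:
              requests:
                cpu: "1"
                memory: 500Mi

Until the ClusterQueue backing team-a-queue has spare CPU and memory quota for three pods, the Job stays suspended; once admitted, it runs like any other Kubernetes Job.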

Introducing Red Hat Build of Kueue for OpenShift

Red Hat seeks to play a role in transforming innovative upstream open source projects into robust, enterprise-grade solutions for its customers. This happens in many ways, such as testing and quality assurance, to ensure the stability, security, and performance needed for mission-critical deployments. We continue to actively participate in and contribute to the upstream Kueue community, but at the same time we want to give our customers something that they can use in their production environments.

With all the above in mind, we're announcing the general availability (GA) of the Red Hat build of Kueue. The operator is available through the Red Hat catalog, making installation and integration into your OpenShift environment straightforward. Furthermore, it's a core component of the OpenShift offering. Whether you use it with our premier Red Hat OpenShift AI offering or for standalone batch processing, the Red Hat build of Kueue is just a few clicks and some YAML away.

What next?

If you're interested in trying it for yourself, grab the Red Hat build of Kueue operator, read the documentation, get it all configured and running, and push in some of your own jobs. Let us know how you get on, what use cases or workloads you are going to run, and what capabilities you would like to see next. Even better, get involved in the Kueue community working on new and exciting features.
