Kueue is a community-driven project with the goal of developing a resource management system for Kubernetes that efficiently manages batch workloads. It enables teams and users to share a Kubernetes cluster in a fair, cost-effective, and efficient way.

This blog post explains what Kueue is and what its main use cases are, and introduces the Red Hat build of Kueue, a new operator from the Red Hat OpenShift team.

What is Kueue?

A diagram showing the major components that make up Kueue: the batch workload that contains the jobs; the Workload, which describes the jobs to be processed and their requirements; the LocalQueue, which you submit jobs to; and the ClusterQueue, which defines pools of resources and their availability and limits.

Kueue acts as a central controller that queues jobs and decides when and where they should run. It is designed to manage batch jobs effectively by providing advanced queuing, quota management, and resource sharing capabilities. Kueue helps optimize cluster utilization and ensures that resources are allocated in a fair and efficient manner.
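To make those components concrete, here is a minimal sketch using the upstream Kueue API (kueue.x-k8s.io/v1beta1): a ResourceFlavor, a cluster-scoped ClusterQueue that defines a pool of quota, and a namespaced LocalQueue that users submit their jobs to. The names, namespace, and quota values are placeholders, not a recommended configuration.

apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor            # placeholder; an "empty" flavor that matches any node
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: team-a-cq                 # placeholder name
spec:
  namespaceSelector: {}           # accept workloads from all namespaces
  resourceGroups:
  - coveredResources: ["cpu", "memory"]
    flavors:
    - name: default-flavor
      resources:
      - name: "cpu"
        nominalQuota: 16          # example quota values
      - name: "memory"
        nominalQuota: 64Gi
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: team-a-queue              # placeholder; this is the queue users reference in their jobs
  namespace: team-a               # placeholder namespace
spec:
  clusterQueue: team-a-cq         # binds the namespaced queue to the cluster-wide quota pool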

Kueue is particularly useful in the AI space, but it serves other use cases as well.

AI use cases for Kueue

You can't examine Kueue use cases without considering AI. While AI seems to insert itself into every topic and conversation, Kueue really does have a highly relevant part to play: it is key to managing and optimizing AI workloads within Kubernetes environments. To put it another way, if you are doing AI-related work, then you should be using Kueue. Here are some of the main AI use cases:

  • Job scheduling and prioritization: Kueue allows for the efficient scheduling and prioritization of AI training and inference jobs. This ensures that critical tasks are completed first and resources are allocated based on priority.
  • Resource management: AI workloads often require substantial computational resources, including GPUs and specialized hardware. Kueue manages these resources, ensuring that they are available when needed and are effectively utilized.
  • Quota management: For organizations with multiple teams running AI workloads, Kueue helps manage resource quotas, preventing any single team from monopolizing resources and ensuring fair access for everyone. (A sketch of quota and priority configuration follows this list.)
  • Cost optimization: By efficiently managing and scheduling jobs, Kueue helps reduce idle time and optimize resource utilization, leading to significant cost savings.
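As a rough sketch of how the quota and prioritization points above map onto Kueue objects, the ClusterQueue below reserves GPU quota for one team, and the WorkloadPriorityClass lets critical training jobs be admitted ahead of lower-priority work. The names, quota values, and the gpu-flavor ResourceFlavor are illustrative assumptions.

apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: ml-team-cq                      # placeholder name
spec:
  namespaceSelector: {}
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: gpu-flavor                  # assumes a ResourceFlavor named gpu-flavor exists
      resources:
      - name: "cpu"
        nominalQuota: 32
      - name: "memory"
        nominalQuota: 256Gi
      - name: "nvidia.com/gpu"
        nominalQuota: 8                 # nominal GPU quota for this team's queue
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: WorkloadPriorityClass
metadata:
  name: production-training            # placeholder name
value: 10000                           # higher values are admitted first
description: "Priority for production training jobs"

A job opts into the priority class by adding the kueue.x-k8s.io/priority-class: production-training label to its metadata.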

Other use cases for Kueue

Of course, it doesn't have to be all about AI. There are many other use cases that benefit from having Kueue in the stack; there is life beyond AI, and Kueue is here to help.

  • Batch processing: Kueue can efficiently manage various batch processing jobs, such as data processing, simulations, and ETL (extract, transform, load) tasks. (A minimal Job example follows this list.)
  • High-performance computing (HPC): For HPC applications that require large amounts of computational resources, Kueue can manage job scheduling and resource allocation to optimize performance.
  • CI/CD pipelines: Kueue can be integrated into CI/CD pipelines to manage the execution of build, test, and deployment jobs, ensuring efficient resource utilization.
  • Financial modeling: In finance, complex models and simulations often require significant computational resources. Kueue can manage these jobs efficiently, ensuring timely completion and optimal resource use.
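On the workload side, very little changes: a standard Kubernetes Job only needs a label pointing at a LocalQueue, and Kueue unsuspends it once quota is available. A minimal sketch, assuming a LocalQueue named team-a-queue exists in the team-a namespace (the image and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  generateName: etl-job-                        # placeholder name
  namespace: team-a                             # placeholder namespace
  labels:
    kueue.x-k8s.io/queue-name: team-a-queue     # the LocalQueue to submit to
spec:
  suspend: true                                 # start suspended; Kueue unsuspends the Job when it admits it
  parallelism: 3
  completions: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: quay.io/example/etl-worker:latest          # placeholder image
        command: ["/bin/sh", "-c", "echo processing; sleep 30"]
        resources:
          requests:
            cpu: "1"
            memory: 200Mi

The resource requests are what Kueue counts against the ClusterQueue quota, so every container in a queued job should declare them.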

Introducing Red Hat Build of Kueue for OpenShift

Red Hat seeks to play a role in transforming innovative upstream open source projects into robust, enterprise-grade solutions for its customers. This takes many forms, such as testing and quality assurance, to ensure the stability, security, and performance suitable for mission-critical deployments. We continue to actively participate in and contribute to the upstream Kueue community, but at the same time we want to give our customers something they can use in their production environments.

With all the above in mind, we're announcing the general availability (GA) of Red Hat’s build of Kueue. This operator is available through our Red Hat catalog, making installation and integration into your OpenShift environment straightforward. Furthermore, it's a core component of the OpenShift offering. Whether you use it with our premier Red Hat OpenShift for AI offering or for standalone batch processing, Red Hat’s build of Kueue is just a few clicks and some YAML away.
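If you prefer a declarative install over the console, an Operator Lifecycle Manager Subscription along these lines should do it. The package name, namespace, and channel shown here are assumptions; check the operator's entry in OperatorHub on your cluster for the exact values.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kueue-operator                   # assumed package name; verify in OperatorHub
  namespace: openshift-kueue-operator    # assumed namespace; create it (plus an OperatorGroup) first
spec:
  channel: stable-v1.0                   # assumed channel; use whatever the catalog entry lists
  name: kueue-operator                   # assumed package name
  source: redhat-operators               # Red Hat operator catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic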

What next?

If you're interested in trying it for yourself, grab the Red Hat build of Kueue operator, read the documentation, get it configured and running, and push in some of your own jobs. Let us know how you get on, what use cases or workloads you plan to run, and what capabilities you would like to see next. Even better, get involved in the Kueue community and help build new and exciting features.
