Kueue is a community-driven project that develops a resource management system for Kubernetes to efficiently manage batch workloads. It enables teams and users to share a Kubernetes cluster in a fair, cost-effective, and efficient way.

This blog post delves into what Kueue is and what its main use cases are, and introduces the Red Hat build of Kueue, a new operator from the Red Hat OpenShift team.

What is Kueue?

A diagram of the major components that make up Kueue: the batch workload that contains the jobs, the Workload object that describes a job and its resource requirements, the LocalQueue that jobs are submitted to, and the ClusterQueue that defines pools of resources and their availability and limits.

Kueue operates as a central controller that queues jobs and decides when and where they should run. It is designed to manage batch jobs effectively by providing advanced queuing, quota management, and resource sharing capabilities. Kueue helps optimize cluster utilization and ensures that resources are allocated fairly and efficiently.
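To make those components concrete, here is a minimal sketch of the core Kueue objects: a ResourceFlavor, a ClusterQueue that defines the shared quota, and a LocalQueue in a team's namespace that points at it. The names (default-flavor, team-cluster-queue, team-queue, team-a) and the quota values are illustrative, not taken from any particular deployment.

apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor          # illustrative name for a class of nodes
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: team-cluster-queue      # illustrative name
spec:
  namespaceSelector: {}         # accept workloads from any namespace
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: default-flavor
      resources:
      - name: "cpu"
        nominalQuota: 16        # example quota values
      - name: "memory"
        nominalQuota: 64Gi
      - name: "nvidia.com/gpu"
        nominalQuota: 8
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: team-queue              # illustrative name; jobs are submitted here
  namespace: team-a             # illustrative namespace
spec:
  clusterQueue: team-cluster-queue

Jobs submitted to the LocalQueue only start once the ClusterQueue has enough unused quota to admit them.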

Kueue is particularly useful in the AI space, but there are other use cases too.

AI use cases for Kueue

You can't examine Kueue use cases without considering AI. While AI seems to insert itself into every topic and conversation, Kueue really does have a highly relevant part to play: it is key to managing and optimizing AI workloads within Kubernetes environments. To put it another way, if you are doing AI-related work, then you should be using Kueue. Here are some of the main AI use cases:

  • Job scheduling and prioritization: Kueue allows for the efficient scheduling and prioritization of AI training and inference jobs. This ensures that critical tasks are completed first and resources are allocated based on priority.
  • Resource management: AI workloads often require substantial computational resources, including GPUs and specialized hardware. Kueue manages these resources, ensuring that they are available when needed and are effectively utilized (see the sketch after this list).
  • Quota management: For organizations with multiple teams running AI workloads, Kueue helps manage resource quotas, preventing any single team from monopolizing resources and ensuring fair access for everyone.
  • Cost optimization: By efficiently managing and scheduling jobs, Kueue helps reduce idle time and optimize resource utilization, leading to significant cost savings.
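As a concrete illustration of scheduling and GPU quota management, the sketch below submits a standard Kubernetes Job to a LocalQueue and requests a GPU; Kueue keeps the job suspended until the ClusterQueue has quota to admit it. The queue name, namespace, image, and priority class carry over from the earlier sketch and are illustrative assumptions, not a prescribed configuration.

apiVersion: kueue.x-k8s.io/v1beta1
kind: WorkloadPriorityClass
metadata:
  name: high-priority                             # illustrative priority class
value: 1000
description: "Urgent training runs"
---
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model                               # illustrative job name
  namespace: team-a                               # illustrative namespace
  labels:
    kueue.x-k8s.io/queue-name: team-queue         # submit the job to the LocalQueue
    kueue.x-k8s.io/priority-class: high-priority  # admit ahead of lower-priority jobs
spec:
  suspend: true                 # Kueue unsuspends the job once quota is available
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: quay.io/example/trainer:latest     # illustrative image
        resources:
          requests:
            cpu: "4"
            memory: 16Gi
            nvidia.com/gpu: "1" # counted against the ClusterQueue's GPU quota
          limits:
            nvidia.com/gpu: "1"

When the job finishes or is deleted, its share of the quota is released and the next queued job can be admitted.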

Other use cases for Kueue

Of course, it doesn't have to be all about AI. Many other use cases benefit from having Kueue in the stack. There is life beyond AI, and Kueue is here to help.

  • Batch processing: Kueue can efficiently manage various batch processing jobs, such as data processing, simulations, and ETL (extract, transform, load) tasks.
  • High-performance computing (HPC): For HPC applications that require large amounts of computational resources, Kueue can manage job scheduling and resource allocation to optimize performance.
  • CI/CD pipelines: Kueue can be integrated into CI/CD pipelines to manage the execution of build, test, and deployment jobs, ensuring efficient resource utilization.
  • Financial modeling: In finance, complex models and simulations often require significant computational resources. Kueue can manage these jobs efficiently, ensuring timely completion and optimal resource use.

Introducing Red Hat Build of Kueue for OpenShift

Red Hat seeks to play a role in transforming innovative upstream open source projects into robust, enterprise-grade solutions for its customers. This happens in many ways, such as testing and quality assurance, to ensure the stability, security, and performance suitable for mission-critical deployments. We continue to actively participate in and contribute to the upstream Kueue community, but at the same time we want to give our customers something that they can use in their production environments.

With all the above in mind, we're announcing the general availability (GA) of the Red Hat build of Kueue. This operator is available through the Red Hat catalog, making installation and integration into your OpenShift environment straightforward. Furthermore, it's a core component of the OpenShift offering. Whether you use it with our premier Red Hat OpenShift AI offering or for standalone batch processing, the Red Hat build of Kueue is just a few clicks and some YAML away.

What next?

If you're interested in trying it for yourself, grab the Red Hat build of Kueue operator, read the documentation, get it configured and running, and push in some of your own jobs. Let us know how you get on, what use cases or workloads you plan to run, and what capabilities you would like to see next. Even better, get involved in the Kueue community and help work on new and exciting features.
