
Knative tips: How to expand event-driven autoscaling capabilities

Use Knative Serving and KEDA infrastructure to manage autoscaling based on event sources, rather than hardware utilization.

There are many reasons enterprises adopt Kubernetes to run web services, mobile applications, Internet of Things (IoT) edge streaming, artificial intelligence and machine learning (AI/ML), and other business applications. One of the biggest benefits of Kubernetes is the ability to autoscale applications on demand, which reduces the time required to respond to capacity incidents and helps keep your cloud platform reliable and stable enough to serve business services seamlessly.


Kubernetes autoscaling is based on hardware resource utilization (CPU, memory) through the Horizontal Pod Autoscaler (HPA). This creates a challenge for event-driven architectures. In an event-driven architecture, you probably have multiple event sources, such as Apache Kafka and message queue brokers, consuming message streams. Metrics from those sources, such as consumer lag or queue depth, are more relevant than a pod's CPU usage for deciding when applications need to scale out and in.
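To see the limitation concretely, here is a minimal sketch of a standard `autoscaling/v2` HPA that scales purely on CPU utilization. The Deployment name `event-consumer` and the thresholds are hypothetical, chosen only for illustration:

```yaml
# Standard CPU-based HPA sketch. "event-consumer" is a placeholder
# Deployment name; adjust min/max replicas and the CPU target for your workload.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: event-consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: event-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale out when average CPU exceeds 80%
```

Note that nothing in this resource can react to, say, a growing Kafka consumer lag: the pods may sit nearly idle on CPU while messages pile up in the broker.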

Kubernetes Event-Driven Autoscaling (KEDA) is designed to solve this challenge by autoscaling already-deployed applications based on event metrics. Knative Serving can also scale serverless applications on Kubernetes using its own Knative autoscaler. But what if you need to manage autoscaling for everything from conventional applications to serverless functions based on event sources?
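For the conventional-application half of that picture, KEDA lets you scale the same kind of Deployment on an event metric such as Kafka consumer lag instead of CPU. The sketch below uses KEDA's Kafka scaler; the Deployment name, bootstrap server, consumer group, and topic are all placeholders:

```yaml
# KEDA ScaledObject sketch scaling a Deployment on Kafka consumer lag.
# All names (event-consumer, my-cluster-kafka-bootstrap, my-group, orders)
# are hypothetical and must match your environment.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: event-consumer-scaler
spec:
  scaleTargetRef:
    name: event-consumer          # placeholder Deployment name
  minReplicaCount: 0              # KEDA can scale to zero, unlike a plain HPA
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap:9092  # placeholder
        consumerGroup: my-group                            # placeholder
        topic: orders                                      # placeholder
        lagThreshold: "50"        # target lag per replica before scaling out
```

Under the hood, KEDA feeds this metric to an HPA it manages for you, and additionally handles the zero-to-one activation that the HPA cannot do on its own.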

Fortunately, there is a way to design an event-driven autoscaling architecture that combines Knative and KEDA infrastructure. I'll discuss this at Red Hat's Event-Driven Architecture event on April 19, 2022. In my presentation, Event-driven autoscaling through KEDA and Knative Integration, I'll also explain how to deploy serverless applications (Quarkus) with Knative Serving and use KEDA to autoscale Knative Eventing components (KafkaSource) based on event consumption rather than standard resources (CPU, memory).
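One piece of that integration can be sketched as a KafkaSource annotated for KEDA-based autoscaling. The annotation keys below come from the experimental KEDA autoscaler support for Knative Eventing sources; the sink name, bootstrap server, and topic are placeholders, and you should verify the annotations against the versions you have installed:

```yaml
# Sketch of a Knative Eventing KafkaSource scaled by KEDA (experimental
# knative-sandbox integration). All endpoint and topic names are placeholders.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
  annotations:
    autoscaling.knative.dev/class: keda.autoscaling.knative.dev  # opt in to KEDA
    autoscaling.knative.dev/minScale: "0"
    autoscaling.knative.dev/maxScale: "10"
    keda.autoscaling.knative.dev/lagThreshold: "10"  # lag per replica target
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap:9092   # placeholder broker address
  topics:
    - orders                            # placeholder topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-quarkus-service          # hypothetical Knative Service sink
```

With this in place, the KafkaSource dispatcher itself scales with the backlog on the topic, while the Quarkus Knative Service it delivers to scales independently via the Knative autoscaler.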

You can access the slides or watch the video from the event below.


Daniel Oh

Daniel Oh works for Red Hat as a Senior Principal Technical Marketing Manager and serves as a CNCF ambassador, encouraging developers' participation in cloud-native application development at scale and speed.


