
Architect event-driven autoscaling for serverless Java with Kubernetes

Try a new sandbox project to integrate Kubernetes Event-Driven Autoscaling (KEDA) and Knative for event-driven serverless applications.
Photo by CHUTTERSNAP on Unsplash

What is the hidden truth of Kubernetes for you as a platform architect when business applications must scale out in response to network traffic spikes? You might say this shouldn't be a problem, because the Horizontal Pod Autoscaler (HPA) in Kubernetes can scale containerized applications in and out dynamically and automatically to match incoming traffic. This capability makes your life easier without unexpected system overhead or errors. For example, you can create an HPA resource for each application to manage its autoscaling, as shown in this diagram.
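As a concrete sketch of that per-application pattern, here is what one such HPA resource might look like. The application name `order-service` and the CPU target are hypothetical placeholders, not from the article:

```yaml
# One HPA per application: scale the order-service Deployment
# between 1 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service-hpa        # hypothetical application name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU exceeds 80%
```

Each additional application would need its own HPA resource of this shape, which is the overhead the rest of this article addresses.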

Autoscaling architecture on Kubernetes (Daniel Oh, CC BY-SA 4.0)

However, what if this autoscaling architecture breaks down because scaling must be driven by metrics from external services (for example, Apache Kafka) rather than by Kubernetes' built-in resource metrics? Such an event-driven architecture is one of the most popular cloud-native architectures for running microservices on Kubernetes.

This article explains how platform architects can redesign the autoscaling architecture for event-driven applications, covering both standard workloads and serverless functions on top of Kubernetes.

[ Use distributed, modular, and portable components to gain technical and business advantages. Download Event-driven architecture for a hybrid cloud blueprint. ]

Understand Kubernetes custom metrics

Your first question might be, "Can't Kubernetes handle autoscaling triggered by external events?" The short answer is yes, if you implement a custom metrics adapter for the external service, as shown below. However, there is a limitation: you can register only one adapter for the custom metrics API, even if each business service needs to scale out on metrics from a different external system. For example, you can consume Prometheus metrics for only one of your applications (such as an order service).
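The limitation comes from how an adapter is wired into Kubernetes: it claims an entire metrics API group through an `APIService` registration, and only one backing service can own that registration at a time. A rough sketch of such a registration (service name and namespace are hypothetical, based on a typical Prometheus Adapter install) looks like this:

```yaml
# Registers a single backing service as THE provider for the
# custom metrics API group. A second adapter cannot register the
# same group, which is the limitation described above.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta1
  service:
    name: prometheus-adapter     # hypothetical adapter Service
    namespace: monitoring        # hypothetical namespace
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```

Because this registration is exclusive, every application that needs custom-metric autoscaling must go through the same single adapter.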

Autoscaling architecture on Kubernetes with external services (Daniel Oh, CC BY-SA 4.0)

[ Learn more about autoscaling in Red Hat OpenShift on AWS (ROSA) in this tutorial. ]

Redesign autoscaling infrastructure for event-driven applications

Kubernetes Event-Driven Autoscaling (KEDA) makes it straightforward to scale standard Kubernetes resources, such as deployments, jobs, and custom resources, automatically. It provides more than 60 built-in scalers, and users can also build their own external scalers. KEDA does not manipulate the data; it just scales the workload.

With KEDA, you can redesign your autoscaling infrastructure on Kubernetes, as shown in the following diagram. You no longer need to create an HPA resource for each application: KEDA's ScaledObject and ScaledJob resources scrape external event metrics and scale your application out and in automatically.
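As an illustration, a ScaledObject that scales a deployment on Kafka consumer lag might look like the sketch below. The deployment name, topic, consumer group, and bootstrap address are hypothetical placeholders:

```yaml
# KEDA ScaledObject: scale order-service between 0 and 10 replicas
# based on consumer lag on a Kafka topic. KEDA creates and manages
# the underlying HPA for you.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-service-scaler
spec:
  scaleTargetRef:
    name: order-service          # hypothetical Deployment name
  minReplicaCount: 0             # scale to zero when no events arrive
  maxReplicaCount: 10
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: my-cluster-kafka-bootstrap:9092  # hypothetical
      consumerGroup: order-group                         # hypothetical
      topic: orders                                      # hypothetical
      lagThreshold: "10"         # add a replica per ~10 messages of lag
```

Note the contrast with the plain HPA approach: the external metric source is declared directly in the trigger, with no separate metrics adapter to install.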

Designing an autoscaling infrastructure with KEDA (Daniel Oh, CC BY-SA 4.0)

However, KEDA can't yet manage serverless functionality on Kubernetes. For that, consider the Knative project, which allows you to deploy an application as a serverless function using a Knative Service. To deploy your existing applications as Knative Services, you need to adapt them for event-driven architecture with CloudEvents and rewrite the Kubernetes manifests around the Knative Service rather than standard Kubernetes resources. KEDA alone does not do this for you.
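For comparison, a minimal Knative Service manifest replaces the usual Deployment, Service, and HPA trio with a single resource. The image reference and scale bounds here are hypothetical:

```yaml
# A Knative Service: Knative Serving handles routing, revisions,
# and request-driven autoscaling (including scale to zero).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: order-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale to zero
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
      - image: quay.io/example/order-service:latest  # hypothetical image
```

Out of the box, this autoscaling reacts to HTTP request concurrency, not to external event sources such as Kafka, which is the gap the next section addresses.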

What if you could combine these two great projects, KEDA and Knative, to manage your event-driven autoscaling for serverless applications?

Integrate KEDA with Knative

There is a new sandbox project that integrates KEDA and Knative for event-driven serverless applications. With it, you can use KEDA to autoscale Knative Eventing infrastructure, such as Knative Eventing sources and channels. This autoscaling allows the infrastructure to handle higher loads or to save resources by scaling to zero when idle.

For example, when you need to scale your application based on an external Apache Kafka cluster, you can autoscale serverless functions faster and more flexibly, reacting to large bursts of events instantly and dynamically.
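As a rough sketch of how this integration is typically configured, annotations on a Knative Eventing source delegate its autoscaling to KEDA. The annotation names below follow the Knative sandbox project's convention as I understand it, and the Kafka connection details are hypothetical placeholders; check the project's documentation for the exact, current keys:

```yaml
# A KafkaSource whose autoscaling is delegated to KEDA via
# annotations, so the source itself can scale to zero when idle.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: orders-source
  annotations:
    autoscaling.knative.dev/class: keda.autoscaling.knative.dev
    autoscaling.knative.dev/minScale: "0"
    autoscaling.knative.dev/maxScale: "5"
    keda.autoscaling.knative.dev/pollingInterval: "30"  # seconds
    keda.autoscaling.knative.dev/cooldownPeriod: "45"   # seconds
spec:
  bootstrapServers:
  - my-cluster-kafka-bootstrap:9092    # hypothetical
  topics:
  - orders                             # hypothetical
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-service              # hypothetical Knative Service
```

With this in place, KEDA watches the Kafka lag and scales the eventing source, while Knative Serving scales the function that receives the events.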

KEDA with Knative integration architecture (Daniel Oh, CC BY-SA 4.0)

With this architecture, you can choose any programming language, but I suggest Quarkus, a Kubernetes-native Java framework. It enables you to build and deploy Java applications as Knative Services directly, generating the Kubernetes YAML files and packaging a container image automatically. You can also configure KEDA and Knative specifications programmatically. Find tutorials and sample applications in my GitHub repository.
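To give a flavor of that workflow, a Quarkus `application.properties` along these lines tells the Kubernetes extension to emit a Knative Service manifest and build the container image during the normal build. The registry group and scale bounds are hypothetical; verify the property names against the Quarkus Kubernetes extension guide for your version:

```properties
# Generate a Knative Service manifest instead of a plain Deployment
quarkus.kubernetes.deployment-target=knative

# Build (and push) the container image as part of the build
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.group=example      # hypothetical registry group

# Knative autoscaling bounds, emitted as revision annotations
quarkus.knative.min-scale=0
quarkus.knative.max-scale=10
```

Running the usual `mvn package` (or Gradle equivalent) then produces a ready-to-apply Knative manifest alongside the application artifact.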

Wrap up

You learned how to redesign the event-driven autoscaling architecture on Kubernetes for serverless Java applications using KEDA and Knative integration. Quarkus can simplify Knative service deployment and KEDA configurations for Java developers and cloud architects. I will be presenting on this topic at several conferences this year, including Devnexus and Microservices Day; consult my LinkedIn (linked in my bio) for opportunities to learn more. You can also find relevant tutorials and demo videos on my YouTube channel.

Daniel Oh

Daniel Oh works for Red Hat as a Senior Principal Technical Marketing Manager and is also a CNCF ambassador, encouraging developers' participation in cloud-native app development at scale and speed.
