Introduction to Kubernetes patterns

A pattern describes a repeatable solution to a problem. Kubernetes patterns are design patterns for container-based applications and services.  

Kubernetes can help developers write cloud-native apps, and it provides a library of application programming interfaces (APIs) and tools for building applications.

However, Kubernetes doesn’t provide developers and architects with guidelines for how to use these pieces to build a complete system that meets business needs and goals. 

Patterns are a way to reuse architectures. Instead of creating an architecture from scratch, you can use existing Kubernetes patterns, which also help ensure that things work the way they're supposed to.

When you are trying to deliver important business services on top of Kubernetes, learning through trial and error is too time-consuming, and can result in problems like downtime and disruption. 

Think of a pattern like a blueprint; it shows you the way to solve a whole class of similar problems. A pattern is more than just step-by-step instructions for fixing one specific problem.

Using a pattern may produce somewhat different outcomes; patterns aren't meant to provide identical solutions. Your system may look different from another system built with the same pattern, but both will share common characteristics.

By using Kubernetes patterns, developers can create cloud-native apps with Kubernetes as a runtime platform.

Predictable demands patterns

Predictable demands patterns are foundational Kubernetes patterns. This type of pattern ensures that your apps comply with the fundamental principles of containerized apps so that they are ready to be automated using Kubernetes. 

Predictable demands patterns explain why every container needs to declare its resource requirements and runtime dependencies. Defining these requirements allows Kubernetes to choose the right place to deploy the app within your cluster.

Examples of what you can define using these patterns are runtime dependencies, resource profiles, pod priority, and project resources.

Example: Resource profiles

You will need to specify the resource requirements, such as CPU and memory, of a container in the form of a request and a limit. A request refers to the minimum amount of resources needed, while a limit refers to the maximum amount of resources a container can consume. 

The request amount is used by the scheduler when placing pods on nodes. The scheduler will only schedule a pod to a node that has enough capacity to accommodate it.

If resource requirements aren't set, the container is treated as the lowest priority and is killed first if the node runs out of available resources.
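
For illustration, here is a minimal pod manifest that declares a resource profile; the pod name, image, and the specific request and limit values are hypothetical placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: random-generator                 # hypothetical pod name
spec:
  containers:
  - name: random-generator
    image: registry.example.com/random-generator:1.0   # placeholder image
    resources:
      requests:          # minimum resources the container needs; used by the scheduler
        cpu: 100m
        memory: 200Mi
      limits:            # maximum resources the container is allowed to consume
        cpu: 200m
        memory: 200Mi
```

Because requests are set but lower than the limits, this pod falls into the Burstable quality-of-service class; a pod with no requests or limits at all would be BestEffort and the first candidate to be killed under resource pressure.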

Configuration patterns

All applications require configuration, and although storing configuration in the source code is an easy option, it doesn't give you the flexibility to adapt the configuration without rebuilding the app image. External configuration allows the app to adapt based on the environment.

Configuration patterns will help you to customize and adapt your apps with external configurations for different development, integration, and production environments. 

Example: EnvVar configuration

The EnvVar configuration pattern works best for small sets of configuration variables, where universally supported environment variables are used to externalize configuration.

Externalizing the configuration of an app allows you to make changes to configuration even after the app has been built, compared with hardcoded configuration that would require a rebuild of the app. 

Using environment variables to externalize configuration works well because every operating system can define these variables and they are accessible from any programming language.

With environment variables, hardcoded default values are typically defined at build time and then overridden at runtime.

In Kubernetes, environment variables can be set directly in the pod specification of a controller such as a Deployment or ReplicaSet. You can attach values to the variables directly, or manage them separately from the pod definition.

To manage them separately, you can delegate to Kubernetes Secrets (for sensitive data) and ConfigMaps (for non-sensitive configuration).
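
As a sketch, the container definition below sets one environment variable directly and pulls two more from a ConfigMap and a Secret; the pod, image, ConfigMap, Secret, and key names are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo                    # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0     # placeholder image
    env:
    - name: LOG_LEVEL                  # value set directly in the pod spec
      value: "info"
    - name: PATTERN                    # non-sensitive value managed in a ConfigMap
      valueFrom:
        configMapKeyRef:
          name: app-config             # hypothetical ConfigMap name
          key: pattern
    - name: DB_PASSWORD                # sensitive value delegated to a Secret
      valueFrom:
        secretKeyRef:
          name: app-secret             # hypothetical Secret name
          key: password
```

The application reads only environment variables at runtime, so the ConfigMap and Secret can be created and managed independently of the pod definition.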

Advanced patterns

These patterns include complex topics and the newest pattern implementations. The controller, operator, elastic scale, and image builder patterns are all examples of advanced Kubernetes patterns.

Example: Elastic scale 

The elastic scale pattern scales an application horizontally by adapting the number of pod replicas, vertically by adapting the resource requirements of pods, and at the cluster level by changing the number of cluster nodes.

Although you can handle scale manually, the elastic scale pattern allows Kubernetes to scale automatically based on load. 

With Kubernetes, you can change a container's resources, the desired replicas for a service, or the number of nodes in the cluster. Kubernetes can also monitor external load and capacity-related events, analyze the current state of containers, and scale to the desired performance.

Horizontal pod autoscaling lets you define an application capacity that is not fixed but has enough headroom to handle a varying load. A horizontal pod autoscaler is used to scale the pods.

To use the horizontal pod autoscaler, the metrics server (a cluster-wide aggregator of resource usage data) needs to be enabled, and CPU resource requests need to be defined for the pods. You can create a definition for the horizontal pod autoscaler from the command line.

The horizontal pod autoscaler controller continuously retrieves the scaling-related metrics for the pods covered by the definition you created from the command line.

It then calculates the required number of replicas from the current and desired metric values, and updates the declared replicas to maintain the new desired state.
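
As a rough sketch, the same kind of autoscaler can also be written as a manifest; the deployment name and the replica and utilization numbers are illustrative assumptions, and the commented `kubectl autoscale` command shows the command-line equivalent.

```yaml
# Command-line equivalent (assumed deployment name):
#   kubectl autoscale deployment random-generator --cpu-percent=50 --min=1 --max=5
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: random-generator              # hypothetical autoscaler name
spec:
  scaleTargetRef:                     # the workload whose replica count is adjusted
    apiVersion: apps/v1
    kind: Deployment
    name: random-generator            # hypothetical Deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50        # target average CPU utilization across the pods
```

The controller compares the observed average CPU utilization against the 50% target and adjusts the Deployment's replica count between 1 and 5 to maintain it.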

Red Hat® OpenShift® is an enterprise-ready Kubernetes platform. It gives developers self-service environments for building applications and full-stack automated operations on any infrastructure.

Red Hat OpenShift includes all of the extra pieces of technology that make Kubernetes powerful and viable for the enterprise, including registry, networking, telemetry, security, automation, and services.

With Red Hat OpenShift, developers can make new containerized apps, host them, and deploy them in the cloud with the scalability, control, and orchestration that can turn a good idea into new business quickly and easily.
