What is a Kubernetes pod?

A Kubernetes pod is a collection of one or more Linux® containers, and is the smallest unit of a Kubernetes application. Any given pod can be composed of multiple, tightly coupled containers (an advanced use case) or just a single container (a more common use case). Containers are grouped into Kubernetes pods in order to increase the intelligence of resource sharing, as described below.

Within the Kubernetes system, containers in the same pod share the same compute resources. These compute resources are pooled together in Kubernetes to form clusters, which can provide a more powerful and intelligently distributed system for executing applications. The pieces of Kubernetes, from containers to pods and nodes to clusters, can be challenging to understand at first, but the pieces most relevant to understanding the benefits of Kubernetes pods break down as follows:

Hardware units

Node: the smallest unit of computing hardware in Kubernetes, easily thought of as one individual machine.

Cluster: a collection of nodes grouped together to provide intelligent resource sharing and balancing.

Software units

Linux container: a set of one or more processes, packaged with all the files necessary to run, making it portable across machines.

Kubernetes pod: a collection of one or more Linux containers, packaged together to maximize the benefits of resource sharing via cluster management.

In essence, individual hardware is represented in Kubernetes as a node. Those nodes are collected into clusters, allowing compute power to be distributed as needed. Pods run on those clusters, and because a pod is always scheduled as a unit, any tightly coupled containers within it run together on the same node.
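
To make that concrete, the manifest below is a minimal sketch of the common single-container case. The names and image (hello-pod, nginx) are placeholders chosen for illustration, not a recommended configuration.

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod            # hypothetical name for this example
  labels:
    app: hello
spec:
  containers:
    - name: web              # the single container that makes up this pod
      image: nginx:1.25      # placeholder image; any container image works
      ports:
        - containerPort: 80

Applying a manifest like this (for example, with kubectl apply -f pod.yaml) asks Kubernetes to schedule the pod, and the container it wraps, onto one node in the cluster.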

The relationship of pods to clusters is why Kubernetes does not run containers directly, instead running pods to ensure that each container within them shares the same resources and local network. Grouping containers in this way allows them to communicate with each other as if they shared the same physical hardware, while still remaining isolated to some degree.
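
As a hedged sketch of that shared local network, the pod below runs two containers in one network namespace, so the sidecar can reach the main application on localhost. The names, images, and port are assumptions made for the example.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar                            # hypothetical example name
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0        # placeholder image
      ports:
        - containerPort: 8080
    - name: log-forwarder                           # tightly coupled helper container
      image: registry.example.com/log-forwarder:1.0 # placeholder image
      # Because both containers share the pod's network namespace,
      # this container can reach the app at localhost:8080.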

This organization of containers into pods is the basis for one of Kubernetes’ well-known features: replication. When containers are organized into pods, Kubernetes can use replication controllers to horizontally scale an application as needed. In effect, this means that if a single pod becomes overloaded, Kubernetes can automatically replicate it and deploy it to the cluster. In addition to supporting healthy functioning during periods of heavy load, Kubernetes pods are also often replicated continuously to provide failure resistance to the system.
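
As a sketch of that replication, the manifest below requests three identical replicas of a pod template; a Deployment (the modern successor to the original replication controller) keeps that count satisfied, replacing failed pods and spreading load across the copies. Names and images are again placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment     # hypothetical example name
spec:
  replicas: 3                # Kubernetes keeps three copies of the pod running
  selector:
    matchLabels:
      app: hello
  template:                  # the pod template that gets replicated
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80

Scaling under heavier load is then a matter of raising the replica count, either by editing the manifest or with a command such as kubectl scale deployment hello-deployment --replicas=5.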

There's a lot more to do with Kubernetes.

Maximizing the value of reusable elements, like pods, is a core benefit of the Kubernetes system. It can take years of trial and error to discover the best uses of Kubernetes in production environments—years that most organizations do not have in the age of rapidly deployed cloud-native applications.

However, because of the open standards foundation that Kubernetes is built on, patterns of success (and failure) have emerged through the trial and error of early adopters. These patterns offer replicable designs that many organizations can use to speed up their early adoption efforts.

Written by Bilgin Ibryam and Roland Huß and provided through O’Reilly, Kubernetes Patterns: Reusable Elements for Designing Cloud-Native Applications offers a detailed look at common reusable elements, patterns, principles, and practices for designing and implementing cloud-native applications on Kubernetes.

