Overview
A Kubernetes pod is a collection of one or more Linux® containers, and is the smallest unit of a Kubernetes application. Any given pod can be composed of multiple, tightly coupled containers (an advanced use case) or just a single container (a more common use case). Containers are grouped into Kubernetes pods in order to increase the intelligence of resource sharing, as described below.
Within the Kubernetes system, containers in the same pod will share the same compute resources. These compute resources are pooled together in Kubernetes to form clusters, which can provide a more powerful and intelligently distributed system for executing applications. The pieces of Kubernetes, from containers to pods and nodes to clusters, can be challenging to understand at first, but those most relevant to the benefits of Kubernetes pods break down as follows:
Hardware units
Node: the smallest unit of computing hardware in Kubernetes, easily thought of as one individual machine.
Cluster: a collection of nodes that are grouped together to provide intelligent resource sharing and balancing.
Software units
Linux container: a set of one or more processes, packaged with all the files needed to run them, making them portable across machines.
Kubernetes pod: a collection of one or more Linux containers, packaged together to maximize the benefits of resource sharing via cluster management.
In essence, an individual piece of hardware is represented in Kubernetes as a node. Multiple nodes are collected into clusters, allowing compute power to be distributed as needed. Running on those clusters are pods, which ensure that any tightly coupled containers within them run together on the same node in the cluster.
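To make this concrete, here is a minimal sketch of a pod definition, the kind of manifest you would apply to a cluster. The pod name, labels, and nginx image are placeholders chosen for illustration, not part of any particular application:

```yaml
# A minimal single-container pod (the common case described above).
# Name and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-web
  labels:
    app: example-web
spec:
  containers:
  - name: web
    image: nginx:1.25     # any container image works here
    ports:
    - containerPort: 80   # port exposed inside the pod's network namespace
```

Applying a manifest like this (for example with kubectl apply -f pod.yaml) asks the cluster to schedule the pod onto one of its nodes, where all of its containers will run together.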
Why does Kubernetes use pods?
The relationship of pods to clusters is why Kubernetes does not run containers directly, instead running pods to ensure that each container within them shares the same resources and local network. Grouping containers in this way allows them to communicate with each other as if they shared the same physical hardware, while still remaining isolated to some degree.
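A sketch of that shared local network, again with placeholder names and images: because both containers below belong to the same pod, the sidecar can reach the web server over localhost rather than over a cluster-wide address.

```yaml
# Two tightly coupled containers in one pod share the same network
# namespace, so they can talk to each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.25     # serves HTTP on port 80 inside the pod
  - name: health-probe
    image: busybox:1.36
    # Placeholder sidecar: polls the web container over the pod's
    # shared loopback interface once a minute.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 60; done"]
```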
This organization of containers into pods is the basis for one of Kubernetes’ well-known features: replication. When containers are organized into pods, Kubernetes can use replication controllers to horizontally scale an application as needed. In effect, this means that if a single pod becomes overloaded, Kubernetes can automatically replicate it and deploy it to the cluster. In addition to supporting healthy functioning during periods of heavy load, Kubernetes pods are often replicated continuously to provide failure resistance to the system.
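As an illustration of that replication mechanism, here is a sketch of a replication controller that keeps three identical copies of a pod running; the names, labels, and image are placeholders, and newer applications typically use a Deployment for the same purpose.

```yaml
# A ReplicationController that maintains three replicas of the pod
# defined in its template.
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rc
spec:
  replicas: 3                 # Kubernetes keeps three copies of the pod running
  selector:
    app: example-web          # pods matching this label count toward the replicas
  template:                   # pod template used to create new replicas
    metadata:
      labels:
        app: example-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```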
What are Kubernetes patterns?
Maximizing the benefit of reusable elements, like pods, is a core benefit of the Kubernetes system. It can take years of trial and error to discover the best uses of Kubernetes in production environments—years that most organizations do not have in the age of rapidly deployed cloud-native applications.
However, because of the open standards foundation that Kubernetes is built on, patterns of success (and failure) have emerged through the trial and error of early adopters. These patterns offer replicable designs that many organizations can use to speed up their early adoption efforts.
Written by authors Bilgin Ibryam and Roland Huß and provided through O’Reilly, Kubernetes Patterns: Reusable Elements for Designing Cloud-Native Applications offers a detailed presentation of common reusable elements, patterns, principles, and practices for designing and implementing cloud-native applications on Kubernetes.