Overview
A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.
At a minimum, a cluster contains a control plane and one or more compute machines, or nodes. The control plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. Nodes actually run the applications and workloads.
The cluster is the heart of Kubernetes’ key advantage: the ability to schedule and run containers across a group of machines, be they physical or virtual, on premises or in the cloud. Kubernetes containers aren’t tied to individual machines. Rather, they’re abstracted across the cluster.
How do you work with a Kubernetes cluster?
A Kubernetes cluster has a desired state, which defines which applications or other workloads should be running, along with which images they use, which resources should be made available for them, and other such configuration details.
A desired state is defined by configuration files made up of manifests: JSON or YAML files that declare the type of application to run and how many replicas are required for a healthy system.
The cluster’s desired state is set through the Kubernetes API, either from the command line (using kubectl) or by calling the API directly to create or modify that state.
Kubernetes will automatically manage your cluster to match the desired state. As a simple example, suppose you deploy an application with a desired state of "3," meaning 3 replicas of the application should be running. If 1 of those containers crashes, Kubernetes will see that only 2 replicas are running, so it will add 1 more to satisfy the desired state.
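A minimal Deployment manifest expressing that desired state might look like the following sketch. The application name my-app and the nginx:1.25 image are placeholders, not part of any particular setup.

```yaml
# deployment.yaml -- an illustrative Deployment manifest (names and image are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # desired state: 3 copies of the pod should be running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25    # placeholder container image
        ports:
        - containerPort: 80
```

Applying this file (for example, with kubectl apply -f deployment.yaml) records the desired state in the cluster; the control plane then continually reconciles what is actually running toward it.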
You can also use Kubernetes patterns to manage the scale of your cluster automatically based on load.
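One common pattern is the Horizontal Pod Autoscaler, which adjusts a workload's replica count based on observed metrics. Here is a minimal sketch, assuming the hypothetical my-app Deployment from the previous example and a metrics source such as the Kubernetes metrics server:

```yaml
# hpa.yaml -- illustrative HorizontalPodAutoscaler for the hypothetical my-app Deployment
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:            # the workload whose replica count is managed
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU utilization stays above ~70%
```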
How does a cluster relate to a node, a pod, and other Kubernetes terms?
We’ve defined a cluster as a set of nodes. Let’s look at a few other Kubernetes terms that are helpful to understanding what a cluster does.
Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.
Nodes: These machines perform the tasks assigned by the control plane.
Pod: A set of 1 or more containers deployed to a single node. A pod is the smallest and simplest Kubernetes object.
Service: A way to expose an application running on a set of pods as a network service. This decouples work definitions from the pods; the sketch after this list shows a service selecting pods by label.
Volume: A directory containing data, accessible to the containers in a pod. A Kubernetes volume has the same lifetime as the pod that encloses it. A volume outlives any containers that run within the pod, and data is preserved when a container restarts.
Namespace: A virtual cluster. Namespaces allow Kubernetes to run multiple virtual clusters (for example, one per team or project) on the same physical cluster.
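To see how a few of these pieces fit together, here is a hedged sketch of a pod with an ephemeral volume and a service that exposes it. The names, labels, and ports are illustrative only.

```yaml
# pod-and-service.yaml -- illustrative only; names, labels, and ports are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app              # the service below selects pods by this label
spec:
  containers:
  - name: web
    image: nginx:1.25        # placeholder image
    ports:
    - containerPort: 80
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}             # ephemeral volume; exists for the lifetime of the pod
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app              # routes traffic to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
```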
What is Kubernetes cluster management?
With modern cloud-native applications, Kubernetes environments are becoming highly distributed. They can be deployed across multiple on-premises datacenters, in the public cloud, and at the edge.
Organizations that want to use Kubernetes at scale or in production will have multiple clusters (for example, for development, testing, and production) distributed across environments, and they need to be able to manage all of them effectively.
Kubernetes cluster management is how an IT team manages a group of Kubernetes clusters.
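At the level of individual tools, managing multiple clusters often starts with a kubeconfig file that defines one context per cluster, so a client such as kubectl can switch between them. The cluster names, server URLs, and users below are placeholders:

```yaml
# kubeconfig sketch -- cluster names, server URLs, and users are placeholders
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster
  cluster:
    server: https://dev.example.com:6443
- name: prod-cluster
  cluster:
    server: https://prod.example.com:6443
users:
- name: dev-user
  user: {}                   # credentials omitted in this sketch
- name: prod-user
  user: {}
contexts:
- name: dev
  context:
    cluster: dev-cluster
    user: dev-user
- name: prod
  context:
    cluster: prod-cluster
    user: prod-user
current-context: dev         # switch with: kubectl config use-context prod
```

Dedicated cluster management tools go further, handling provisioning, policy, and workload placement across whole fleets of clusters rather than one context at a time.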
Why choose Red Hat OpenShift for Kubernetes?
Red Hat is a leader and active builder of open source container technology, including Kubernetes, and creates essential tools for securing, simplifying, and automatically updating your container infrastructure.
Red Hat® OpenShift® is a unified platform to build, modernize, and deploy applications at scale. With Red Hat OpenShift, teams gain a single, integrated platform for operations and development teams. Developer-friendly workflows, including built-in CI/CD pipelines and source-to-image capability, enable you to go straight from application code to container. Built on Kubernetes, Red Hat OpenShift helps you work smarter and faster with a complete set of services for bringing apps to market on your choice of infrastructure.
Along with OpenShift, you can use Red Hat Advanced Cluster Management and Red Hat Ansible® Automation Platform together to efficiently deploy and manage multiple Kubernetes clusters across regions, including public cloud, on-premises, and edge environments.