What is a Kubernetes cluster?

A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.

At a minimum, a cluster contains a worker node and a master node. The master node is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. Worker nodes actually run the applications and workloads.

The cluster is the heart of Kubernetes’ key advantage: the ability to schedule and run containers across a group of machines, be they physical or virtual, on premises or in the cloud. Kubernetes containers aren’t tied to individual machines. Rather, they’re abstracted across the cluster.

How do you work with a Kubernetes cluster?

A Kubernetes cluster has a desired state that defines which applications or other workloads should be running, the container images they use, the resources made available to them, and other configuration details.

The desired state is defined by configuration files made up of manifests: JSON or YAML files that declare the type of application to run and how many replicas are required for a healthy system.
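
As a minimal sketch, the Deployment manifest below declares that 3 replicas of an application should be running. The name my-app and the image tag my-app:1.0 are hypothetical placeholders:

```yaml
# deployment.yaml -- a sketch of a manifest declaring desired state.
# The application name and image (my-app, my-app:1.0) are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # desired number of copies of the application
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0    # container image the workload runs
          ports:
            - containerPort: 8080
```

Applying this manifest makes the 3-replica Deployment part of the cluster’s desired state.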

You set or modify the cluster’s desired state through the Kubernetes API, either from the command line (using kubectl) or by calling the API directly.

Kubernetes will automatically manage your cluster to match the desired state. As a simple example, suppose you deploy an application with a desired state of "3," meaning 3 replicas of the application should be running. If 1 of those containers crashes, Kubernetes will see that only 2 replicas are running, so it will add 1 more to satisfy the desired state.

You can also use Kubernetes patterns, such as autoscaling, to manage the scale of your applications automatically based on load.
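
One such pattern is the Horizontal Pod Autoscaler, which adjusts a workload’s replica count based on observed resource usage. Here is a minimal sketch, again targeting the hypothetical my-app Deployment from the earlier example:

```yaml
# hpa.yaml -- a sketch of a HorizontalPodAutoscaler for the hypothetical
# my-app Deployment shown earlier.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3               # never drop below the baseline replica count
  maxReplicas: 10              # upper bound under heavy load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add replicas when average CPU exceeds 80%
```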

How does a cluster relate to a node, a pod, and other Kubernetes terms?

We’ve defined a cluster as a set of nodes. Let’s look at a few other Kubernetes terms that are helpful for understanding what a cluster does; a short manifest sketch tying several of them together follows the list.

Master node: The machine that controls Kubernetes nodes. This is where all task assignments originate.

Worker nodes: These machines perform the tasks assigned to them. The Kubernetes master node controls them.

Pod: A set of 1 or more containers deployed to a single node. A pod is the smallest and simplest Kubernetes object.

Service: A way to expose an application running on a set of pods as a network service. This decouples work definitions from the pods.

Volume: A directory containing data, accessible to the containers in a pod. A Kubernetes volume has the same lifetime as the pod that encloses it. A volume outlives any containers that run within the pod, and data is preserved when a container restarts.

Namespace: A virtual cluster. Namespaces allow Kubernetes to manage multiple clusters (for multiple teams or projects) within the same physical cluster.
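
To make these terms concrete, here is a minimal sketch that ties several of them together: a Namespace, a Pod with a volume, and a Service that exposes the pod. All names and the image are hypothetical placeholders:

```yaml
# A sketch combining a Namespace, a Pod with a Volume, and a Service.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: demo
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: my-app:1.0
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: scratch
          mountPath: /data     # where the volume appears inside the container
  volumes:
    - name: scratch
      emptyDir: {}             # lives as long as the pod; survives container restarts
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: demo
spec:
  selector:
    app: my-app                # routes traffic to pods with this label
  ports:
    - port: 80
      targetPort: 8080
```

Because the Service selects pods by label, pods can be replaced or rescheduled without changing how clients reach the application.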

Why choose Red Hat OpenShift for Kubernetes?

Red Hat is a leader and active builder of open source container technology, including Kubernetes, and creates essential tools for securing, simplifying, and automatically updating your container infrastructure. 

Red Hat® OpenShift® is an enterprise-grade Kubernetes distribution. It gives operations and development teams a single, integrated platform, and it offers developers their choice of languages, frameworks, middleware, and databases, along with build and deploy automation through CI/CD to supercharge productivity.

Discover Red Hat OpenShift