
What is the Kubernetes API?


The Kubernetes API is the front end of the Kubernetes control plane and is how users interact with their Kubernetes cluster. The API (application programming interface) server determines if a request is valid and then processes it.

In essence, the API is the interface used to manage, create, and configure Kubernetes clusters. It's how the users, external components, and parts of your cluster all communicate with each other.

At the center of the Kubernetes control plane is the API server and the HTTP API that it exposes, allowing you to query and manipulate the state of Kubernetes objects. 
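As an illustrative sketch, you can reach those HTTP endpoints either through kubectl or directly. The commands below assume a running cluster that your local kubectl is already configured to talk to:

```shell
# List the Pod objects in the "default" namespace via kubectl,
# which wraps a GET request to the API server.
kubectl get pods --namespace default

# Issue the equivalent request against the raw HTTP API;
# kubectl handles authentication and TLS for you.
kubectl get --raw /api/v1/namespaces/default/pods

# Or proxy the API server to localhost and query it with curl.
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods
```

Every client, from kubectl to the cluster's own controllers, ultimately speaks to this same HTTP API.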

To understand the full context of the Kubernetes API, let’s back up and take a high-level look at what Kubernetes is. 

Kubernetes is an open source platform for orchestrating containers. A container is a technology that lets you bundle and isolate an application with its entire runtime environment, so it's easy to move the contained application between stages (development, production, etc.) and environments (on-premises, public cloud, private cloud, hybrid cloud, or multicloud) while retaining full functionality.

So as a container orchestration platform, Kubernetes automates a lot of the manual processes involved in managing, deploying, and scaling containerized apps.

By grouping together the machines (physical or virtual servers known as "nodes") running the containerized apps, you create a cluster, which you then manage and orchestrate with Kubernetes.

A group of containers running on a single machine or node and sharing resources is known as a "pod." A pod can also hold just one container, in which case "pod" and "container" refer to essentially the same thing.
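As a sketch, a single-container pod can be declared with a manifest like the one below; the name and image are illustrative, not from the original text:

```yaml
# A minimal Pod with one container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
spec:
  containers:
  - name: web              # the pod's single container
    image: nginx:1.25      # illustrative image
    ports:
    - containerPort: 80
```

Submitting this manifest with `kubectl apply -f pod.yaml` asks the API server to create the pod on a node in the cluster.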

A Kubernetes cluster has two parts: the control plane and the worker nodes, where applications run. The control plane is where we find the API server, which is how the user interacts with the cluster and tells it what to do, typically from the command line using kubectl (a command line tool). Through the API, end users, the cluster itself, and external components can communicate with each other.

Each cluster has a desired state, which defines which apps or workloads should be running, as well as other configuration details such as which images they use and which resources they need. You set or modify the desired state of a cluster through the API, either with kubectl or by calling the API directly.

Kubernetes continuously reconciles the cluster's actual state with the desired state. Because Kubernetes is declarative, it self-manages and self-heals based on the parameters you give its workloads.
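For instance, a Deployment manifest captures a desired state: which image to run and how many replicas to keep alive. This is a hedged sketch; the names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment     # illustrative name
spec:
  replicas: 3              # desired state: three pods at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # illustrative image
```

After `kubectl apply -f deployment.yaml`, if a pod crashes or a node fails, Kubernetes notices that the actual state has drifted from the desired three replicas and starts a replacement automatically.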

A Kubernetes operator is a method of packaging, deploying, and managing an app by using the Kubernetes API and kubectl tooling.

In Kubernetes, an operator is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of complex applications on behalf of the user. By including domain or app-specific information, an operator makes it possible for Kubernetes to automate the entire life cycle of the software it manages.
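As a sketch of the idea, an operator typically defines a custom resource so that users can declare app-specific desired state through the ordinary Kubernetes API. The kind and fields below are hypothetical:

```yaml
# A hypothetical custom resource that an operator might watch.
apiVersion: example.com/v1alpha1
kind: PostgresCluster        # hypothetical kind, defined by a CRD
metadata:
  name: orders-db
spec:
  version: "16"              # app-specific setting the operator understands
  replicas: 2                # the operator handles replication and failover
```

A user applies this resource like any built-in object, and the operator's controller translates it into the low-level pods, volumes, and services the application needs.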

As a leader of open source container technology and a creator of multiple tools and products to manage your container infrastructure, Red Hat helps bring Kubernetes and containers to your enterprise.

With Red Hat® OpenShift®, you get an open source container platform that is enterprise-ready, with everything you need to manage hybrid cloud and multicloud deployments. We can help you transition your business to the cloud, while still getting the most from your current infrastructure. 

As part of a single, integrated platform, developers can choose the language, middleware, frameworks, and databases, while also deploying automation to increase efficiency and productivity. 
