
Remember the days when the hot buzz about Kubernetes was that it had become boring? Naturally, after becoming the default basis for container-based infrastructure, there would need to be a period where the project looked inward, reduced instability and laid the foundation for future innovation in this amazing open source project.

Those days are definitely over. That foundation has been laid, and taking a quick look at the number of alpha and beta features in Kubernetes 1.21, it’s clear that a new generation of capabilities and features has begun to take root.

At its core, Kubernetes 1.21 was really about API standardization and management, and the overall ability to better schedule work. The workload changes in this version culminate in the stable release of two major APIs: CronJobs and PodDisruptionBudgets. Much like cron on Linux, CronJob allows users to schedule work on their clusters at regular intervals.
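
To make that concrete, here is a minimal CronJob sketch using the batch/v1 API that went stable in this release; the name, schedule and busybox command are purely illustrative:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report              # illustrative name
spec:
  schedule: "0 2 * * *"             # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]

Because the schedule field uses standard cron syntax, existing crontab entries translate over directly.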

Timing is a major theme here, as Kubernetes 1.21 is bolstered by a number of alpha and beta features that greatly expand the capabilities of scheduled jobs. The alpha Memory Manager and the beta Storage Capacity Tracking for pod scheduling combine to let users reserve set amounts of RAM and storage, respectively, for a pod before a specified job is run.

So now, not only can jobs be reliably scheduled to run, they can also be configured up front to demand set amounts of RAM and storage capacity from the cluster before they run. This is particularly helpful for high-performance applications with demanding hardware requirements.
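
As a rough sketch of how those two capabilities meet in a single workload, the pod below pins a fixed amount of memory and claims storage the scheduler can account for. The names, sizes and the csi-fast StorageClass are assumptions; the Memory Manager only acts on Guaranteed QoS pods (requests equal to limits), and Storage Capacity Tracking applies to CSI volumes whose StorageClass uses WaitForFirstConsumer volume binding:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-data                # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-fast        # assumed CSI StorageClass with volumeBindingMode: WaitForFirstConsumer
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: hungry-worker               # illustrative name
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "2"                    # requests equal to limits puts the pod in the Guaranteed QoS class
        memory: 4Gi
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    persistentVolumeClaim:
      claimName: scratch-data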

Additionally, a slew of new alpha and beta features around terminating jobs and nodes is coming online in this release. The alpha Suspend/Resume Jobs feature allows exactly what it says. That clashes a bit with the “herd of cattle” mentality around containers, but deleting a job outright also removes all the metadata and state associated with it; Suspend/Resume gives users another option.

Another aspect of Suspend/Resume is that a job can now be created in a suspended state and worked on before it is made active on the cluster. This gives further options to users who might want to inject a dependency into a node before bringing the job online.
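
A minimal sketch of a suspended Job follows; the name and command are illustrative, and in 1.21 the behavior sits behind the alpha SuspendJob feature gate:

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-import                # illustrative name
spec:
  suspend: true                     # the Job controller creates no pods until this is set to false
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: import
        image: busybox
        command: ["sh", "-c", "echo importing"]

Flipping spec.suspend to false, for instance with kubectl patch job batch-import --type=merge -p '{"spec":{"suspend":false}}', releases the job and lets it start creating pods.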

Draining nodes and killing jobs also become more configurable in Kubernetes 1.21, as the beta Graceful Node Shutdown feature allows users to set a grace period for node shutdowns, giving pods time to terminate cleanly rather than being yanked immediately when the node goes down.
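
The knobs for this live in the kubelet configuration. A sketch with illustrative values, assuming the beta GracefulNodeShutdown feature gate is enabled:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 60s                # total time the kubelet delays a node shutdown so pods can terminate
shutdownGracePeriodCriticalPods: 20s    # portion of that window reserved for critical pods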

One of the stable additions in 1.21 is the Pod Disruption Budget, which allows users to limit how many pods are down simultaneously. This is particularly useful for performing rolling cluster upgrades, as a set portion of the cluster can be configured to remain online while the upgrade is occurring.
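
A minimal PodDisruptionBudget sketch using the policy/v1 API that graduated in 1.21; the name, selector and threshold are illustrative:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                     # illustrative name
spec:
  minAvailable: 2                   # keep at least two matching pods running during voluntary disruptions
  selector:
    matchLabels:
      app: web                      # assumed label on the protected pods

With this in place, a voluntary disruption such as kubectl drain is held back whenever it would leave fewer than two matching pods running.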

Indeed, a great many of these new alpha and beta features are targeted at those who need to configure their clusters for specific use cases. That doesn’t make them any less powerful, as is the case with the new beta IPv4/IPv6 dual-stack support. Clusters can now use both address families at once, rather than one or the other, as was the case before.
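
Requesting both address families happens on the Service itself, via the ipFamilyPolicy and ipFamilies fields. A sketch with an assumed app: web selector, which only takes effect on clusters provisioned with both IPv4 and IPv6:

apiVersion: v1
kind: Service
metadata:
  name: web                         # illustrative name
spec:
  selector:
    app: web
  ipFamilyPolicy: PreferDualStack   # ask for both families where the cluster supports them
  ipFamilies:
  - IPv4
  - IPv6
  ports:
  - port: 80
    protocol: TCP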

Another feature that’s in alpha in 1.21 builds on the “LoadBalancer” Service type: a new loadBalancerClass field lets administrators choose which load balancer implementation backs a given Service, rather than being bound to a single load balancer implementation per cluster.
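
A hedged sketch of how that looks on a Service; the example.com/custom-lb class name is made up and must match whatever class the chosen load balancer controller watches for:

apiVersion: v1
kind: Service
metadata:
  name: public-api                  # illustrative name
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/custom-lb   # assumed class; handled by a matching controller instead of the default
  selector:
    app: public-api
  ports:
  - port: 443
    protocol: TCP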

With all the popularity of Kubernetes, and the rapid growth of clusters around the world, it’s become obvious that managing endpoints is a bit more complicated than storing them all in a single object and querying the list. Thus the EndpointSlice API, which is GA in 1.21, splits each Service’s endpoint list across a number of smaller objects, so searching for and managing those endpoints won’t be such an arduous task.
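
For illustration, a slice backing a hypothetical web Service looks roughly like this; in practice these objects are generated and kept up to date by the control plane rather than written by hand:

apiVersion: discovery.k8s.io/v1     # the EndpointSlice API that is GA in 1.21
kind: EndpointSlice
metadata:
  name: web-abc12                   # illustrative; generated names carry a random suffix
  labels:
    kubernetes.io/service-name: web
addressType: IPv4
ports:
- name: http
  port: 80
  protocol: TCP
endpoints:
- addresses:
  - "10.0.1.17"
  conditions:
    ready: true

Slices for a given Service can then be listed with a label selector, for example kubectl get endpointslices -l kubernetes.io/service-name=web.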

