Remember the days when the hot buzz about Kubernetes was that it had become boring? Naturally, after becoming the default basis for container-based infrastructure, there would need to be a period where the project looked inward, reduced instability and laid the foundation for future innovation in this amazing open source project.
Those days are definitely over. That foundation has been laid, and taking a quick look at the amount of alpha and beta features in Kubernetes 1.21, it’s clear that a new generation of capabilities and features has begun to take root.
At its core, Kubernetes 1.21 was really about API standardization and management, and the overall ability to better schedule work. The workload changes in this version culminate in the stable release of two major APIs: CronJobs and PodDisruptionBudgets. Much like cron on Linux, CronJob lets users schedule work on their clusters at set intervals.
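With CronJob now stable, the `batch/v1` API version can be used directly. A minimal sketch of a CronJob that runs a task every night at 2 a.m. (the name, image and command here are placeholders):

```yaml
apiVersion: batch/v1        # stable as of Kubernetes 1.21
kind: CronJob
metadata:
  name: nightly-report      # hypothetical name
spec:
  schedule: "0 2 * * *"     # standard cron syntax: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
          restartPolicy: OnFailure
```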
Timing is a major theme here, as Kubernetes 1.21 is bolstered by a number of alpha and beta features that greatly expand the capabilities of scheduled jobs. The alpha Memory Manager and the beta Storage Capacity Tracking features combine to let pods be scheduled with a guaranteed allocation of RAM and onto nodes with verified available storage, respectively, before specified jobs are run.
So now, not only can jobs be reliably scheduled to run, they can also be configured to demand set amounts of RAM and storage capacity from the cluster before they run. This is a boon for high-performance applications with demanding hardware requirements.
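The Memory Manager only acts on pods in the Guaranteed QoS class, where resource requests equal limits. A minimal sketch of such a pod spec, with a hypothetical name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-pinned       # hypothetical name
spec:
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
    resources:
      requests:             # requests == limits puts the pod
        memory: "2Gi"       # in the Guaranteed QoS class,
        cpu: "2"            # which the Memory Manager requires
      limits:
        memory: "2Gi"
        cpu: "2"
```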
Additionally, a slew of new alpha and beta features around job termination come online in this release as well. The alpha Suspend/Resume Jobs feature allows exactly what it says. This clashes a bit with the “herd of cattle” mentality around containers, but terminating a job also removes all the metadata and state associated with it, while suspending preserves them. Suspend/Resume gives users another option.
Another aspect of Suspend/Resume is that a job can now be created in a suspended state, and work performed on it, before it actually begins running on the cluster. This gives further options to users who might want to set up a dependency before bringing the job online.
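Suspension is controlled by the `suspend` field on the Job spec (gated by the alpha SuspendJob feature gate in 1.21). A sketch of a Job created in a suspended state, with placeholder names; flipping `suspend` to `false` resumes it:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: staged-job          # hypothetical name
spec:
  suspend: true             # created paused; set to false to resume
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo doing work"]
      restartPolicy: Never
```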
Draining nodes and killing jobs also becomes more configurable in Kubernetes 1.21, as the beta Graceful Node Shutdown feature lets users set a grace period for node shutdowns, giving pods time to terminate cleanly rather than being yanked immediately when the node goes down.
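The grace periods are set in the kubelet's configuration file. A minimal sketch, assuming 30 seconds total for shutdown with the last 10 reserved for critical pods:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node delays shutdown to let pods terminate
shutdownGracePeriod: 30s
# Portion of that time reserved for system-critical pods
shutdownGracePeriodCriticalPods: 10s
```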
One of the stable additions in 1.21 is the PodDisruptionBudget, which allows users to limit how many of an application's pods can be down simultaneously. This is particularly useful when performing rolling cluster upgrades, as a set portion of the application can be configured to remain online while the upgrade is occurring.
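PodDisruptionBudget graduates to the stable `policy/v1` API in 1.21. A sketch of a budget (label and name are placeholders) that keeps at least two matching pods running through voluntary disruptions such as node drains:

```yaml
apiVersion: policy/v1       # stable as of Kubernetes 1.21
kind: PodDisruptionBudget
metadata:
  name: web-pdb             # hypothetical name
spec:
  minAvailable: 2           # at least 2 matching pods must stay up
  selector:
    matchLabels:
      app: web              # placeholder label
```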
Indeed, a great many of these new alpha and beta features are targeted at those who need to configure their clusters for specific use cases. That doesn’t mean they are any less powerful, as is the case with the new beta IPv4/IPv6 dual-stack support. Clusters can now utilize both address families, rather than one or the other, as was the case before.
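Dual-stack behavior is requested per Service via the `ipFamilyPolicy` and `ipFamilies` fields (beta in 1.21). A sketch of a Service, with placeholder selector and port, asking for both address families where the cluster supports them:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dual-stack-svc      # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack   # use both families if available
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: web                # placeholder label
  ports:
  - port: 80
```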
Another feature that’s in alpha in 1.21 extends the “LoadBalancer” Service type. Administrators can now specify which load balancer implementation a given Service should use, rather than being bound to a single load balancer implementation per cluster.
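This is done through the `loadBalancerClass` field on the Service spec (alpha in 1.21, behind the ServiceLoadBalancerClass feature gate). A sketch, where the class name is a hypothetical identifier a custom load balancer controller would watch for:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-lb-svc     # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-lb   # placeholder class;
                            # skipped by the default cloud provider
  selector:
    app: web                # placeholder label
  ports:
  - port: 443
```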
With all the popularity of Kubernetes, and the rapid growth of clusters around the world, it’s become obvious that managing endpoints is more complicated than storing them all in a single object and querying the list. Thus the EndpointSlice API, which is GA in 1.21, slices a Service’s endpoint list across a number of smaller objects, so searching for and managing those endpoints won’t be such an arduous task.
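Slices are grouped back to their Service by the `kubernetes.io/service-name` label. A sketch of a single slice from the now-stable `discovery.k8s.io/v1` API, with placeholder names and addresses:

```yaml
apiVersion: discovery.k8s.io/v1   # GA as of Kubernetes 1.21
kind: EndpointSlice
metadata:
  name: web-abc12             # hypothetical, usually controller-generated
  labels:
    kubernetes.io/service-name: web   # links the slice to its Service
addressType: IPv4
endpoints:
- addresses:
  - "10.0.0.5"                # placeholder pod IP
  conditions:
    ready: true
ports:
- name: http
  port: 80
  protocol: TCP
```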