Congratulations to the Kubernetes community on reaching Kubernetes 1.20. It is another solid release from the community, one that helps drive the year-over-year growth of Kubernetes in production (use of Kubernetes in production has increased to 83%, up from 78% last year). In this release, the community worked on 42 enhancements (16 Alpha, 15 Beta, 11 Stable). After looking closely at all of these enhancements, we wanted to highlight a few important ones.

For a full look into all the features of the release, visit the upstream release notes.

Node and Scheduling

In Kubernetes 1.20, support for third-party device monitoring plug-ins has graduated to stable. This provides cluster admins with container-level metrics for devices provided by device plug-ins, and it allows device vendors to provide device-specific metrics without contributing to core Kubernetes.

For slow-starting containers that would otherwise be killed before they are up, or left deadlocked for a very long time before termination, Add pod-startup liveness-probe holdoff for slow-starting pods introduces a startup probe that holds off the other probes until the pod has finished starting up.
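
As an illustration, here is a minimal sketch of a startup probe paired with a liveness probe; the image name, endpoint, and thresholds are hypothetical and should be tuned to the workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-app
spec:
  containers:
  - name: app
    image: registry.example.com/slow-start-app:latest  # hypothetical image
    ports:
    - containerPort: 8080
    # The startup probe holds off the liveness probe until it succeeds,
    # giving the container up to 30 * 10s = 300s to come up.
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    # Once startup succeeds, the liveness probe takes over on a tight period.
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
```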

Cluster admins now have the ability to isolate PID resources pod-to-pod and node-to-node with Pid Limiting. This feature has graduated to stable in this release.
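
A minimal KubeletConfiguration sketch showing both knobs; the limits here are arbitrary example values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pod-to-pod isolation: no single pod may consume more than this many PIDs.
podPidsLimit: 4096
# Node-to-node isolation: reserve PIDs for OS and Kubernetes daemons
# so a runaway pod cannot starve them.
systemReserved:
  pid: "1000"
kubeReserved:
  pid: "1000"
```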

There is an ongoing effort, tracked in Kubelet Feature: Disable AcceleratorUsage Metrics, to deprecate the kubelet-collected accelerator usage metrics and replace them with the PodResources API.

The kubelet has historically not respected timeouts on exec probes, so Fixing Kubelet Exec Probe Timeouts was introduced in this release. With this feature, exec probe timeouts are actually enforced, and nodes can be configured to preserve the old behavior while proper timeouts are rolled out for exec probes.
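
For example, an exec probe like the following now genuinely fails after timeoutSeconds instead of running indefinitely; the probe command is a hypothetical health check:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-timeout-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "test -f /tmp/healthy"]  # hypothetical check
      timeoutSeconds: 5   # now enforced; previously exec probes could run past this
      periodSeconds: 10
```

Clusters that still depend on the old behavior can temporarily disable the kubelet's ExecProbeTimeout feature gate while workloads are corrected.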

An increasing number of systems leverage a combination of CPUs and hardware accelerators to support latency-critical execution and high-throughput parallel computation. These include workloads in fields such as telecommunications, scientific computing, machine learning, financial services, and data analytics. Such hybrid systems comprise a high-performance environment. In order to extract the best performance, optimizations related to CPU isolation and memory and device locality are required. However, in Kubernetes, these optimizations are handled by a disjointed set of components. Node Topology Manager provides a mechanism to coordinate fine-grained hardware resource assignments for different components in Kubernetes.
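
Topology Manager is configured on the kubelet; here is a sketch of a single-NUMA-node policy, including the pod-level alignment scope added in 1.20:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Require CPU, device, and memory hints to align on a single NUMA node.
topologyManagerPolicy: single-numa-node
# Align resources for the pod as a whole rather than per container.
topologyManagerScope: pod
# A static CPU manager policy is needed so exclusive CPU assignments
# participate in topology alignment.
cpuManagerPolicy: static
```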

Add a configurable default constraint to PodTopologySpread was introduced in 1.19 and graduated to beta in this release. With this feature, cluster operators can set default spreading constraints for workloads in the cluster.
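
Default constraints are set in the scheduler configuration; a minimal sketch, where the zone key and skew value are example choices:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: PodTopologySpread
    args:
      # Applied to any pod that does not define its own topologySpreadConstraints.
      defaultConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
      defaultingType: List
```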

CronJobs (previously ScheduledJobs) are useful for running periodic tasks with cron-like scheduling in a Kubernetes cluster. This feature graduated to beta in this release.
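
A minimal CronJob sketch using the beta API; the schedule and image are placeholders:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"   # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: registry.example.com/report-generator:latest  # hypothetical image
          restartPolicy: OnFailure
```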

Storage

A long-running alpha feature finally graduated to stable in this release: Snapshot / Restore Volume Support for Kubernetes (CRD + External Controller) provides the VolumeSnapshot and VolumeSnapshotContent APIs, which give cluster administrators snapshot and restore capability.
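
With the API now stable, a snapshot can be requested declaratively; the class and claim names below are hypothetical:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass  # hypothetical snapshot class
  source:
    persistentVolumeClaimName: data-pvc            # hypothetical existing PVC
```

Restoring is done by creating a new PersistentVolumeClaim whose dataSource references the VolumeSnapshot.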

The Skip Volume Ownership Change feature allows volume ownership and permission changes to be skipped during mount, so that applications that are sensitive to changing permission bits keep working properly. It also helps when a volume holds a large number of files, where a recursive chown and chmod can be slow or time out; skipping the permission change mitigates the slowness and timeouts.
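
This behavior is requested with the pod-level fsGroupChangePolicy field, sketched below; the claim name is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 2000
    # Skip the recursive chown/chmod when the volume root already has the
    # expected ownership and permissions.
    fsGroupChangePolicy: OnRootMismatch
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc  # hypothetical PVC
```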

Allow CSI drivers to opt-in to volume ownership change adds a new field called CSIDriver.Spec.FSGroupPolicy that lets a driver declare whether it supports volume ownership modifications via fsGroup.
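
A sketch of a CSIDriver object declaring its policy; the driver name is hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.example.com   # hypothetical driver name
spec:
  # File: Kubernetes may change ownership/permissions via fsGroup.
  # None: the driver handles (or does not need) ownership changes itself.
  fsGroupPolicy: File
```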

Network

Allow users to set a pod’s hostname to its Fully Qualified Domain Name (FQDN) gives users the ability to set the hostname field of the kernel to the FQDN of a Pod.
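
A sketch of the new pod field; the hostname and subdomain (which should match a headless Service) are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
spec:
  hostname: app-0
  subdomain: app-svc        # hypothetical headless Service name
  # The kernel hostname becomes app-0.app-svc.<namespace>.svc.<cluster-domain>
  # instead of just the short name app-0.
  setHostnameAsFQDN: true
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```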

SCTP support for Services, Pod, Endpoint, and NetworkPolicy was introduced as alpha in Kubernetes 1.12 and has now graduated to stable in this release. Services, Pods, Endpoints, and NetworkPolicy now support the SCTP protocol alongside TCP and UDP.
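
Declaring an SCTP port now works just like TCP or UDP; the selector and port are example values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sctp-service
spec:
  selector:
    app: sctp-app     # hypothetical pod label
  ports:
  - protocol: SCTP    # previously only TCP and UDP were stable options
    port: 9999
    targetPort: 9999
```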

Kubernetes does not have a standardized way of representing application protocols. When a protocol is specified, it must be one of TCP, UDP, or SCTP. The AppProtocol concept was added to allow an application protocol to be specified for each port. In this release, Adding AppProtocol to Services and Endpoints has graduated to stable.
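
For example, a Service port can now carry an application-protocol hint that implementations such as load balancers or service meshes can act on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # hypothetical pod label
  ports:
  - name: https
    port: 443
    targetPort: 8443
    appProtocol: https  # free-form hint; not interpreted by Kubernetes itself
```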

API and Auth

Deprecate and remove SelfLink graduated to beta in this release and will become stable in Kubernetes 1.21. This eliminates the performance impact of setting the SelfLink field on every object.

The goal of Built-in API Types Defaults is to add a new // +default marker to the existing built-in Go IDL. The marker is transformed into the OpenAPI default field and then routed to defaulting functions, so that defaulting can be done declaratively.
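
As a sketch of the syntax, the marker is written as a comment tag on a built-in type's field; the type and field below are hypothetical, not actual Kubernetes API types:

```go
// FooSpec is a hypothetical built-in API type, used only to
// illustrate the marker syntax.
type FooSpec struct {
	// replicas is the desired number of instances.
	// The +default marker below is surfaced as the OpenAPI default
	// for this field and drives declarative defaulting.
	// +default=1
	Replicas *int32 `json:"replicas,omitempty"`
}
```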

Today the apiserver has a simple mechanism for protecting itself against CPU and memory overloads: max-in-flight limits for mutating and for read-only requests. However, one subset of the request load can crowd out other parts of the request load. To prevent this, Priority and Fairness for API Server Requests was introduced in Kubernetes 1.18 and has graduated to beta in this release.
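
With the beta API, operators segment traffic using FlowSchema and PriorityLevelConfiguration objects. The sketch below routes a batch service account's Job traffic to a lower priority level; all names are hypothetical, and the referenced priority level is assumed to exist:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: batch-jobs-low-priority
spec:
  priorityLevelConfiguration:
    name: workload-low        # assumed existing PriorityLevelConfiguration
  matchingPrecedence: 1000    # lower values are matched first
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: batch-runner    # hypothetical service account
        namespace: batch
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["batch"]
      resources: ["jobs"]
      namespaces: ["*"]
```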

Kubernetes workloads can consume a variety of services from a variety of producers. They have a native identity (the Kubernetes service account, or KSA), presented in a widely compatible format (JWT). But only an API server can authenticate a KSA token, since only the API server has access to the public key that verifies the signature. If services want to authenticate workloads using KSAs today, the API server must serve every authentication request, which can overload it. To solve this problem, Provide OIDC discovery for service account token issuer was introduced in 1.18, has graduated to beta in this release, and is targeted to go stable in 1.21.

Many users already use key management/protection systems, such as Key Management Systems (KMS), Trusted Platform Modules (TPM), or Hardware Security Modules (HSM). Others might use authentication providers based on short-lived tokens. To support these, External client-go credential providers was introduced in 1.10 and has now graduated to stable in this release.
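
Credential providers are wired in through the kubeconfig exec stanza; a minimal sketch in which the helper binary and environment variable are hypothetical:

```yaml
apiVersion: v1
kind: Config
users:
- name: sso-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      # Hypothetical plugin binary that prints an ExecCredential with a token.
      command: example-credential-helper
      args: ["get-token"]
      env:
      - name: IDP_URL
        value: https://idp.example.com   # hypothetical identity provider
```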

Kubernetes already provisions JWTs to workloads. This functionality is on by default and thus widely deployed, but the current workload JWT system has serious security and scalability issues. TokenRequest API and Kubelet integration provides a mechanism for provisioning Kubernetes service account tokens that is compatible with current security and scalability requirements.
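
Pods consume these bounded tokens through a projected service account token volume; the audience and expiry below are example values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: default
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          audience: vault.example.com  # hypothetical intended audience
          expirationSeconds: 7200      # kubelet rotates the token before expiry
          path: sa-token
```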

Conclusion

Kubernetes continues to grow as the industry's choice of container orchestration engine (Cloud Native Survey 2020). Every release adds enhancements for compute, storage, network, and security. We look forward to what's next in Kubernetes 1.21.

