Congratulations to the Kubernetes community on reaching Kubernetes 1.19. The community, a truly global one, has come together in these difficult times to produce yet another solid release. 1.19 continues the steady march toward greater stability for the production use of Kubernetes, while adding important incremental features that support new use cases and workloads and improve ease of use for existing ones.

We at Red Hat and the OpenShift team continue to participate in this great modern experiment. We certainly love community-led technical and engineering collaboration done the open source way: within the CNCF, the Kubernetes community shepherds new proposals from alpha to beta to stable, with code changes and pull requests all subject to a great process of review and feedback.

Changes Introduced at the Kubernetes API Level, Led by SIG API Machinery

One of the most consequential proposals going stable in Kube 1.19 is the ability to automatically track and act on the transition from beta to stable. The goal is to prevent APIs from staying in beta for an extended period of time. Beta APIs are now required to carry tags indicating the release in which they were introduced. An API that stays in beta is deprecated three releases later and then removed three releases after that.

While many Kubernetes APIs have .status.conditions, the schema of a condition varies a lot between them. The Standardized Conditions feature allows consumers of APIs to expect a common schema for .status.conditions and to share Golang logic for the common Get, Set, and Is operations on .status.conditions. The schema is going stable. Note that the goal is not to go back and change all existing APIs, but rather to make this available for new APIs going forward.
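As an illustration, a resource following the standardized schema would report conditions like the following; the resource status and field values here are hypothetical, while the condition fields match the shared schema:

```yaml
# Hypothetical .status stanza using the standardized condition schema.
status:
  conditions:
  - type: Ready                          # condition type, CamelCase
    status: "True"                       # "True", "False", or "Unknown"
    observedGeneration: 2                # generation the condition was computed against
    lastTransitionTime: "2020-08-26T12:00:00Z"
    reason: MinimumReplicasAvailable     # machine-readable, CamelCase
    message: Deployment has minimum availability.  # human-readable detail
```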

Also in 1.19, the Warning mechanism for use of deprecated APIs goes beta. With this feature, admission webhooks can optionally return warning messages, and an API request to a deprecated REST API endpoint results in a Warning header in the API response, is recorded as an audit event, and updates a deprecation metric. Also with 1.19, a CustomResourceDefinition can indicate that a particular version of the resource it defines is deprecated, which results in a warning message in the API response.
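For example, a CRD author can mark an older served version as deprecated and supply a custom warning text. The group, kind, and message below are made up for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    plural: widgets
    singular: widget
    kind: Widget
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true
    storage: false
    deprecated: true                     # clients requesting this version receive a warning
    deprecationWarning: "example.com/v1beta1 Widget is deprecated; use example.com/v1 Widget"
    schema:
      openAPIV3Schema:
        type: object
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```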

The ability to add AppProtocol to Services and Endpoints is graduating to beta. The lack of direct support for specifying application protocols for ports has led to widespread use of annotations, which are cloud-specific and have led to a poor user experience. Since application protocols are specific to each port specified on a Service or Endpoints resource, AppProtocol is being added as a field on each port.
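For instance, a Service can now declare the application protocol of a port directly on the port definition; the names in this sketch are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - name: secure-port
    port: 443
    targetPort: 8443
    appProtocol: https   # previously expressed via cloud-specific annotations
```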

Security and Authentication Improvements Led by SIG-Auth

Kubelet client TLS certificate bootstrap and rotation (including automated rotation), which has been beta for some time, is finally going stable. This work has also led to a new CertificateSigningRequest API (also going stable) that allows PKI issuance to be consumed both by core Kubernetes components and by user workloads running in the cluster.
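A minimal sketch of the now-stable API: a CertificateSigningRequest names the signer that should issue the certificate. The object name and usages below are illustrative, and the request field carries a base64-encoded PKCS#10 certificate request:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-client-cert
spec:
  signerName: kubernetes.io/kube-apiserver-client  # built-in signer for API client certificates
  usages:
  - client auth
  request: <base64-encoded PKCS#10 certificate request>
```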

The NodeRestriction admission controller that plugs a node security attack vector by limiting the Pod and Node objects that a Kubelet can modify is going stable.  

Kubernetes Scheduling Related Changes Led by SIG Scheduling 

There are a number of exciting enhancements to the Kubernetes scheduler. This has been a journey that we have been on for the past few releases, and one that will continue moving forward.

The ability to customize the behavior of the kube-scheduler by writing a configuration file and passing its path as a command line argument has graduated to beta. Combined with the beta of Scheduler Profiles, which allows a single scheduler to run multiple profiles, each associated with a scheduler name, this means Kubernetes is able to support a greater and wider set of workloads and use cases.
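A sketch of such a configuration file, assuming the v1beta1 config API available in 1.19; the second profile name is made up for illustration and simply disables all scoring plugins:

```yaml
# Passed to kube-scheduler via its --config command line argument.
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler       # pods that name no scheduler use this profile
- schedulerName: no-scoring-scheduler    # hypothetical profile selected via pod .spec.schedulerName
  plugins:
    score:
      disabled:
      - name: '*'                        # turn off all scoring plugins for this profile
```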

The Pod Topology Spread constraints feature, previously in beta and now stable with Kube 1.19, allows the Kubernetes scheduler to spread a group of pods across failure domains. Previously one had to use inter-pod anti-affinity, which does not allow more than one pod per failure domain; the new feature supports multiple pods in a failure domain. The other notable feature is a new option for podSpecs that prevents the preemption of existing workloads, which can be especially useful for certain types of long-running batch workloads.
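Both behaviors can be sketched in manifests; the labels, image, and class name here are hypothetical. The first spreads matching pods evenly across zones, and the second defines a priority class whose pods never preempt others:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                   # allowed pod-count difference between domains
    topologyKey: topology.kubernetes.io/zone     # spread across zones
    whenUnsatisfiable: DoNotSchedule             # treat as a hard constraint
    labelSelector:
      matchLabels:
        app: example
  containers:
  - name: app
    image: registry.example.com/app:latest
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: non-preempting
value: 100000
preemptionPolicy: Never   # pods with this class wait in the queue instead of evicting others
```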

Storage Related Enhancements led by SIG Storage

For better performance, scalability, and stability, SIG Storage is graduating Immutable Secrets and ConfigMaps to beta. By marking Kubernetes Secrets and ConfigMaps as immutable, developers can prevent accidental changes from propagating, and the kubelet no longer needs to watch those objects for changes, enhancing the ability to scale.
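Marking an object immutable is a single field; the name and secret contents below are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
immutable: true           # once set, data cannot be changed; delete and recreate instead
stringData:
  api-key: not-a-real-key
```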

The Kubernetes implementation of the Container Storage Interface (CSI) has been GA in Kubernetes since the v1.13 release. It allows for a flexible way for an ecosystem of storage technologies to work with Kubernetes via a standard interface, while allowing the technologies to innovate and make changes independently of Kubernetes releases. With Kube 1.19, you see some existing in-tree storage drivers, such as the Azure Disk and vSphere drivers, being moved out of the tree to take advantage of this, in a way that is not disruptive to users of Kubernetes.

With the introduction of CSI storage capacity tracking (alpha), Kubernetes can now track the storage capacity reported by a CSI storage driver, so that pod scheduling can take available storage capacity into consideration when making pod placement decisions.
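Such capacity information is published by the CSI driver's sidecars rather than written by hand, but an object would look roughly like this (alpha API; names and values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1alpha1
kind: CSIStorageCapacity
metadata:
  name: example-capacity
  namespace: default
storageClassName: example-storage-class
capacity: 100Gi                          # capacity available for this class in this topology
nodeTopology:
  matchLabels:
    topology.example.com/zone: zone-a
```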

Other Notable Improvements and Changes

Logging is an essential part of debugging any complex system. It is encouraging to see that the community is working on a Structured Logging proposal. The proposal seeks to define a standard structure for Kubernetes log messages, add logging methods that enforce this structure, add the ability to configure Kubernetes components to produce logs in JSON format, and begin the migration to structured logging.


The Kubernetes community continues to lead by being vibrant, strong and welcoming. We’re excited to work towards the future of Kubernetes and its expanding ecosystem of related software projects. We recently contributed the Operator Framework to the CNCF, and we couldn’t be happier with the folks being nominated for the 2020 Steering Committee elections. The CNCF and the numerous contributors and members of the various SIGs all deserve a hearty round of applause for their terrific and tireless work to advance the state of open hybrid cloud computing. When we all work together and agree on powerful technologies, like Linux containers and Kubernetes, it just makes IT systems better overall for everyone on Earth and in orbit. And that’s what open source software development is all about!