What are Kubernetes cloud providers?
Kubernetes is flexible. By building on top of its built-in capabilities, you can solve almost any use case. By extending it, you can tackle even the most complicated configurations.
One important way in which you can build on and extend Kubernetes is with cloud providers. Kubernetes cloud providers are extensions to Kubernetes that are responsible for managing the lifecycle of Nodes, along with (optionally) Load Balancers and Networking Routes, using the APIs of your chosen cloud or infrastructure provider. In the case of the OpenStack cloud provider, this means talking to Nova to inspect the health and attributes of a Node, to Octavia whenever we want a Load Balancer, and to Neutron whenever we want a Networking Route.
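To make this concrete, here is a minimal, hypothetical Service manifest; on a cluster running the OpenStack cloud provider, creating a LoadBalancer Service like this is what prompts the provider to ask Octavia for a load balancer (the name, selector, and ports are placeholders, not values from any real deployment):

apiVersion: v1
kind: Service
metadata:
  name: my-app          # placeholder name
spec:
  type: LoadBalancer    # this is what triggers the cloud provider's load-balancer logic
  selector:
    app: my-app         # placeholder selector
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080

The cloud provider watches for Services like this and reconciles them against Octavia, in much the same way that it reconciles Node objects against Nova.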
What are external cloud providers?
Historically, all cloud providers in Kubernetes were found in the main Kubernetes repository. However, Kubernetes aims to be ubiquitous, and this means supporting a great many infrastructure providers. Doing this all from a single monolithic repository (and a single monolithic kube-controller-manager binary) was deemed something that wouldn’t scale, and so in 2017 the Kubernetes community began working on support for out-of-tree cloud providers. These out-of-tree providers were initially aimed at allowing the community to develop cloud providers for new, previously unsupported infrastructure providers, but as the functionality matured, the community decided to migrate all of the existing in-tree cloud providers to external cloud providers too. You can read more about this in the Kubernetes blog and the Kubernetes documentation. The k8s.io/cloud-provider-openstack project is the end result of this effort for OpenStack clouds, while the k8s.io/cloud-provider-aws project exists for AWS integration, the k8s.io/cloud-provider-gcp project exists for Google Cloud Platform integration, and so forth.
But why should I care?
If moving the cloud providers into their own separate packages and binaries were the only change, there wouldn’t be much to see here. However, as the out-of-tree cloud providers were created for each infrastructure provider, the now legacy in-tree cloud providers were deprecated and development switched to the out-of-tree providers. This meant two things. Firstly, the in-tree providers had a limited lifespan and would eventually be removed, at which point all deployments would need to have migrated to the corresponding out-of-tree provider or find themselves unable to upgrade or receive bug fixes for their existing deployments. Secondly, and perhaps more interestingly, focusing all new development on the out-of-tree providers resulted in an ever-increasing feature gap between the legacy in-tree providers and the out-of-tree providers. We’re not talking about just one or two features either: there are big features here for the OpenStack provider, such as support for using application credentials to authenticate with OpenStack services, the ability to use UDP load balancers, and support for load-balancer health monitors, which are needed for Services to route external traffic to node-local endpoints. We want these features for our OpenShift users regardless of platform, and it was our job to figure out how to deliver them in as painless a manner as possible.
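To give a flavor of the first of those features, here is roughly what a clouds.yaml entry using application credentials looks like. This is only a sketch: the auth_url, the credential ID and secret, the region, and the cloud entry name (openstack) are all placeholders.

clouds:
  openstack:                                  # placeholder cloud entry name
    auth_type: v3applicationcredential        # authenticate with an application credential
    auth:
      auth_url: https://keystone.example.com:5000/v3   # placeholder Keystone endpoint
      application_credential_id: <application-credential-id>
      application_credential_secret: <application-credential-secret>
    region_name: RegionOne                    # placeholder region

The legacy in-tree provider cannot authenticate this way; the external provider can.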
Introducing CCCMO
Fortunately, OpenShift 4.x makes extensive use of these neat things called Operators. They automate the creation, configuration, and management of instances of Kubernetes-native applications. While the concept of Operators exists outside of OpenShift, OpenShift’s Operators are special in that they are used not just for managing the lifecycle of applications that run on a cluster, but also the lifecycles of various OpenShift cluster components. In effect, you can think of OpenShift’s cluster Operators as the “real” OpenShift installer, while openshift-installer is a tool that sets up the scaffolding required for the various Operators to do their thing. Find out more about Operators in the OpenShift documentation. The cluster-cloud-controller-manager-operator (CCCMO) is one such cluster Operator, and it is responsible for managing the lifecycle of the various external cloud providers, or, more specifically, of the cloud-controller-manager binary provided by each cloud provider. OpenShift introduced preliminary support for CCCMO when running on OpenStack clouds in the 4.9 release, and it is enabled by default starting with the 4.12 release.
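If you want to see the Operator at work on a cluster, one quick check is its cluster Operator status. The resource name below is what we would expect on a 4.12 cluster; it may differ between releases.

$ oc get clusteroperator cloud-controller-manager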
How CCCMO does its magic
For CCM to work as expected when enabled in an upgrade from OpenShift 4.11 to 4.12, the configuration needs to be transformed from the user-editable config map named cloud-provider-config in the openshift-config namespace to a new, non-user-editable config map named cloud-conf in the openshift-cloud-controller-manager namespace. Starting from 4.11, CCCMO handles that migration by using cloud-specific transformers that remove options that are no longer relevant to CCM and add new options to the new config map. Here is a list of the configuration options that are modified (a sketch of the resulting configuration follows the list):
- [Global] secret-name, [Global] secret-namespace, and [Global] kubeconfig-path are dropped, since this information is contained in the clouds.yaml file.
- [Global] use-clouds, [Global] clouds-file, and [Global] cloud are added, ensuring the clouds.yaml file is used for sourcing OpenStack credentials.
- The entire [BlockStorage] section is removed, as all storage related actions are now handled by the CSI drivers.
- The [LoadBalancer] use-octavia option is always set to True, while the [LoadBalancer] enabled option is set to False if Kuryr is present.
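Put together, the configuration stored in the new config map ends up looking something like the minimal sketch below. The clouds-file path and cloud name are illustrative placeholders; the actual values on your cluster are written by CCCMO, not by you.

[Global]
use-clouds = true
clouds-file = /etc/openstack/clouds.yaml   ; illustrative path to the mounted clouds.yaml
cloud = openstack                          ; illustrative cloud entry name

[LoadBalancer]
use-octavia = true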
The order of operations is also critical during an upgrade, while core functionality of the cluster is moved between components. CCCMO ensures that components that rely on the cloud provider are upgraded in the correct order, and that the upgrade does not proceed until the previous steps have been completed and verified.
How do I enable the External Cloud Provider on my cluster?
It’s simple: just upgrade! If you have a 4.12 OpenShift cluster on OpenStack, nothing extra is required to enable the external OpenStack cloud provider. If you want to inspect a cluster and ensure the external cloud provider has been enabled, we would recommend checking the following:
- Existence of openstack-cloud-controller-manager pods
$ oc get pods -n openshift-cloud-controller-manager
NAME READY STATUS
openstack-cloud-controller-manager-769dc7b785-mgppt 1/1 Running
openstack-cloud-controller-manager-769dc7b785-n7nsj 1/1 Running
- The kube-controller-manager will no longer own the cloud controllers. Instead, these will be managed by the external OpenStack cloud provider. You can verify this by inspecting the config map named config in the openshift-kube-controller-manager namespace. It should have a cloud-provider entry that is set to the value external.
$ oc get configmap config -n openshift-kube-controller-manager -o yaml | grep cloud-provider
- The MachineConfig for all masters and workers should contain a cloud-provider value set as external for kubelet.
$ oc get MachineConfig 01-master-kubelet -o yaml | grep external
--cloud-provider=external \
$ oc get MachineConfig 01-worker-kubelet -o yaml | grep external
--cloud-provider=external \
Conclusion
Now that OpenStack has made the leap, you can expect the other infrastructure providers that OpenShift supports to follow suit. The problems that these platforms will face, such as migration of different configuration schemas, handling of disparate feature sets, and inspection of behavior during upgrade, will likely be similar to those faced by OpenStack. So, too, will the solutions.
However, this switch hasn’t just paved the way for other external cloud providers to make their own switch. It has also ensured we are now using a cloud provider that provides more robust and extensive management of cloud resources, such as nodes and load balancers.