Windows Machine Config Operator (WMCO) 9.0.0 will ship CSI Proxy as part of the payload, allowing users to dynamically provision storage on their Windows nodes using the CSI driver for their cluster's cloud platform. This change allows users to move from deprecated in-tree storage to the Container Storage Interface (CSI). This article compares CSI with in-tree storage, explains CSI migration, walks through enabling CSI persistent storage for Windows workloads on the vSphere cloud platform with WMCO 9.0.0, and outlines how to migrate from in-tree storage on WMCO 8.0.1 to CSI on WMCO 9.0.0.
In-Tree Storage
Originally, cloud provider-specific functionality in Kubernetes was implemented natively, as in-tree modules. In-tree cloud providers are developed and released in the main Kubernetes repository, allowing users to deploy Kubernetes without installing additional components. In the case of storage, a user could immediately begin provisioning volumes by setting the appropriate StorageClass matching their storage infrastructure. In-tree was the recommended approach for storage with OpenShift Windows containers until it was deprecated in Kubernetes 1.24.
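To illustrate how direct this was, a StorageClass backed by the legacy in-tree vSphere plugin could be as small as the following sketch; the name and parameter value are illustrative:

```yaml
# A minimal sketch, assuming the legacy in-tree vSphere plugin;
# the name and parameter value are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-in-tree-example            # hypothetical name
provisioner: kubernetes.io/vsphere-volume  # in-tree plugin identifier
parameters:
  diskformat: thin                         # thin-provisioned VMDK
```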
CSI Migration
Unfortunately, the in-tree model did not scale well. It required every cloud provider to align their plugin code with the Kubernetes release process, and Kubernetes maintainers were expected to test and maintain every cloud provider's storage plugin. The Container Storage Interface (CSI) was designed to solve this problem. CSI is an out-of-tree implementation model that allows cloud providers to write and deploy storage plugins on their own release lifecycles, without altering the Kubernetes codebase. On Linux, a user runs a CSI driver plugin directly on their host and then defines a StorageClass whose provisioner points to the CSI plugin. As more CSI drivers became production-ready, and to avoid breaking API compatibility with existing storage API types, SIG Storage introduced CSI migration: a mechanism that gradually translates in-tree APIs to their CSI equivalents and routes operations to the corresponding CSI driver. Table 1 below lists the plugins that affect OpenShift Windows containers and whether in-tree or CSI should be used for each WMCO version.
| Driver | WMCO v5 (OCP 4.10) | WMCO v6 (OCP 4.11) | WMCO v7 (OCP 4.12) | WMCO v8 (OCP 4.13) | WMCO v9 (OCP 4.14) |
| ---------- | ------- | ------- | ------- | -------- | ------ |
| AWS EBS | in-tree | in-tree | CSI GA | CSI GA | CSI GA |
| Azure Disk | in-tree | in-tree | CSI GA | CSI GA | CSI GA |
| Azure File | in-tree | in-tree | in-tree | CSI GA | CSI GA |
| GCE PD | in-tree | in-tree | CSI GA | CSI GA | CSI GA |
| vSphere | in-tree | in-tree | in-tree | CSI GA\* | CSI GA |
Table 1. Recommended storage usage (in-tree or CSI) for Windows users on OCP across supported providers given WMCO and OCP versions.
*Migration is enabled for newly installed clusters and disabled for upgraded clusters. vSphere platform users must opt in when upgrading from 4.12, or earlier, to 4.13.
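On OpenShift 4.13, the opt-in is expressed through the cluster-scoped Storage resource. The following is a minimal sketch, assuming the `vsphereStorageDriver` field available in OCP 4.13; verify against the OpenShift documentation for your exact version before editing the cluster:

```yaml
# Sketch only: assumes the vsphereStorageDriver field exposed by the
# cluster Storage resource in OCP 4.13; check your version's docs.
apiVersion: operator.openshift.io/v1
kind: Storage
metadata:
  name: cluster
spec:
  vsphereStorageDriver: CSIWithMigrationDriver  # opt in to CSI migration
```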
CSI on Windows
CSI node plugins require privileged access to perform storage-related operations, but Windows containers cannot run as privileged; only Windows HostProcess containers grant the required privileges. To work around this, CSI Proxy is used. CSI Proxy is a binary that runs on the Windows host and exposes a set of gRPC APIs for local storage operations, served over named pipes. A CSI plugin is then deployed as an unprivileged pod through a node DaemonSet; the plugin pod mounts the named pipes and invokes the APIs. Finally, a user defines a StorageClass with the provisioner set to the external CSI plugin and can then dynamically provision storage on Windows. The next section gives a step-by-step guide to using CSI Proxy and the vSphere CSI driver to dynamically provision Windows node storage from scratch on WMCO 9.0.0. The last section outlines the CSI migration procedure from WMCO 8.0.1 to WMCO 9.0.0.
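To make the named-pipe handoff concrete, the sketch below shows how a Windows CSI node plugin DaemonSet can mount CSI Proxy's pipes from the host. The names, image, and selection of API groups are illustrative; the manifest used in the install procedure below is the one shipped in the WMCO repository:

```yaml
# A minimal sketch of the named-pipe wiring, not the shipped manifest:
# names and image are illustrative, and only two of CSI Proxy's v1 API
# group pipes are shown.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-csi-node-windows        # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-csi-node-windows
  template:
    metadata:
      labels:
        app: example-csi-node-windows
    spec:
      nodeSelector:
        kubernetes.io/os: windows       # schedule only on Windows nodes
      containers:
      - name: csi-node-driver
        image: registry.example.com/csi-driver-windows:latest  # placeholder
        volumeMounts:
        # Each CSI Proxy API group is served over its own named pipe.
        - name: csi-proxy-volume-pipe
          mountPath: \\.\pipe\csi-proxy-volume-v1
        - name: csi-proxy-filesystem-pipe
          mountPath: \\.\pipe\csi-proxy-filesystem-v1
      volumes:
      # hostPath volumes expose the pipes created by CSI Proxy on the host.
      - name: csi-proxy-volume-pipe
        hostPath:
          path: \\.\pipe\csi-proxy-volume-v1
          type: ""
      - name: csi-proxy-filesystem-pipe
        hostPath:
          path: \\.\pipe\csi-proxy-filesystem-v1
          type: ""
```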
Install Procedure
Note: A command preceded by > is to be run in a PowerShell window on a Windows instance, and a command preceded by $ is to be run on a Linux console.
Prerequisites
- OCP/OKD 4.14 or later cluster installed with vSphere as the cloud provider
- WMCO 9.0.0 or later installed
- At least one Windows Server 2022 worker node
Steps
Install vSphere Container Storage Plug-in for Windows
$ oc apply -f https://raw.githubusercontent.com/openshift/windows-machine-config-operator/master/hack/manifests/csi/vsphere/01-example-driver-daemonset.yaml
Create the windows-storage-example namespace for your storage resources
$ oc create -f https://raw.githubusercontent.com/openshift/windows-machine-config-operator/master/hack/manifests/csi/vsphere/02-example-namespace.yaml
Deploy a Storage Class with a CSI provisioner
$ oc apply -f https://raw.githubusercontent.com/openshift/windows-machine-config-operator/master/hack/manifests/csi/vsphere/03-example-sc.yaml
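For reference, a vSphere CSI StorageClass for Windows is roughly shaped like the following sketch; the name is hypothetical, and the manifest applied above is authoritative:

```yaml
# A rough sketch of the shape of such a StorageClass; the applied
# manifest above is authoritative. The name is hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-windows-sc
provisioner: csi.vsphere.vmware.com  # the external vSphere CSI driver
parameters:
  csi.storage.k8s.io/fstype: ntfs    # format volumes as NTFS for Windows
```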
Deploy a PVC
$ oc apply -f https://raw.githubusercontent.com/openshift/windows-machine-config-operator/master/hack/manifests/csi/vsphere/04-example-pvc.yaml
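The claim itself is an ordinary PVC that references the CSI-backed StorageClass; a minimal sketch, with hypothetical names and an illustrative size:

```yaml
# A minimal sketch (names hypothetical, size illustrative) of a claim
# against a CSI-backed StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-windows-pvc
  namespace: windows-storage-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-windows-sc  # hypothetical class from the SC sketch
```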
Deploy a Windows workload
$ oc apply -f https://raw.githubusercontent.com/openshift/windows-machine-config-operator/master/hack/manifests/csi/vsphere/05-example-pod.yaml
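The workload is a regular Windows pod that mounts the claim; below is a minimal sketch that would produce the timestamp file checked in the next step. The names and image are illustrative, and the toleration assumes the usual taint WMCO places on Windows nodes:

```yaml
# A minimal sketch, not the manifest applied above: the pod appends a
# timestamp to the mounted volume so the next step has data to check.
apiVersion: v1
kind: Pod
metadata:
  name: example-windows-pod
  namespace: windows-storage-example
spec:
  nodeSelector:
    kubernetes.io/os: windows
  tolerations:
  - key: os                  # WMCO typically taints Windows nodes
    value: Windows
    effect: NoSchedule
  containers:
  - name: workload
    image: mcr.microsoft.com/windows/servercore:ltsc2022
    command:
    - powershell
    - -Command
    # Append a timestamp to the CSI-backed volume every 30 seconds.
    - while ($true) { Get-Date | Out-File -Append C:\test\csi\timestamp.txt; Start-Sleep -Seconds 30 }
    volumeMounts:
    - name: csi-data
      mountPath: C:\test\csi           # the path checked in the next step
  volumes:
  - name: csi-data
    persistentVolumeClaim:
      claimName: example-windows-pvc   # hypothetical claim from the PVC sketch
```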
Confirm workload data exists
$ oc project windows-storage-example
$ oc exec -it <example-windows-pod-name> -- cmd
> type C:\\test\\csi\\timestamp.txt
Upgrade Procedure
Users migrating their storage from in-tree on OCP 4.13 to CSI on OCP 4.14 must go through the steps below for a seamless transition.
Prerequisites
- OCP/OKD 4.13 cluster installed with vSphere as the cloud provider
- WMCO 8.0.1 installed
- At least one Windows Server 2022 worker node
- A Windows pod with a fully configured in-tree storage mount attached
Steps
Upgrade your cluster following either option:
- Updating a cluster using the web console
- Updating a cluster using the CLI
Install vSphere Container Storage Plug-in for Windows through a DaemonSet
$ oc apply -f https://raw.githubusercontent.com/openshift/windows-machine-config-operator/master/hack/manifests/csi/vsphere/01-example-driver-daemonset.yaml
Verify migrated PVs/PVCs are provisioned via csi.vsphere.vmware.com
Check that the PVC references the CSI provisioner (csi.vsphere.vmware.com):
$ oc describe pvc <name_of_Windows_CSI_pvc> --namespace=<Windows_storage_resources_namespace>
Check that the PV references the CSI provisioner (csi.vsphere.vmware.com). PersistentVolumes are cluster-scoped, so no namespace flag is needed:
$ oc describe pv <name_of_Windows_CSI_pv>
If the PVC/PV still references the in-tree provisioner, delete the PVC of the pod you are reprovisioning, then delete the pod to proceed. Both will be reprovisioned with the CSI provisioner; confirm by rerunning the two commands above.
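Another quick check, assuming the pv.kubernetes.io/migrated-to annotation that CSI migration sets on translated volumes, is to read the annotation directly; for a migrated PV it should print csi.vsphere.vmware.com:
$ oc get pv <name_of_Windows_CSI_pv> -o jsonpath='{.metadata.annotations.pv\.kubernetes\.io/migrated-to}'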
Confirm Windows deployment container data persists
$ oc project <Windows_storage_resources_namespace>
$ oc exec -it deployment/<example-windows-deployment-name> -- cmd
> type C:\\path\\to\\container_data