A previous post introduced the Secrets Store CSI Driver in Red Hat OpenShift. You can refer to it to learn the basics behind this driver. This post demonstrates how to integrate the OpenShift Secrets Store CSI Driver with an external secrets management system like Vault.
This article uses a Vault server running outside the OpenShift cluster. If you run the Vault server inside an OpenShift cluster, the procedure is slightly different and is not covered in this post.
IMPORTANT: As of Red Hat OpenShift 4.14, the Secrets Store CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product capabilities, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- An OpenShift v4.14 cluster
- OpenShift Secrets Store CSI Driver Operator deployed and a default ClusterCSIDriver created
- A Vault server deployed outside the OpenShift Cluster
Configure the Vault CSI provider
For the Secrets Store CSI driver to gather secrets information from the Vault server, you must first deploy the Vault CSI Provider.
IMPORTANT: The Vault CSI provider for the Secrets Store CSI driver is an upstream provider and is currently outside the Technology Preview program. We plan to have this provider certified for the GA release.

The Vault CSI provider requires its pods to run as privileged. Grant the privileged SCC to the ServiceAccount used by the Vault CSI pods:

oc -n openshift-cluster-csi-drivers adm policy add-scc-to-user privileged -z vault-csi-provider

Next, deploy the Vault CSI provider:
NOTE: This configuration is modified from the configuration provided in the upstream repository to work properly with OpenShift. Changes to this configuration might impact functionality.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vault-csi-provider-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - serviceaccounts/token
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-csi-provider-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vault-csi-provider-clusterrole
subjects:
- kind: ServiceAccount
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vault-csi-provider-role
  namespace: openshift-cluster-csi-drivers
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
  resourceNames:
  - vault-csi-provider-hmac-key
# 'create' permissions cannot be restricted by resource name:
# https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-csi-provider-rolebinding
  namespace: openshift-cluster-csi-drivers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-csi-provider-role
subjects:
- kind: ServiceAccount
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/name: vault-csi-provider
  name: vault-csi-provider
  namespace: openshift-cluster-csi-drivers
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: vault-csi-provider
  template:
    metadata:
      labels:
        app.kubernetes.io/name: vault-csi-provider
    spec:
      serviceAccountName: vault-csi-provider
      tolerations: []
      containers:
      - name: provider-vault-installer
        image: docker.io/hashicorp/vault-csi-provider:1.4.1
        securityContext:
          privileged: true
        imagePullPolicy: Always
        args:
        - -endpoint=/provider/vault.sock
        - -debug=false
        resources:
          requests:
            cpu: 50m
            memory: 100Mi
          limits:
            cpu: 50m
            memory: 100Mi
        volumeMounts:
        - name: providervol
          mountPath: "/provider"
        livenessProbe:
          httpGet:
            path: "/health/ready"
            port: 8080
            scheme: "HTTP"
          failureThreshold: 2
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        readinessProbe:
          httpGet:
            path: "/health/ready"
            port: 8080
            scheme: "HTTP"
          failureThreshold: 2
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
      volumes:
      - name: providervol
        hostPath:
          path: "/etc/kubernetes/secrets-store-csi-providers"
      nodeSelector:
        kubernetes.io/os: linux
EOF

The Vault CSI pods are now running:
oc -n openshift-cluster-csi-drivers get pods
NAME READY STATUS RESTARTS AGE
secrets-store-csi-driver-node-46lpg 3/3 Running 0 12h
secrets-store-csi-driver-node-4svsk 3/3 Running 0 12h
secrets-store-csi-driver-node-j4ljq 3/3 Running 0 12h
secrets-store-csi-driver-operator-7c5fb75769-g6x76 1/1 Running 0 12h
vault-csi-provider-26pdt 1/1 Running 0 8h
vault-csi-provider-68nhp 1/1 Running 0 8h
vault-csi-provider-kg52z 1/1 Running 0 8h

Create secrets in Vault
This section assumes that you have access to your Vault server and you're authenticated with the Vault CLI. Create a secret for the application to consume.
vault kv put -mount=kv team1/db-pass password="mys3cretdbp4ss"

In your environment, you may need to change the path to the secret. Verify that the secret is readable:
vault kv get -mount=kv team1/db-pass
==== Secret Path ====
kv/data/team1/db-pass
======= Metadata =======
Key Value
--- -----
created_time 2023-11-15T08:34:51.014161533Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
====== Data ======
Key Value
--- -----
password mys3cretdbp4ss

Connect the CSI provider to Vault
Provide the required configuration so the Vault CSI provider and the Vault server can talk to each other. This example uses a long-lived ServiceAccount token; for production use, consider the Kubernetes JWT/OIDC auth method instead. If you run the Vault server in the same OpenShift cluster as the Vault CSI provider, you can use the local service account token auth method instead. Create the required configurations in Kubernetes to integrate with Vault:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
  namespace: openshift-cluster-csi-drivers
---
apiVersion: v1
kind: Secret
metadata:
  name: vault-k8s-auth-secret
  namespace: openshift-cluster-csi-drivers
  annotations:
    kubernetes.io/service-account.name: vault
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-sa-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault
  namespace: openshift-cluster-csi-drivers
EOF

Get the required information from the Kubernetes cluster:
KUBERNETES_API=$(oc whoami --show-server)
VAULT_SA_JWT=$(oc -n openshift-cluster-csi-drivers get secret vault-k8s-auth-secret -o jsonpath='{.data.token}' | base64 -d)
KUBERNETES_API_IP_PORT=$(echo $KUBERNETES_API | awk -F "//" '{print $2}')
KUBERNETES_API_CA=$(openssl s_client -connect $KUBERNETES_API_IP_PORT </dev/null 2>/dev/null | openssl x509 -outform PEM)

Configure the Kubernetes authentication in Vault:
vault auth enable kubernetes
vault write auth/kubernetes/config kubernetes_host="$KUBERNETES_API" token_reviewer_jwt="$VAULT_SA_JWT" kubernetes_ca_cert="$KUBERNETES_API_CA"

Create a Vault policy and a Kubernetes auth role so the app that you will deploy later can read the secret created earlier. The role uses the db-app-sa ServiceAccount name and the db-app Namespace. In your environment, you may need to change the path to the secret.

Create the policy:
vault policy write database-app - <<EOF
path "kv/data/team1/db-pass" {
  capabilities = ["read"]
}
EOF

Create the role:
vault write auth/kubernetes/role/database bound_service_account_names=db-app-sa bound_service_account_namespaces=db-app policies=database-app ttl=20m

Consume secrets from Vault in the workloads
Now that you've configured the CSI provider, you can see how to consume secrets from Vault in the workloads. First, create a namespace for the application:
oc create namespace db-app

Next, define a SecretProviderClass for the Vault store. Update the vaultAddress to match your environment. Note that this example skips TLS certificate validation; in a real deployment, use the provider's CA-related parameters instead so TLS verification is not skipped.
cat <<EOF | oc apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
  namespace: db-app
spec:
  provider: vault
  parameters:
    vaultAddress: "https://192.168.122.20:8201"
    vaultSkipTLSVerify: "true"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "kv/data/team1/db-pass"
        secretKey: "password"
EOF

Create the application consuming the secret:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: db-app-sa
  namespace: db-app
---
kind: Pod
apiVersion: v1
metadata:
  name: dbapp
  namespace: db-app
spec:
  serviceAccountName: db-app-sa
  containers:
  - image: quay.io/mavazque/trbsht:latest
    name: dbapp
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-database"
EOF

Access the pod and view the secret:
oc -n db-app exec -ti dbapp -- cat /mnt/secrets-store/db-password
mys3cretdbp4ss

In addition to mounting secrets in the container filesystem, you can sync Vault data into a Kubernetes Secret so the pod can consume it that way.
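As an aside, inside the pod an application can consume the mounted secret directly as a file. The following is a minimal sketch, not part of the driver or the demo image: the path matches the volumeMounts entry in the pod spec above, and the helper name is ours.

```python
from pathlib import Path

# Mount path from the pod spec; "db-password" is the objectName
# defined in the SecretProviderClass.
SECRET_FILE = Path("/mnt/secrets-store/db-password")

def read_db_password(path: Path = SECRET_FILE) -> str:
    """Read the secret from the CSI-mounted file, dropping any trailing newline."""
    return path.read_text(encoding="utf-8").rstrip("\n")
```

Because the value is an ordinary file, no Kubernetes API access is needed from the application.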
IMPORTANT: If you plan to consume your secret data as Kubernetes Secrets only, then other solutions like External Secrets Operator may be a better fit. More on this topic in the closing thoughts section.
Update the SecretProviderClass to include the secretObjects entry. A list of supported secret types is available in the upstream Secrets Store CSI Driver documentation.
cat <<EOF | oc apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
  namespace: db-app
spec:
  provider: vault
  secretObjects:
  - data:
    - key: password
      objectName: db-password
    secretName: db-pass
    type: Opaque
  parameters:
    vaultAddress: "https://192.168.122.20:8201"
    vaultSkipTLSVerify: "true"
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "kv/data/team1/db-pass"
        secretKey: "password"
EOF

If you try to get the secret, you'll see that it doesn't exist yet:
oc -n db-app get secret db-pass
Error from server (NotFound): secrets "db-pass" not found

When you create a pod that references this SecretProviderClass, the CSI driver creates a Kubernetes Secret called db-pass with the password field set to the contents of the db-password object from the parameters. The pod waits for the secret to exist before starting, and the secret is deleted once all pods using the SecretProviderClass are stopped.
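Conceptually, the sync maps each secretObjects data entry onto a key of the resulting Kubernetes Secret, whose data field holds base64-encoded values. The sketch below is illustrative only (the real driver does this in Go inside the CSI components); the function name and shapes are ours.

```python
import base64

def build_synced_secret(secret_objects_data, fetched, secret_name="db-pass"):
    """Approximate the driver's secretObjects mapping: for each
    {key, objectName} entry, look up the fetched Vault object and
    base64-encode it into the Secret's data field."""
    data = {}
    for entry in secret_objects_data:
        value = fetched[entry["objectName"]]
        data[entry["key"]] = base64.b64encode(value.encode()).decode()
    return {"type": "Opaque", "metadata": {"name": secret_name}, "data": data}
```

With the SecretProviderClass above, the entry {key: password, objectName: db-password} yields a db-pass Secret with one base64-encoded password field.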
cat <<EOF | oc apply -f -
kind: Pod
apiVersion: v1
metadata:
  name: dbapp-secret
  namespace: db-app
spec:
  serviceAccountName: db-app-sa
  containers:
  - image: quay.io/mavazque/trbsht:latest
    name: dbapp
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-pass
          key: password
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop:
        - ALL
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-database"
EOF

Get the secret:
oc -n db-app get secret db-pass
NAME TYPE DATA AGE
db-pass Opaque 1 22s

Check the environment variable:
oc -n db-app exec -ti dbapp-secret -- sh -c 'echo $DB_PASSWORD'
mys3cretdbp4ss

If you delete every pod using the SecretProviderClass, the secret is also gone:
oc -n db-app delete pod dbapp dbapp-secret
oc -n db-app get secret db-pass
Error from server (NotFound): secrets "db-pass" not found

CSI and OpenShift
This post introduced the Vault CSI provider for the OpenShift Secrets Store CSI Driver. You connected an OpenShift cluster to an external Vault server and consumed secret data from Vault within the workloads. The Secrets Store CSI Driver is a good alternative to solutions like the External Secrets Operator or Sealed Secrets when you need to keep secrets out of etcd, for example when running on a managed service where the control plane is outside your control. In addition, secrets are auto-rotated in the pods without any extra tooling.
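On the auto-rotation point: because the driver rewrites the mounted file when the backing secret changes, an application can pick up the new value simply by re-reading the file. The polling helper below is a hypothetical sketch of that pattern, not part of the driver; the function name, parameters, and path are ours.

```python
import time
from pathlib import Path

def watch_secret(path, on_change, interval=5.0, max_checks=None):
    """Poll a CSI-mounted secret file and invoke on_change with the new
    value whenever the file content differs from the last seen value.
    max_checks limits iterations (None means poll forever)."""
    last = None
    checks = 0
    while max_checks is None or checks < max_checks:
        current = Path(path).read_text().rstrip("\n")
        if current != last:
            on_change(current)  # e.g., rebuild a DB connection pool
            last = current
        checks += 1
        time.sleep(interval)
```

In practice, an inotify-based watcher or a reload signal would be more efficient than polling, but the idea is the same: treat the mounted file as the source of truth.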
While the Secrets Store CSI Driver can also create Kubernetes Secrets from data held in external secrets management systems, solutions like the External Secrets Operator may be a better fit for that specific use case. If you want to know more about the options available today to protect your secret data on and off your OpenShift cluster, read A Holistic approach to encrypting secrets, both on and off your OpenShift clusters. Stay tuned for future improvements in the community projects and for the GA release in OpenShift.