Confidential containers (CoCo) is a new feature of Red Hat OpenShift sandboxed containers that leverages Trusted Execution Environment (TEE) technology to isolate your containers from the host and other containers. In this blog post, you will learn how to set up OpenShift sandboxed containers with confidential containers support on an OpenShift cluster hosted on Azure, using AMD SEV-SNP technology.
You will also see how to create and run a confidential container that can process confidential data more securely and efficiently.
For more information on confidential containers running on Azure using OpenShift sandboxed containers and its building blocks, please refer to the previous blog in this series, Confidential Containers on Azure with OpenShift: A technical deep dive.
Create an OpenShift cluster in Azure
Note: You'll need to set up the Azure CLI (az) and create the required roles, as described in the OpenShift installation documentation for Azure.
Ensure you create the OpenShift cluster in a region where AMD SEV-SNP confidential VMs are available. A good default is "eastus". For further details on AMD confidential VM availability, refer to the Azure documentation.
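To confirm availability yourself, one option (assuming the Azure CLI is already logged in) is to list the VM SKUs for your target region; the AMD SEV-SNP confidential sizes are in the DCas_v5 and ECas_v5 families:
# List confidential VM sizes (look for the DCas_v5 / ECas_v5 families) in the region
az vm list-skus --location eastus --resource-type virtualMachines --size Standard_DC --all --output table
az vm list-skus --location eastus --resource-type virtualMachines --size Standard_EC --all --output table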
Deploy OpenShift sandboxed containers operator and enable confidential containers support
Prerequisites:
- Have the service principal values (AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_SECRET) that were used for your cluster creation
Deploy operator
Deploy the OpenShift sandboxed containers Operator using the web console or the CLI; a CLI sketch is shown below. Do not create the KataConfig custom resource yet.
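If you prefer the CLI, a minimal sketch of the install manifests looks like the following. The channel and catalog source are assumptions, so confirm them against the OperatorHub entry for your cluster version before applying:
cat > osc-operator.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
    - openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable            # assumption: use the channel shown in OperatorHub
  installPlanApproval: Automatic
  name: sandboxed-containers-operator
  source: redhat-operators   # assumption: default Red Hat catalog source
  sourceNamespace: openshift-marketplace
EOF
oc apply -f osc-operator.yaml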
Collect and set peer-pods configuration parameters for Azure
Create the peer-pods-secret Secret YAML definition:
cat > coco-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  AZURE_CLIENT_ID: "${AZURE_CLIENT_ID}" # set
  AZURE_CLIENT_SECRET: "${AZURE_CLIENT_SECRET}" # set
  AZURE_TENANT_ID: "${AZURE_TENANT_ID}" # set
EOF
cat coco-secret.yaml
Create the secret (after validating all fields are populated):
oc apply -f coco-secret.yaml
Run the peer-pods ConfigMap defaulter to collect and set the configuration values for the peer-pods-cm ConfigMap:
oc apply -f https://raw.githubusercontent.com/openshift/sandboxed-containers-operator/coco-dev-preview/hack/coco-cm-defaulter.yaml
Wait for the peer-pods-cm ConfigMap to be created and validate its values:
oc get cm/peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml
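For reference, the populated ConfigMap should look roughly like the following. The exact set of keys depends on the operator version, and every value here is an illustrative placeholder rather than something to copy:
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "azure"
  AZURE_REGION: "eastus"
  AZURE_RESOURCE_GROUP: "<cluster-resource-group>"
  AZURE_INSTANCE_SIZE: "Standard_DC2as_v5"   # an AMD SEV-SNP confidential VM size
  AZURE_SUBNET_ID: "<worker-subnet-id>"
  AZURE_NSG_ID: "<network-security-group-id>"
  AZURE_IMAGE_ID: ""                          # populated in the next section
  PROXY_TIMEOUT: "5m"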
Generate SSH keys and create a secret:
ssh-keygen -f ./id_rsa -N ""
oc create secret generic ssh-key-secret -n openshift-sandboxed-containers-operator --from-file=id_rsa.pub=./id_rsa.pub --from-file=id_rsa=./id_rsa
Create peer-pods CVM image based on Azure’s RHEL 9.3 confidential image
Run the Kubernetes Job that creates the peer-pods confidential image, wait for it to complete (up to 20 minutes), and get the image ID:
oc apply -f https://raw.githubusercontent.com/openshift/sandboxed-containers-operator/coco-dev-preview/hack/azure-CVM-image-create-job.yaml
oc wait job.batch/azure-confidential-image-creation -n openshift-sandboxed-containers-operator --for=condition=complete --timeout=20m
export IMG=$(oc logs job.batch/azure-confidential-image-creation -c result -n openshift-sandboxed-containers-operator) && echo $IMG
Set the image ID in the peer-pods-cm ConfigMap:
oc get cm/peer-pods-cm -n openshift-sandboxed-containers-operator -o json | jq --arg IMG "$IMG" '.data.AZURE_IMAGE_ID = $IMG' | oc replace -f -
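You can confirm the value was stored, for example:
oc get cm/peer-pods-cm -n openshift-sandboxed-containers-operator -o jsonpath='{.data.AZURE_IMAGE_ID}'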
Create KataConfig CR
Deploy KataConfig through OperatorHub or apply the following configuration:
cat > kataconfig.yaml <<EOF
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
#  kataConfigPoolSelector:
#    matchLabels:
#      coco: "true"
EOF
cat kataconfig.yaml
oc apply -f kataconfig.yaml
Wait for the kata-oc MachineConfigPool (MCP) to reach the UPDATED state (when UPDATEDMACHINECOUNT equals MACHINECOUNT):
watch oc get mcp/kata-oc
Adapt the shim binary on the worker nodes:
oc apply -f https://raw.githubusercontent.com/openshift/sandboxed-containers-operator/coco-dev-preview/hack/ds.yaml
Create the new MachineConfig to update the Kata-CoCo configurations:
oc apply -f https://raw.githubusercontent.com/openshift/sandboxed-containers-operator/coco-dev-preview/hack/mc-coco.yaml
Wait for the kata-oc MCP update to finish and for the nodes to be in the READY state:
oc get mcp kata-oc --watch
Update the cloud-api-adaptor image to support CoCo:
oc set image ds/peerpodconfig-ctrl-caa-daemon -n openshift-sandboxed-containers-operator cc-runtime-install-pod=quay.io/openshift_sandboxed_containers/cloud-api-adaptor:coco-dev-preview
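To check that the DaemonSet picked up the new image and that the kata-remote runtime class used by the workloads below is registered, you can run, for example:
oc rollout status ds/peerpodconfig-ctrl-caa-daemon -n openshift-sandboxed-containers-operator
oc get runtimeclass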
Running a sample workload
Here we’ll demonstrate how to run a simple application as a confidential container using Azure’s confidential VM.
Create a hello-openshift.yaml file with the following contents:
cat > hello-openshift.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift
  labels:
    app: hello-openshift
spec:
  runtimeClassName: kata-remote
  containers:
    - name: hello-openshift
      image: quay.io/openshift/origin-hello-openshift
      ports:
        - containerPort: 8888
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
---
kind: Service
apiVersion: v1
metadata:
  name: hello-openshift-service
  labels:
    app: hello-openshift
spec:
  selector:
    app: hello-openshift
  ports:
    - port: 8888
EOF
Deploy by running the following command:
oc apply -f hello-openshift.yaml
oc get pod/hello-openshift
Create an OpenShift route by running the following command:
oc expose service hello-openshift-service -l app=hello-openshift
APP_URL=$(oc get routes/hello-openshift-service -o jsonpath='{.spec.host}')
Once all pods are running, check that the application responds with the "Hello OpenShift!" message:
curl ${APP_URL}
Destroy the sample workload:
oc delete all -l app=hello-openshift
Retrieving workload secrets via remote attestation
The following sections describe a demo scenario in which the application retrieves secrets from the Key Broker Service (KBS) after remote attestation. The application uses the Kubernetes initContainer pattern to initiate remote attestation and key retrieval from the KBS. For ease of use, we deploy the KBS on the same OpenShift cluster.
Deploy Key Broker Service
KBS is a remote attestation entry point that integrates the Attestation Service to verify Trusted Execution Environment (TEE) evidence. Details on remote attestation and KBS can be found in the technical deep-dive blog.
Create and configure the namespace for KBS deployment:
oc new-project coco-kbs
# Allow anyuid in kbs pod
oc adm policy add-scc-to-user anyuid -z default -n coco-kbs
Create the KBS ConfigMap:
oc apply -f https://raw.githubusercontent.com/openshift/sandboxed-containers-operator/coco-dev-preview/hack/kbs/kbs-cm.yaml
Create some secrets that will be sent to the application by the KBS:
openssl genpkey -algorithm ed25519 >kbs.key
openssl pkey -in kbs.key -pubout -out kbs.pem
# Create an application secret
head -c 32 /dev/urandom | openssl enc > key.bin
# Create a secret object from the kbs.pem file.
oc create secret generic kbs-auth-public-key --from-file=kbs.pem -n coco-kbs
# Create a secret object from the user key file (key.bin).
oc create secret generic kbs-keys --from-file=key.bin
Create the local KBS deployment and wait for it to be ready:
oc apply -f https://raw.githubusercontent.com/openshift/sandboxed-containers-operator/coco-dev-preview/hack/kbs/kbs-deploy.yaml
# wait for completion
oc wait --for=condition=available deployment/kbs -n coco-kbs
Set up KBS in cloud-api-adaptor:
# Get KBS route
KBS_RT=$(oc get routes -n coco-kbs -ojsonpath='{range .items[*]}{.spec.host}{"\n"}{end}') && echo ${KBS_RT}
export AA_KBC_PARAMS=cc_kbc::http://${KBS_RT}
# Update peer-pods-cm ConfigMap with KBS route
oc get cm/peer-pods-cm -n openshift-sandboxed-containers-operator -o json | jq --arg AA_KBC_PARAMS "$AA_KBC_PARAMS" '.data.AA_KBC_PARAMS = $AA_KBC_PARAMS' | oc replace -f -
# Restart cloud-api-adaptor to pick up the ConfigMap change
oc set env ds/peerpodconfig-ctrl-caa-daemon -n openshift-sandboxed-containers-operator REBOOT="$(date)"
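As a quick sanity check, confirm the ConfigMap now carries the KBS address:
oc get cm/peer-pods-cm -n openshift-sandboxed-containers-operator -o jsonpath='{.data.AA_KBC_PARAMS}'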
Deploy sample workload
Create YAML definition of the sample workload:
cat > workload.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: simple-key-release-demo
  labels:
    app.kubernetes.io/name: simple-key-release-demo
spec:
  runtimeClassName: kata-remote
  containers:
    - name: simple-key-release-demo
      image: quay.io/openshift_sandboxed_containers/simple-key-release-demo:1.0.0
      imagePullPolicy: Always
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
      env:
        - name: ENCRYPTED_FILE_URL
          value: ""
        - name: KEY_FILE_PATH
          value: "/data/key.bin"
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
  initContainers:
    - name: attestor
      image: registry.access.redhat.com/ubi9/ubi:9.3
      imagePullPolicy: Always
      command: ["/bin/sh"]
      args: ["-c", "sleep 60 && curl \${API_ENDPOINT}\${KBS_RESOURCE_ID} -o \${KEY_FILE_PATH}"]
      env:
        - name: API_ENDPOINT
          value: "http://127.0.0.1:8006/cdh/resource"
        - name: KBS_RESOURCE_ID
          value: "/mysecret/workload_key/key.bin"
        - name: KEY_FILE_PATH
          value: "/data/key.bin"
      volumeMounts:
        - name: data
          mountPath: /data
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
EOF
The application uses the KBS_RESOURCE_ID variable with the value /mysecret/workload_key/key.bin to retrieve the secret from the KBS.
Deploy workload.yaml:
oc project default
oc apply -f workload.yaml
This deploys the workload. Once the attestor init container completes successfully, the secret has been retrieved and is ready to be consumed by the application.
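For example, you can verify this as follows (the pod and container names match the workload definition above):
oc get pod/simple-key-release-demo
# The attestor init container's log shows the key retrieval
oc logs pod/simple-key-release-demo -c attestor
# Confirm the key file landed on the shared volume
oc exec pod/simple-key-release-demo -- ls -l /data/key.bin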
Destroy the sample workload:
oc delete -f workload.yaml
Cleanup
Run the Kubernetes Job to remove the peer-pods CVM image resources and wait for it to complete:
oc apply -f https://raw.githubusercontent.com/openshift/sandboxed-containers-operator/coco-dev-preview/hack/azure-CVM-image-delete-job.yaml
# wait for completion
oc wait job.batch/azure-confidential-image-deletion -n openshift-sandboxed-containers-operator --for=condition=complete --timeout=20m
Destroy the OpenShift cluster:
openshift-install --dir=<path-to-install-artifacts> destroy cluster
Note: To avoid unexpected costs, verify in the Azure console that the resource group and all its contents have been deleted.
Conclusion
In this blog post we looked at the confidential containers functionality made available via Red Hat OpenShift sandboxed containers. We learned how to create and upload a confidential VM image for the pod, demonstrated deploying a simple workload running as a confidential container backed by an Azure confidential VM, and showed how an application retrieves secrets using the remote attestation procedure.
For a more in-depth understanding of confidential containers running on Azure and the core principles behind them, we recommend referring to our previous blog post in this series: Confidential Containers on Azure with OpenShift: A technical deep dive.
Please note that confidential containers support is currently available in dev-preview mode, and we encourage you to keep experimenting, exploring, and sharing your feedback with us.
About the authors
Pradipta is working in the area of confidential containers to enhance the privacy and security of container workloads running in the public cloud. He is one of the project maintainers of the CNCF confidential containers project.
Suraj Deshmukh is working on the Confidential Containers open source project for Microsoft. He has been working with Kubernetes since version 1.2. He is currently focused on integrating Kubernetes and Confidential Containers on Azure.
Jens Freimann is a Software Engineering Manager at Red Hat with a focus on OpenShift sandboxed containers and Confidential Containers. He has been with Red Hat for more than six years, during which he has made contributions to low-level virtualization features in QEMU, KVM and virtio(-net). Freimann is passionate about Confidential Computing and has a keen interest in helping organizations implement the technology. Freimann has over 15 years of experience in the tech industry and has held various technical roles throughout his career.
Magnus has received an academic education in Humanities and Computer Science. He has been working in the software industry for around 15 years. Starting out in the world of proprietary Unix he quickly learned to appreciate open source and has applied it everywhere since. He was engaged mostly in the niches of automation, virtualization and cloud computing. During his career in various industries (mobility, sustainability, tech) he exercised various contributor and leadership roles. Currently he's employed as a Software Engineer at Microsoft, working in the Azure Core organization.