Confidential containers (CoCo) is a new feature of Red Hat OpenShift sandboxed containers that leverages Trusted Execution Environment (TEE) technology to isolate your containers from the host and other containers. In this blog post, you will learn how to set up OpenShift sandboxed containers with confidential containers support on an OpenShift cluster hosted on Azure, using AMD SEV-SNP technology.
You will also see how to create and run a confidential container that can process confidential data more securely.
For more information on confidential containers running on Azure using OpenShift sandboxed containers and its building blocks, please refer to the previous blog in this series Confidential Containers on Azure with OpenShift: A technical deep dive.
Create OpenShift cluster in Azure
Note: You'll need to set up Azure CLI (az) and create required roles as mentioned in the above document.
Ensure you create the OpenShift cluster in a region where AMD SEV-SNP confidential VMs are available. A good default is "eastus". For further details on AMD confidential VM availability, please refer to the following Azure documentation.
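Before installing, you can optionally confirm that the AMD SEV-SNP confidential VM sizes (the DCasv5 family used later in this post) are offered in your chosen region. The region value below is just an example:

# Optional check: list DC-series sizes available in the target region (example: eastus)
az vm list-skus --location eastus --size Standard_DC --all --output table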
Create Azure VM image from pre-built Qcow2 image
Download and convert image
Prerequisites:
- podman/docker
- qemu-img
- curl
The following commands download the pre-built Qcow2 image, extract it, and convert it to a VHD file named podvm-coco.vhd:
mkdir -p qcow2-img && cd qcow2-img

curl -LO https://raw.githubusercontent.com/confidential-containers/cloud-api-adaptor/v0.6.0/podvm/hack/download-image.sh

# Download the pre-built podvm qcow2 image
bash download-image.sh quay.io/openshift_sandboxed_containers/podvm-image:rhel-cvm-vtpm . -o podvm-coco.qcow2

# Convert it to VHD format
qemu-img convert -O vpc -o subformat=fixed,force_size podvm-coco.qcow2 podvm-coco.vhd
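Optionally, you can sanity-check the converted image before uploading it. qemu-img info should report the file format as vpc (the VHD format):

# Optional sanity check: the output should show "file format: vpc"
qemu-img info podvm-coco.vhd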
Upload VHD to Azure storage
Collect Azure storage and location parameters from the OpenShift cluster:
AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}')

# Get Azure region
AZURE_REGION=$(az group show --resource-group $AZURE_RESOURCE_GROUP --query "{Location:location}" --output tsv)

AZURE_STORAGE_ACCOUNT=$(az storage account list -g $AZURE_RESOURCE_GROUP --query "[].{Name:name} | [? contains(Name,'cluster')]" --output tsv)

AZURE_STORAGE_EP=$(az storage account list -g $AZURE_RESOURCE_GROUP --query "[].{uri:primaryEndpoints.blob} | [? contains(uri, '$AZURE_STORAGE_ACCOUNT')]" --output tsv)

echo "AZURE_REGION=$AZURE_REGION"
echo "AZURE_RESOURCE_GROUP=$AZURE_RESOURCE_GROUP"
echo "AZURE_STORAGE_ACCOUNT=$AZURE_STORAGE_ACCOUNT"
echo "AZURE_STORAGE_EP=$AZURE_STORAGE_EP"
The vhd container is created by OpenShift. Validate its existence; command output should be “vhd”:
az storage container list --account-name $AZURE_STORAGE_ACCOUNT --query "[].{Name:name} | [? contains(Name,'vhd')]" --output tsv --auth-mode login
Get the Azure Storage Key:
AZURE_STORAGE_KEY=$(az storage account keys list --resource-group $AZURE_RESOURCE_GROUP --account-name $AZURE_STORAGE_ACCOUNT --query "[?keyName=='key1'].{Value:value}" --output tsv)
Upload podvm-coco.vhd to vhd storage container:
az storage blob upload --account-name ${AZURE_STORAGE_ACCOUNT} --container-name vhd --name podvm-coco.vhd --file podvm-coco.vhd
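To confirm the upload succeeded, you can list the blobs in the vhd container; podvm-coco.vhd should appear in the output:

az storage blob list --account-name ${AZURE_STORAGE_ACCOUNT} --container-name vhd --query "[].{Name:name}" --output tsv --auth-mode login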
Create Azure VM Image
Get Image Gallery
We'll use the image gallery created by OpenShift. Retrieve it using the following command:
GALLERY_NAME=$(az sig list --query "[].{Name: name}" --output tsv --resource-group $AZURE_RESOURCE_GROUP)

echo "GALLERY_NAME=$GALLERY_NAME"
Create a new image gallery definition for confidential containers
For CoCo, you'll need to create a new image gallery definition to support confidential VMs. The default image gallery definitions created by OpenShift will not work for CoCo.
export GALLERY_IMAGE_DEF_NAME_COCO=cc-image

az sig image-definition create \
  --resource-group $AZURE_RESOURCE_GROUP \
  --gallery-name $GALLERY_NAME \
  --gallery-image-definition $GALLERY_IMAGE_DEF_NAME_COCO \
  --publisher myPublisher \
  --offer myOffer \
  --sku mySKU \
  --os-type Linux \
  --os-state Generalized \
  --hyper-v-generation V2 \
  --location $AZURE_REGION \
  --architecture x64 \
  --features SecurityType=ConfidentialVmSupported
Create Azure VM image version for confidential containers
You'll need a vhd file to create the VM image. Assuming that the vhd file is named podvm-coco.vhd and uploaded to the vhd container under the storage account, the following command creates the VM image.
export VHD_URI="${AZURE_STORAGE_EP}vhd/podvm-coco.vhd"

az sig image-version create \
  --resource-group $AZURE_RESOURCE_GROUP \
  --gallery-name $GALLERY_NAME \
  --gallery-image-definition $GALLERY_IMAGE_DEF_NAME_COCO \
  --gallery-image-version 0.0.1 \
  --target-regions $AZURE_REGION \
  --os-vhd-uri "$VHD_URI" \
  --os-vhd-storage-account $AZURE_STORAGE_ACCOUNT
Deploy OpenShift sandboxed containers operator and enable confidential containers support
Deploy operator
Deploy the OpenShift sandboxed containers operator using the web console.
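If you prefer the CLI over the web console, the following is a minimal sketch of the equivalent OLM objects. The channel name is an assumption; adjust it to match the channel available for the operator in your catalog:

cat > osc-operator.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
    - openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable          # assumption: adjust to the channel in your catalog
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

oc apply -f osc-operator.yaml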
Collect and set peer-pods configuration parameters for Azure
Create peer-pods-secret Secret YAML definition
cat > coco-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  AZURE_CLIENT_ID: "${AZURE_CLIENT_ID}"         # set
  AZURE_CLIENT_SECRET: "${AZURE_CLIENT_SECRET}" # set
  AZURE_TENANT_ID: "${AZURE_TENANT_ID}"         # set
EOF

cat coco-secret.yaml
Create the secret (after validating all fields are populated):
oc apply -f coco-secret.yaml
Collect configuration values for the peer-pods-cm ConfigMap:
# Get the image ID
AZURE_IMAGE_ID=$(az sig image-version list --resource-group $AZURE_RESOURCE_GROUP --gallery-name $GALLERY_NAME --gallery-image-definition $GALLERY_IMAGE_DEF_NAME_COCO --query "[].{Id: id}" --output tsv)

# Get Azure region
AZURE_REGION=$(az group show --resource-group $AZURE_RESOURCE_GROUP --query "{Location:location}" --output tsv)

# Get your subscription ID
AZURE_SUBSCRIPTION_ID=$(az account list --query "[?isDefault].id" -o tsv)

# Get your resource group
AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}')

# Get the cluster VNet name
AZURE_VNET_NAME=$(az network vnet list --resource-group $AZURE_RESOURCE_GROUP --query "[].{Name:name}" --output tsv)

# Get the OpenShift worker subnet ID
AZURE_SUBNET_ID=$(az network vnet subnet list --resource-group $AZURE_RESOURCE_GROUP --vnet-name $AZURE_VNET_NAME --query "[].{Id:id} | [? contains(Id, 'worker')]" --output tsv)

# Get the network security group ID
AZURE_NSG_ID=$(az network nsg list --resource-group $AZURE_RESOURCE_GROUP --query "[].{Id:id}" --output tsv)
Create peer-pods-cm ConfigMap YAML definition:
cat > coco-cm.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "azure"
  VXLAN_PORT: "9000"
  AZURE_IMAGE_ID: "${AZURE_IMAGE_ID}"
  AZURE_INSTANCE_SIZE: "Standard_DC2as_v5" # confidential VM
  AZURE_REGION: "${AZURE_REGION}"
  AZURE_SUBSCRIPTION_ID: "${AZURE_SUBSCRIPTION_ID}"
  AZURE_RESOURCE_GROUP: "${AZURE_RESOURCE_GROUP}"
  AZURE_SUBNET_ID: "${AZURE_SUBNET_ID}"
  AZURE_NSG_ID: "${AZURE_NSG_ID}"
  PROXY_TIMEOUT: "15m" # helpful for debugging
EOF

cat coco-cm.yaml
Deploy the ConfigMap (after validating all fields are populated correctly):
oc apply -f coco-cm.yaml
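As a quick check, you can read the ConfigMap back and confirm that none of the Azure values are empty:

oc get cm peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml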
Generate SSH keys and create a secret:
ssh-keygen -f ./id_rsa -N ""

oc create secret generic ssh-key-secret -n openshift-sandboxed-containers-operator --from-file=id_rsa.pub=./id_rsa.pub --from-file=id_rsa=./id_rsa
Create KataConfig CRD
Deploy KataConfig through operator hub or apply the following configuration:
cat > kataconfig.yaml <<EOF
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
#  kataConfigPoolSelector:
#    matchLabels:
#      custom-kata1: test
EOF

cat kataconfig.yaml

oc apply -f kataconfig.yaml
Wait for kata-oc MachineConfigPool (MCP) to be in UPDATED state (once UPDATEDMACHINECOUNT equals MACHINECOUNT):
watch oc get mcp/kata-oc
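Once the MachineConfigPool has settled, the peer-pods runtime classes should be registered on the cluster. The check below assumes the dev-preview naming used in this post, where confidential containers workloads reference kata-remote-cc:

# kata-remote-cc should be listed among the runtime classes
oc get runtimeclass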
Running a sample workload
Here we’ll demonstrate how to run a simple application as a confidential container using Azure’s confidential VM.
Create a hello-openshift.yaml file with the following contents:
cat > hello-openshift.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift
  labels:
    app: hello-openshift
spec:
  runtimeClassName: kata-remote-cc
  containers:
    - name: hello-openshift
      image: quay.io/openshift/origin-hello-openshift
      ports:
        - containerPort: 8888
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
---
kind: Service
apiVersion: v1
metadata:
  name: hello-openshift-service
  labels:
    app: hello-openshift
spec:
  selector:
    app: hello-openshift
  ports:
    - port: 8888
EOF
Deploy by running the following command:
oc apply -f hello-openshift.yaml

oc get pod/hello-openshift
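Because each peer pod is backed by a separate Azure confidential VM, a new VM should also appear in the cluster resource group while the pod is running. The expectation that its name contains the pod name is an assumption based on the peer-pods naming convention:

# A VM for the peer pod (e.g. one whose name contains "hello-openshift") should show up
az vm list --resource-group $AZURE_RESOURCE_GROUP --query "[].name" --output tsv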
Create an OpenShift route by running the following command:
oc expose service hello-openshift-service -l app=hello-openshift

APP_URL=$(oc get routes/hello-openshift-service -o jsonpath='{.spec.host}')
Check that the application responds with the Hello OpenShift! message:
curl ${APP_URL}
Destroy the sample workload:
oc delete -f hello-openshift.yaml
Retrieving workload secrets via remote attestation
The following sections describe a demo scenario in which the application retrieves a secret from the Key Broker Service (KBS) after remote attestation. The application uses the Kubernetes initContainer pattern: an init container initiates remote attestation and retrieves the key from the KBS before the application container starts. For ease of use, we have deployed the KBS on the same OpenShift cluster.
Deploy Key Broker Service
KBS is a remote attestation entry point that integrates the Attestation Service to verify Trusted Execution Environment (TEE) evidence. Details on remote attestation and KBS can be found in the technical deep-dive blog.
Create and configure the namespace for KBS deployment:
oc new-project coco-kbs

# Allow anyuid in kbs pod
oc adm policy add-scc-to-user anyuid -z default -n coco-kbs
Create some secrets that will be sent to the application by the KBS:
openssl genpkey -algorithm ed25519 > kbs.key
openssl pkey -in kbs.key -pubout -out kbs.pem

# Create an application secret
openssl rand 128 > key.bin

# Create a secret object from the kbs.pem file.
oc create secret generic kbs-auth-public-key --from-file=kbs.pem -n coco-kbs

# Create a secret object from the user key file (key.bin).
oc create secret generic keys --from-file=key.bin
Create the local KBS deployment and service YAML definition file:
cat > kbs-deployment.yaml <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kbs
  namespace: coco-kbs
spec:
  selector:
    matchLabels:
      app: kbs
  replicas: 1
  template:
    metadata:
      labels:
        app: kbs
    spec:
      containers:
        - name: kbs
          image: quay.io/openshift_sandboxed_containers/kbs:dev-preview
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
          command:
            - /usr/local/bin/kbs
            - --socket
            - 0.0.0.0:8080
            - --auth-public-key
            - /kbs/kbs.pem
            - --insecure-http # https omitted for brevity
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: kbs-auth-public-key
              mountPath: /kbs/
            - name: keys
              mountPath: /opt/confidential-containers/kbs/repository/mysecret/workload_key
      volumes:
        - name: kbs-auth-public-key
          secret:
            secretName: kbs-auth-public-key
        - name: keys
          secret:
            secretName: keys
---
# Service to expose the KBS.
apiVersion: v1
kind: Service
metadata:
  name: kbs
  namespace: coco-kbs
spec:
  selector:
    app: kbs
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
EOF
The application secret is available in the KBS under mysecret/workload_key.
Deploy KBS:
oc apply -f kbs-deployment.yaml

# Wait for it to be ready
oc wait --for=condition=available deployment/kbs -n coco-kbs
Deploy sample workload
# Get the IP address of the KBS service and set KBS_URL
KBS_IP=$(oc get service kbs -n coco-kbs -o=jsonpath='{.spec.clusterIP}')

KBS_URL="http://${KBS_IP}:8080"

echo ${KBS_URL}
Create YAML definition of the sample workload:
cat > workload.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: simple-key-release-demo
  labels:
    app.kubernetes.io/name: simple-key-release-demo
spec:
  runtimeClassName: kata-remote-cc
  containers:
    - name: simple-key-release-demo
      image: quay.io/openshift_sandboxed_containers/simple-key-release-demo:1.0.0
      imagePullPolicy: Always
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
      env:
        - name: ENCRYPTED_FILE_URL
          value: ""
        - name: KEY_FILE_PATH
          value: "/data/key.bin"
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
  initContainers:
    - name: attestor
      image: quay.io/openshift_sandboxed_containers/attestation-init-container:1.0.0
      imagePullPolicy: Always
      env:
        - name: KBS_URL
          # value: "http://<kbs-ip>:8080"
          value: "${KBS_URL}"
        - name: KBS_RESOURCE_ID
          value: "/mysecret/workload_key/key.bin"
        - name: KEY_FILE_PATH
          value: "/data/key.bin"
      volumeMounts:
        - name: data
          mountPath: /data
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
EOF
The application uses the KBS_RESOURCE_ID variable with the value /mysecret/workload_key/key.bin to retrieve the secret from the KBS.
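To make the flow concrete, here is a minimal, hypothetical sketch of what a workload could do with the released key once the init container has written it to /data/key.bin. The download URL, cipher, and file names are illustrative assumptions, not the actual logic of the simple-key-release-demo image:

# Hypothetical example only: decrypt a file with the key released by the KBS.
# ENCRYPTED_FILE_URL and the AES-256-CBC cipher choice are assumptions for illustration.
curl -sLo /tmp/encrypted.bin "${ENCRYPTED_FILE_URL}"
openssl enc -d -aes-256-cbc -pbkdf2 -in /tmp/encrypted.bin -out /tmp/decrypted.txt -pass file:"${KEY_FILE_PATH}"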
Deploy workload.yaml:
oc project default

oc apply -f workload.yaml
This deploys the workload. The logs of the attestor init container show how the secret is retrieved from the KBS, and the application container logs show how it is consumed.
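You can follow both containers' logs with oc logs, selecting the init container and the application container by name:

# Logs of the attestation init container
oc logs pod/simple-key-release-demo -c attestor

# Logs of the application container
oc logs pod/simple-key-release-demo -c simple-key-release-demo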
Cleanup
Destroy the OpenShift cluster:
openshift-install --dir=<path-to-install-artifacts> destroy cluster
Note: Verify from the Azure console that the resource group and all its contents have been deleted, to avoid unexpected costs.
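A quick way to confirm the same from the CLI (the command returns false once the cluster resource group has been deleted):

az group exists --name $AZURE_RESOURCE_GROUP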
Conclusion
In this blog post, we looked at the confidential containers functionality made available through Red Hat OpenShift sandboxed containers. We learned how to create and upload a confidential VM image for the pod, demonstrated deploying a simple workload running as a confidential container backed by an Azure confidential VM, and showed how an application retrieves secrets using the remote attestation procedure.
For a more in-depth understanding of confidential containers running on Azure and the core principles behind them, we recommend referring to our previous blog post in this series: Confidential Containers on Azure with OpenShift: A technical deep dive.
Please note that confidential containers is currently available as a dev preview, and we encourage you to keep experimenting, exploring, and sharing your feedback with us.
About the authors
Pradipta is working in the area of confidential containers to enhance the privacy and security of container workloads running in the public cloud. He is one of the project maintainers of the CNCF confidential containers project.
Suraj Deshmukh is working on the Confidential Containers open source project for Microsoft. He has been working with Kubernetes since version 1.2. He is currently focused on integrating Kubernetes and Confidential Containers on Azure.
Jens Freimann is a Software Engineering Manager at Red Hat with a focus on OpenShift sandboxed containers and Confidential Containers. He has been with Red Hat for more than six years, during which he has made contributions to low-level virtualization features in QEMU, KVM and virtio(-net). Freimann is passionate about Confidential Computing and has a keen interest in helping organizations implement the technology. Freimann has over 15 years of experience in the tech industry and has held various technical roles throughout his career.
Magnus has received an academic education in Humanities and Computer Science. He has been working in the software industry for around 15 years. Starting out in the world of proprietary Unix he quickly learned to appreciate open source and has applied it everywhere since. He was engaged mostly in the niches of automation, virtualization and cloud computing. During his career in various industries (mobility, sustainability, tech) he exercised various contributor and leadership roles. Currently he's employed as a Software Engineer at Microsoft, working in the Azure Core organization.