This article is the third in a six-part series (see our previous blog), where we present various usage models for confidential computing, a set of technologies designed to protect data in use—for example using memory encryption—and the requirements to get the expected security and trust benefits from the technology.
In this third article, we consider the four most important use cases for confidential computing: confidential virtual machines, confidential workloads, confidential containers and confidential clusters. This will allow us to better understand the trade-offs between the various approaches, and how each of them impacts the implementation of attestation.
Usage models of confidential computing
In the existing implementations (with the notable exception of Intel SGX), confidential computing is fundamentally tied to virtualization. A trust domain corresponds to a virtual machine (VM), each domain having its own encryption keys and being isolated from all other domains, including the host the VM is running on.
There are several usage models to consume these basic building blocks:
- A confidential virtual machine (CVM) is a VM running with the additional protections provided by confidential computing technologies, and obeying security requirements to ensure that these protections are useful. Running SEV-SNP instances on Azure is an example of this use case.
- A confidential workload (CW) is a very lightweight VM using virtualization only to provide some level of isolation, but otherwise using host resources mostly in the same way as a process or container would. This use case is exemplified by libkrun, and can now be used with Podman using the “krun” runtime.
- Confidential containers (CCn) use lightweight VMs as Kubernetes pods to run containers. The primary representative in that category is the Confidential Containers project, derived from Kata Containers, which recently joined the Cloud Native Computing Foundation (CNCF).
- A confidential cluster (CCl) is a cluster of confidential virtual machines, which are considered to be part of a single trust domain. The Constellation project is one of the early offerings in that space, and provides a consistent analysis of the associated security implications.
There may be more usage models than the ones we list here, but at the time of writing, these four use cases are the main development focus of the free software community.
Confidential VMs
The most direct application of confidential computing technology is the confidential VM. This use case takes advantage of the technology without wrapping it in additional logic or semantics.
In order to get the full benefits of the additional confidentiality, however, we must secure the rest of the system so that the data that we protect through memory encryption cannot be recovered, for example from a non-encrypted disk image. Consequently, a CVM must use encrypted disks and networking. It also needs to use a secure boot path in order to guarantee that the system software running in the VM is correct and hasn't been tampered with.
This model is useful to run standard applications (as opposed to containerized ones) and independent operating systems, or when the owner of the VM can define ahead of time a complete execution environment. In such scenarios, the owner needs to build and encrypt individual disk images for the VMs that will generally contain everything necessary to run the application. Notably, various application secrets may reside on the disk image itself.
As a result, in that configuration, the primary security concern is to prevent the confidential software from running in a possibly compromised environment. We want to preclude the host from tampering with boot options, or from starting the VM with a random, and possibly malicious, firmware or kernel, which could be used to leak data.
In the cloud, one way to achieve this objective is to tie the encryption keys for the disk to a specific system software configuration. This can be done by sealing the required encryption keys in a virtual Trusted Platform Module (vTPM), so that they can only be used with a VM associated with that specific TPM, and only when the TPM-measured boot configuration matches the desired policy. Note that for this to be robust, the vTPM itself needs to be protected by the underlying confidential computing technology, the attestation of the vTPM being linked to the attestation of the confidential computing system.
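To make this mechanism concrete, here is a minimal Python sketch of the policy logic described above, assuming SHA-256 PCRs and purely illustrative component names and reference values: the disk encryption key is released only when every measured boot component matches the expected digest. In a real deployment, the (v)TPM itself enforces this sealing policy rather than application code.

```python
import hashlib

def pcr_extend(current_hex: str, event: bytes) -> str:
    """Model of a PCR extend operation: new = SHA-256(old || SHA-256(event))."""
    old = bytes.fromhex(current_hex) if current_hex else b"\x00" * 32
    return hashlib.sha256(old + hashlib.sha256(event).digest()).hexdigest()

# Illustrative reference values: what the PCRs should contain after measuring
# the approved firmware, bootloader and kernel command line.
REFERENCE_PCRS = {
    0: pcr_extend("", b"approved firmware"),
    4: pcr_extend("", b"approved bootloader"),
    8: pcr_extend("", b"approved kernel command line"),
}

def release_disk_key(measured_pcrs: dict[int, str], sealed_key: bytes) -> bytes:
    """Hand out the disk encryption key only if every measured PCR matches the
    reference policy; a real vTPM enforces this check inside the sealed object."""
    for index, expected in REFERENCE_PCRS.items():
        if measured_pcrs.get(index) != expected:
            raise PermissionError(f"PCR {index} does not match the boot policy")
    return sealed_key

# A boot chain that measures the approved components reproduces the reference
# values, so the key is released; any tampered component changes the digest.
measured = {
    0: pcr_extend("", b"approved firmware"),
    4: pcr_extend("", b"approved bootloader"),
    8: pcr_extend("", b"approved kernel command line"),
}
key = release_disk_key(measured, sealed_key=b"\x13" * 32)
```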
In this post, we will illustrate this approach by explaining how Red Hat Enterprise Linux 9.2 can take advantage of Azure SEV-SNP instances, using an Azure-provided virtual TPM. In that scenario, Microsoft provides system-facing attestation, validating the initial system measurements through the Microsoft Azure Attestation service. This unlocks keys used to decrypt the vTPM state.
On premise, or if you control the hardware directly, you may want to deploy your own attestation services. As we will see below, the way to do that largely depends on the target platform.
Confidential workloads
Confidential workloads are an innovative way to run containers using a very lightweight virtualization technique, where the guest kernel is packaged as a shared library in the host. The open source project that introduced this lightweight virtualization model is called libkrun, and the tool to run containers from standard container images is called krunvm.
This model is useful to quickly run and deploy small container-based applications, typically with a single container. The driving factors for confidential workloads are quick startup time and reduced resource usage for higher density. The current implementation also features good integration with Podman. The ecosystem includes tooling to create workload images from OCI container images and a simple attestation server. This is well described in this blog post, which contains an illustrative demonstration.
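As a rough illustration of the workflow, the following Python snippet launches a container image with Podman's krun runtime. It assumes that Podman and the libkrun-based "krun" runtime are installed on the host; the image name is a placeholder.

```python
import subprocess

# Illustrative only: assumes the libkrun-based "krun" runtime is available to
# Podman on this host; the container image below is a placeholder.
cmd = [
    "podman", "run", "--rm",
    "--runtime", "krun",                      # run the container in a lightweight VM
    "registry.example.com/myapp:latest",
]
subprocess.run(cmd, check=True)
```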
The primary concern in this scenario is to ensure that the workload is running in a Trusted Execution Environment (TEE) with a valid system software stack, and that only the workload is running there. This concern is expressed as follows by the above blog:
When intending to run a confidential workload on another system (e.g. on a machine from a cloud provider), it is reasonable for a client to inquire “How do I know this workload is actually running on a TEE, and how do I know that my workload (and ONLY my workload) are what is running inside this TEE?”. For sensitive workloads, a client would like to ensure that there is no nefarious code/data being run, as this code/data can be violating the integrity of the TEE.
In this scenario, the entire workload, including both the kernel and user space, is therefore registered for attestation, together with a valid configuration to run it. A successful attestation delivers the disk encryption key, unlocking the disk that the workload needs. In that respect, confidential workloads, while working a bit like containers, are actually closer to confidential virtual machines. A consequence of this is that you need to build each individual workload and register it with the attestation server.
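The sketch below illustrates this registration step in Python under simplifying assumptions: the file layout, digest scheme and in-memory "server" are hypothetical and do not reflect the actual libkrun/krunvm attestation protocol. The point is that the measurement covers the whole workload, kernel and user space alike, and that each built workload is registered individually.

```python
import hashlib
from pathlib import Path

def workload_digest(kernel: Path, initrd: Path, root_image: Path) -> str:
    """Measure everything that defines the workload, kernel and user space."""
    h = hashlib.sha384()
    for component in (kernel, initrd, root_image):
        h.update(component.read_bytes())
    return h.hexdigest()

# Stand-in for the attestation server's registration database: a disk key is
# only ever associated with one specific, fully built workload.
registered_workloads: dict[str, bytes] = {}

def register_workload(digest: str, disk_key: bytes) -> None:
    registered_workloads[digest] = disk_key

def release_key(reported_digest: str) -> bytes:
    """Called after the hardware attestation report has been verified."""
    try:
        return registered_workloads[reported_digest]
    except KeyError:
        raise PermissionError("unknown workload measurement") from None
```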
Confidential Containers
Confidential Containers is a sandbox project in the Cloud Native Computing Foundation (CNCF). It derives from Kata Containers, a project that uses virtualization to run containers, with one VM per pod (a pod is a Kubernetes unit that can contain one or more related containers). The two projects share most of the same developer community, and the "confidential" features are merged back into Kata Containers on a regular basis. Confidential Containers is therefore more of an "advanced development branch" of Kata Containers than a fork.
As a result, the Confidential Containers project inherits an unusually solid foundation for such a young project: a vibrant community, a number of potential industrial users, know-how and best practices, continuous integration (CI), and the collaboration of heavyweights such as Alibaba, Ant Group, IBM, Intel, Microsoft, Red Hat and many others. The project is nevertheless still in its infancy, with version 0.5 to be released in April 2023 and a release cadence of about six weeks.
One of the primary concerns for this project is to make confidential computing easy to deploy and consume at scale. This implies being able to ignore, to the largest possible extent, the details of the hosts providing the resources, including their CPU architecture, and integrating well with existing orchestration tools such as Red Hat OpenShift or Kubernetes. The project is currently developing and testing with AMD SEV, SEV-ES and SEV-SNP, Intel SGX and TDX, and IBM s390x SE. Installing the required artifacts is also made relatively simple by a Kubernetes operator that deploys the necessary software on a cluster and makes it easy to consume through the widely used Kubernetes concept of a runtime class.
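For example, once the operator has installed a confidential runtime class, scheduling a pod on it is a small change to the pod specification. The following sketch uses the official Kubernetes Python client; the runtime class name "kata-qemu-snp" and the container image are placeholders, as the classes actually installed depend on the operator version and the available hardware.

```python
from kubernetes import client, config

# Sketch only: runtime class and image names are placeholders.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="confidential-demo"),
    spec=client.V1PodSpec(
        runtime_class_name="kata-qemu-snp",   # selects the confidential runtime
        containers=[
            client.V1Container(name="app", image="registry.example.com/app:latest"),
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```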
Note that Kata Containers lost compatibility with Podman in version 2.0, which makes it less convenient to use from the command line than confidential workloads. This is being worked on.
The attestation procedure is similarly flexible, with a generic architecture that can deal with local and remote attestation, including pre-attestation as required for early versions of AMD SEV, or firmware-based attestation as is required for the IBM s390x SE platform. There is even support for disk-based “attestation” to make it possible to develop and test on platforms that do not support confidential computing.
At least in the current implementation, the attestation process only covers the execution environment, but not the workload being downloaded. This could change over time, as the project is discussing the structure of the reference provider service, appraisal policies or container metadata validation. But the current approach means that there is a bit more flexibility in the deployment of workloads, since you do not necessarily have to register each individual workload, but only each individual class of execution environment.
A distantly related project is Inclavare Containers, which is based on the Intel SGX enclave model. The Confidential Containers community is in the process of integrating the Inclavare project, so that both virtualization-based and process-based (SGX) TEEs will be supported.
Confidential clusters
Confidential clusters are the last use case we are going to discuss here. Edgeless Constellation is an open source implementation of this approach.
In that approach, entire Kubernetes clusters are built out of confidential VMs. This makes it somewhat easier to deploy an infrastructure where everything, from individual containers to the cluster’s control plane, runs inside a set of confidential VMs: when all the nodes in the cluster are confidential VMs, all the containers running within the cluster (on a confidential VM) are confidential as well. As a result, even the most complicated combinations of containers, including operators and deployments, are relatively easy to deploy.
In addition to the per-CVM (single node) attestations that make sense for the earlier scenarios, new concerns emerge, such as making sure that a non-confidential node does not join a confidential cluster, which would facilitate leaks of confidential data. For that reason, Constellation provides additional attestation services: a JoinService to verify that a new node can safely join the cluster, and a user-facing VerificationService to check that a cluster is legitimate.
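Conceptually, the join check boils down to refusing cluster credentials to any node that cannot present a valid confidential computing attestation with the expected measurements. The following Python sketch is purely illustrative, with hypothetical field names and reference values, and does not reflect Constellation's actual JoinService implementation.

```python
from dataclasses import dataclass

@dataclass
class AttestationReport:
    is_confidential_vm: bool    # e.g. a verified SEV-SNP or TDX report was presented
    launch_measurement: str     # digest of the node's firmware and system image

# Placeholder for the reference measurement the cluster was initialized with.
EXPECTED_MEASUREMENT = "a3f1e0"

def authorize_join(report: AttestationReport) -> dict:
    """Issue join credentials only to verified confidential nodes."""
    if not report.is_confidential_vm:
        raise PermissionError("node is not running in a TEE")
    if report.launch_measurement != EXPECTED_MEASUREMENT:
        raise PermissionError("node measurement does not match the cluster policy")
    return {"join_token": "issue-a-short-lived-token-here"}
```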
Conclusion
The four major usage models for confidential computing use the same underlying technology in different ways. This leads to important differences in how trust is established, what kind of proof is expected and who expects these proofs. In the next article, we will discuss the general principles of moving from a root of trust to actual trust in a system.