Artificial intelligence (AI) workloads are transforming industries from financial services to healthcare. However, the use of AI models introduces risk around protecting models, weights, and data from malicious actors. While the industry has established robust traditional security frameworks to protect data at rest (with disk encryption, such as LUKS) and data in transit (with encrypted communication channels like TLS), a gap remains around data that's in use.
When sensitive data, such as patient medical records or proprietary AI model weights, is actively loaded into the CPU, GPU, and memory for processing, it must be decrypted. In a traditional cloud environment, this leaves the data fully exposed to compromised hypervisors, malicious cloud administrators, memory dump attacks, and the cloud provider itself.
To address the gap around protecting data in use, organizations need a strategy built on confidential computing. In this blog, we explore an infrastructure strategy featuring both Red Hat Enterprise Linux (RHEL) and Red Hat OpenShift. The approach is designed for streamlined scalability: you can start with a smaller, cost-effective RHEL-based deployment for your initial AI workloads, and then grow into a full-scale OpenShift deployment as needed, all while maintaining the same security posture and zero-trust architecture.
What is confidential computing?
Confidential computing is designed to protect sensitive data that must be decrypted for active processing (data in use). In its decrypted state, data in use is exposed to threats such as memory dump attacks and hypervisor exploits. Confidential computing shifts protection away from software-based isolation and into a specialized hardware feature called a trusted execution environment (TEE). Built on processor technologies such as Intel TDX or AMD SEV-SNP, a TEE carves out a secure, isolated enclave within the hardware where data remains encrypted even during active computation.
The three pillars of confidential computing are:
- Runtime memory encryption: All data stored within the RAM of the confidential virtual machine (VM) is actively encrypted using cryptographic keys managed strictly by the CPU. These keys are entirely inaccessible from outside the VM.
- Execution isolation: The TEE creates a physical and mathematical wall around the workload. This means that the hypervisor, the host operating system, and the cloud provider administrators cannot inspect, modify, or interfere with the code executing inside the enclave.
- Remote attestation: This is a critical validation mechanism in confidential computing. Before any sensitive data or decryption keys are released to the environment, the system performs a cryptographic "handshake." Attestation provides hardware-derived proof that the environment is genuine, correctly configured, and running untampered software. If this check fails, the data remains sealed. In the context of this blog, we use Trustee as the attestation solution (other attestation solutions are also available).
By creating a hardware-locked "box" where data can be processed without exposure, confidential computing essentially re-materializes confidentiality: it moves the protection of sensitive data from a fragile promise or compliance contract into an enforceable, hardware-backed reality.
In this article, we focus on two of Red Hat's confidential computing offerings:
- OpenShift confidential containers: Based on the upstream confidential containers project and focused on protection at the container level.
- RHEL confidential virtual machines (CVMs) for public cloud: Provides confidential computing capabilities at the VM level. In the context of this article, we focus on an Azure RHEL CVM, which is generally available.
For additional information on OpenShift confidential containers, read Learn about Confidential Containers.
For additional information on Azure RHEL CVMs, read Learn about Red Hat Confidential Virtual Machines.
The developer vs. operations security personas
Red Hat confidential computing provides strong protection without destroying productivity. Consider the challenge of connecting two fundamentally different roles working on the same product, the application developer and operations security:
- Application developer (Janine): Janine is a data scientist whose goal is to build and deploy critical AI models. For example, a RoBERTa-based de-identification model designed to hunt down and redact protected health information (PHI) from clinical notes. For the developer, speed is everything. She requires powerful cloud GPUs to test her workloads. However, she does not know anything about confidential computing, nor does she want to. She simply wants to write her code, commit her work, and see it deployed seamlessly without being bogged down by complex cryptographic setups.
- Operations security (Raj): Raj is the operations security (OpSec) lead, accountable for safeguarding the organization's valuable data and achieving a strict zero trust security posture across a hybrid cloud environment. His priority is absolute security. He needs comprehensive oversight, the ability to manage attestation, and the tools to block advanced attackers trying to steal data.
The challenge we are targeting in this architecture is how to get these two personas to collaborate on the same AI product without friction. The developer is completely shielded from the internal security mechanics, experiencing what feels like a standard "golden path" workflow. She never has to tick a Make this confidential checkbox; it just happens automatically. Meanwhile, the OpSec persona has total visibility and control, using a powerful "single pane of glass" UI dashboard. Through this dashboard, Raj can see all policies across his entire fleet, monitor whether hardware attestation has succeeded, and quickly spot visual alerts if a security violation is detected, all without slowing Janine down.
Janine is an app developer prioritizing speed and usability, and Raj is OpSec prioritizing zero trust and oversight.
Building a trust-aware DevSecOps pipeline
To connect the developer's rapid workflow with the operations team's rigid security infrastructure, we lean on a DevSecOps approach in which the security guardrails for confidential computing are baked into the pipeline by default rather than bolted on at the end. GitOps acts as the developer-facing entry point, and everything that follows (automated build, attestation, signing and reference-value registration) runs without further human intervention.
The developer experience itself begins and ends with a single Git command. Janine simply types git commit -am "Update model v1.3" && git push origin test into her IDE, and the pipeline takes it from there.
This single git push transparently triggers a CI pipeline (such as the one provided by the Red Hat Advanced Developer Suite). The pipeline builds the application inside an isolated container, generates SLSA provenance, cryptographically signs the resulting container image with tools like cosign, and registers the cryptographic fingerprint (the reference values for the build) directly with Raj's on-premise Trustee Reference Value Provider Service. From that point on, those reference values become the source of truth that Trustee uses to attest any workload running on the confidential enclave, closing the loop between what the developer pushed and what the operations team allows to run.
The DevSecOps pipeline: GitOps trigger, secure build, signing and attestation, all automated.
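To make the build-and-sign stage more concrete, the following is a minimal Tekton pipeline sketch. The pipeline, task, and parameter names (confidential-model-build, cosign-sign, and so on) are illustrative assumptions rather than the exact pipeline shipped with Red Hat Advanced Developer Suite; git-clone and buildah are standard Tekton catalog tasks.

```yaml
# Illustrative Tekton pipeline: clone, build, sign. Names are assumptions
# for this sketch, not the exact pipeline provided by the product.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: confidential-model-build        # hypothetical name
spec:
  params:
    - name: git-url
    - name: image
  workspaces:
    - name: source
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone                  # standard Tekton catalog task
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image
      runAfter: [fetch-source]
      taskRef:
        name: buildah                    # standard Tekton catalog task
      workspaces:
        - name: source
          workspace: source
      params:
        - name: IMAGE
          value: $(params.image)
    - name: sign-image
      runAfter: [build-image]
      taskRef:
        name: cosign-sign                # hypothetical task wrapping `cosign sign`
      params:
        - name: image
          value: $(params.image)
```

In a complete pipeline, a final task would also generate the SLSA provenance and register the resulting image digest and signature with the Trustee Reference Value Provider Service.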
The OpenShift solution: Sandboxed containers
The OpenShift deployment demonstrates the persona separation and advanced threat protection in action. Behind the scenes, the OpSec persona (Raj) builds and configures the entire security-centric foundation. He deploys the OpenShift staging cluster, sets up the Trustee attestation server, and configures the critical compliance policies from his dashboard. To protect the AI workload, Raj establishes a strict two-gate zero trust model:
- Gate 1 (code integrity): Only workloads bearing a trusted cryptographic signature proving they were built by Janine's official pipeline are allowed to start.
- Gate 2 (environment attestation): Workload-specific secrets are only released if the environment passes a hardware attestation check.
The Trustee attestation server enforces both gates.
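The following is a minimal sketch of how the Trustee key broker service (KBS) might be configured on Raj's trusted cluster using the KbsConfig custom resource from the Red Hat build of Trustee operator. The ConfigMap and Secret names are hypothetical, and field names can vary between operator releases, so treat this as illustrative rather than a definitive configuration.

```yaml
# Illustrative KbsConfig for the Red Hat build of Trustee operator.
# Referenced ConfigMaps/Secrets are hypothetical; check the operator
# documentation for the exact fields in your release.
apiVersion: confidentialcontainers.org/v1alpha1
kind: KbsConfig
metadata:
  name: kbsconfig-sample
  namespace: trustee-operator-system
spec:
  kbsConfigMapName: kbs-config                         # base KBS configuration
  kbsAuthSecretName: kbs-auth-public-key               # key used to authenticate admin requests
  kbsRvpsRefValuesConfigMapName: rvps-reference-values # reference values registered by the build pipeline
  kbsSecretResources:
    - workload-secrets                                 # secrets released only after successful attestation
  kbsServiceType: ClusterIP
```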
Follow the steps from the operations security persona's point of view:
- Establish the trusted cluster: Raj installs and configures the on-premises trusted cluster, including the Red Hat build of Trustee remote attester, Red Hat OpenShift Pipelines, and Red Hat OpenShift GitOps operators.
- Enforce security policies: Raj configures Trustee with reference values (through the Reference Value Provider Service, RVPS) and implements a two-gate security policy. Gate 1 is for image signature verification, and gate 2 is for conditional release of workload secrets based on hardware attestation.
- Automate standard workload creation: Raj configures the standard build pipeline to automatically fetch Janine's code, build, sign the container image with Cosign, push it to a registry, and update the deployment template.
- Automate workload hardening: Raj creates a pipeline to automatically transform the standard container by adding confidential specifications (a sketch of the resulting manifest follows this list).
- Automate multi-cloud deployment: Raj sets up the CVM pipeline to deploy the finalized, hardened manifest into RHEL CVMs on different cloud environments (Azure and AWS).
- Configure deployment and monitoring: Raj uses OpenShift GitOps to automatically deploy the hardened workload to the untrusted deployment cluster using confidential containers, and monitors overall security compliance from a custom security compliance dashboard.
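As a rough illustration of what the hardening step produces, the following is a sketch of a hardened Deployment. The workload name and image are hypothetical, and the runtime class name depends on how the cluster's confidential containers support is configured (kata-remote is common for peer-pod based setups), so verify the correct value for your environment.

```yaml
# Sketch of the "hardened" manifest: the same Deployment a standard build
# would generate, plus the confidential runtime class. Names and runtime
# class value are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phi-redaction-model                # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phi-redaction-model
  template:
    metadata:
      labels:
        app: phi-redaction-model
    spec:
      runtimeClassName: kata-remote        # runs the pod inside a confidential VM enclave
      containers:
        - name: model
          image: registry.example.com/ml/phi-redaction:v1.3   # image signed by the build pipeline
          ports:
            - containerPort: 8080
```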
Because Raj has already laid this groundwork, Janine's GitOps push effortlessly passes both gates, and her model goes live seamlessly. The following image visualizes these steps:
Raj's security operations workflow
The power of OpenShift sandboxed containers also comes into play during attacks. Imagine a malicious actor manages to deploy an altered container to the staging environment. The attacker possesses a properly signed container image, allowing them to bypass gate 1, but attempts to run this container on a standard, non-confidential node to try to observe the AI model while it is running. As the pod tries to perform remote attestation (gate 2), the hardware evidence fails to match Raj's policy. Instantly, Raj's security dashboard flashes a critical red warning: SECURITY VIOLATION DETECTED - EVENT: DENIED. The attestation fails, the key broker service firmly denies every secret, and the workload is blocked, forcing the attacker's pod into a CrashLoopBackOff state. The AI model is never unsealed, and the data remains cryptographically protected.
The following is an example of a UI Raj uses to track the attested workloads:
An example UI for tracking attestation.
The RHEL solution: Confidential virtual machines (CVMs)
When an organization is starting small or specifically requires a virtualized footprint, the same protection applies but is tailored for the RHEL environment. The core of this solution isn't just provisioning a standard RHEL confidential virtual machine (CVM); it uses RHEL confidential computing system roles and a GitOps flow to fully configure the Trustee attestation server, the attestation client, Podman, and the other relevant parts of the end-to-end flow.
Because relying on manual bootstrapping or hardcoded cloud-init scripts is fundamentally insecure and difficult to scale, this GitOps framework automatically provisions the infrastructure. It creates the necessary CVMs, leveraging an advanced TEE such as AMD SEV-SNP or Intel TDX for hardware-level memory encryption, and, vitally, invokes the dedicated confidential computing RHEL system roles. These system roles automatically install and configure the Trustee client (the attestation agent) and the required Podman environment inside the CVM without manual admin intervention.
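As a rough sketch of what that automation might look like, the following playbook applies system roles to a newly provisioned CVM. The podman role is part of the rhel-system-roles collection; the confidential computing role name, its variables, and the Trustee endpoint shown here are assumptions for illustration, so check the RHEL documentation for the role names shipped with your release.

```yaml
# Illustrative playbook for configuring a freshly provisioned RHEL CVM.
# The confidential computing role name and its variables are hypothetical.
- name: Configure confidential computing components on the CVM
  hosts: confidential_vms
  become: true
  tasks:
    - name: Install and configure Podman for the workload
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.podman

    - name: Install and configure the attestation client (Trustee agent)
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.confidential_computing   # hypothetical role name
      vars:
        trustee_url: https://trustee.example.com:8080            # assumed on-premise Trustee (KBS) endpoint
```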
Let's follow the steps from the perspective of the operations security persona (Raj):
After the CVM pipeline deploys the RHEL confidential virtual machine on cloud infrastructure (Azure or AWS), it proceeds with a secure attestation and secret retrieval sequence.
- CVM boot and component installation: The RHEL CVM is deployed on specialized, hardware-backed confidential instance types for memory encryption. The CVM pipeline invokes the RHEL system roles to install the necessary attestation components.
- Attestation report generation: As the machine boots or the workload attempts to retrieve secrets, the attestation agent collects an attestation report, which contains tamper-evident hardware and software measurements of the system.
- Remote validation: The attestation agent sends this attestation report to the remote attester (the Red Hat build of Trustee server) for validation.
- Policy enforcement and secret release: Trustee compares the received measurements against its defined reference values. If the attestation report is validated against the expected security policy, Trustee returns the necessary secrets. These secrets include the key required to automatically unlock the LUKS2-encrypted disk used for secure storage, as well as mTLS certificates.
- Secure workload execution: With the necessary keys and secrets retrieved and the secure storage mounted, the application is then securely executed on the CVM using Podman (see the sketch after this list).
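The following is a minimal sketch of a pod definition that could be started on the CVM with podman kube play once attestation has released the secrets and the encrypted volume is mounted. The names, image, and paths are hypothetical.

```yaml
# Illustrative pod definition for `podman kube play workload.yaml` on the CVM.
# Workload name, image, and paths are assumptions for this sketch.
apiVersion: v1
kind: Pod
metadata:
  name: phi-redaction-model
spec:
  containers:
    - name: model
      image: registry.example.com/ml/phi-redaction:v1.3
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: model-data
          mountPath: /var/lib/model        # backed by the LUKS2 volume unlocked with the attested key
  volumes:
    - name: model-data
      hostPath:
        path: /mnt/secure-storage          # mount point of the decrypted device
```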
Through this process, the RHEL CVM executes the AI workload only within the isolated, encrypted memory of the TEE, making the data invisible even to the public cloud administrator.
The following image visualizes these steps:
RHEL confidential virtual machine secure boot and attestation flow.
From Janine the developer's perspective, all of this is transparent. Janine isn't aware of the security configuration being done to run her workload; for her, it's just a typical GitOps flow that results in her workload running.
Demo: Native dev workflows vs. hardened SecOps monitoring
Conclusion: How these solutions come together
Whether your organization is running the streamlined, Ansible-driven RHEL CVM configuration or scaling up to full OpenShift sandboxed containers, the result is an integrated, multi-cloud pipeline. Confidential computing acts as the ultimate bridge, essentially re-materializing confidentiality by linking cloud agility with on-premise security.
A hardware-level "box" where data can be processed without exposure helps break the "digital paradox". The automated GitOps workflow remains transparent to the innovators building the AI models, while delivering uncompromising, dashboard-driven governance to the OpSec teams defending the perimeter. Ultimately, this unified Red Hat portfolio offers the developer speed, and the operation security admin gets security.
About the authors
Marcos Entenza, a.k.a Mak, works on the core Red Hat OpenShift Container Platform for hybrid and multi-cloud environments to enable customers to run Red Hat OpenShift anywhere. Mak is an experienced Product Manager passionate about building scalable infrastructures and he oversees installation, provider integration, and confidential computing on OpenShift.
Emanuele Giuseppe Esposito is a Software Engineer at Red Hat, with focus on Confidential Computing, QEMU and KVM. He joined Red Hat in 2021, right after getting a Master Degree in CS at ETH Zürich. Emanuele is passionate about the whole virtualization stack, ranging from Openshift Sandboxed Containers to low-level features in QEMU and KVM.