Red Hat is excited to announce the release of Red Hat OpenShift sandboxed containers 1.12 and Red Hat build of Trustee 1.1, marking a major leap forward in our confidential computing journey. These releases graduate confidential containers on bare metal from Technology Preview to General Availability (GA), delivering production-ready, hardware-based memory encryption and attestation for on-premise and hybrid cloud infrastructure.

We are also introducing Technology Preview support for confidential containers with NVIDIA Confidential Computing, paving the way for digital sovereignty and hardened environments for AI and machine learning workloads at scale.

Together, these milestones give organizations a holistic approach to protecting their most sensitive workloads, wherever they are deployed. This was already possible in the cloud and is now also available on-premise, with a confidential computing platform built on Red Hat OpenShift that provides the consistency and support enterprises expect.

OpenShift sandboxed containers 1.12: Enterprise-grade confidential computing everywhere


OpenShift sandboxed containers 1.12 builds on the strong foundation established in previous releases, extending confidential computing capabilities to bare-metal environments and confidential GPU-accelerated workloads. It further hardens the platform's security posture and usability across cloud and on-premises deployments.

Feature highlights include:

  • Persistent volume support with encrypted block storage for confidential workloads
  • Sealed secrets for protected provisioning of sensitive data to confidential workloads
  • Automated hardware node discovery and RuntimeClass management for NVIDIA accelerated computing and confidential GPU workloads
  • Assisted installer enhancements for simplified confidential container deployment
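To make the sealed secrets feature concrete, the sketch below shows how a confidential workload might consume one: the Kubernetes Secret holds a sealed reference rather than plaintext, and the value is resolved inside the TEE only after successful attestation against Trustee. The secret name, image, `kbs://` path, and the sealed-envelope string are illustrative assumptions, not the literal wire format; consult the product documentation for the exact syntax.

```yaml
# Illustrative sketch only: a sealed secret for a confidential workload.
# The stored value is a sealed reference; the guest resolves it against
# Trustee after attestation, so the plaintext never reaches etcd or
# cluster administrators.
apiVersion: v1
kind: Secret
metadata:
  name: sealed-db-password          # name is a placeholder
type: Opaque
stringData:
  # "sealed." prefix plus an envelope whose payload points at a Trustee
  # resource (e.g. kbs:///default/db/password). Format simplified here.
  password: sealed.fakejwsheader.fakepayload.fakesignature
---
# The pod consumes the secret as usual; unsealing happens inside the TEE.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-app
spec:
  runtimeClassName: kata-cc         # confidential containers runtime class
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: sealed-db-password
          key: password
```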

Confidential containers on bare metal: Now generally available

One of the capabilities that our customers and partners have requested most often is the ability to run confidential containers directly on their own physical infrastructure. With OpenShift sandboxed containers 1.12, confidential containers on bare metal graduates from Technology Preview to General Availability, providing full production support with enterprise-grade reliability and Red Hat service-level agreement (SLA) commitments.

This GA release supports the following trusted execution environment (TEE) hardware:

  • Intel Trust Domain Extensions (TDX): On compatible Intel bare-metal hardware
  • AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP): On compatible AMD hardware
  • IBM Secure Execution for Linux (SEL): Telum processor support for IBM LinuxONE environments

The GA release delivers the automation, stability, and supportability that production environments demand. The OpenShift sandboxed containers operator now automatically handles the full lifecycle of the confidential computing stack on bare-metal nodes:

  1. Detecting TEE hardware: The operator automatically detects and labels nodes for AMD SEV-SNP, Intel TDX, and IBM SEL, eliminating the need for manual hardware discovery.
  2. Creating runtimes: It dynamically provisions the kata-cc RuntimeClass, making confidential workloads immediately schedulable on TEE-capable nodes.
  3. Configuring the host: It manages CRI-O configuration via MachineConfigs to activate the runtime class, ensuring consistent and reproducible cluster configuration.
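With those three steps handled by the operator, scheduling a confidential workload reduces to referencing the `kata-cc` runtime class in the pod spec. A minimal sketch, assuming a placeholder image (the scheduler places the pod on a TEE-capable node via the RuntimeClass the operator created):

```yaml
# Minimal confidential workload sketch; image and sizing are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-workload
spec:
  # Provisioned automatically by the OpenShift sandboxed containers operator;
  # selects a confidential VM (Kata + TEE) instead of a regular runc sandbox.
  runtimeClassName: kata-cc
  containers:
  - name: app
    image: registry.example.com/sensitive-app:latest   # placeholder image
    resources:
      limits:
        memory: "2Gi"
        cpu: "1"
```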

Enterprise teams working in regulated industries, government environments, or organizations with strict data sovereignty requirements can now deploy workloads with hardware-enforced memory encryption and attestation, protecting data in use from infrastructure administrators, co-tenants, and other untrusted parties, all fully supported under Red Hat's enterprise SLA. Key non-functional requirements now met at GA include security hardening via hardware-enforced memory encryption and secure boot chains, operator-based lifecycle management for automated upgrades and configuration, and full integration with Red Hat OpenShift role-based access control (RBAC), network policy, and pod security standards.

Compliance teams can now demonstrate that workloads meet regulatory requirements for data-in-use protection, including those in the GDPR, HIPAA, and PCI-DSS frameworks, on their own bare-metal infrastructure.

For detailed information on this topic, see our documentation and the earlier blog post “Introducing confidential containers on bare metal.”

Confidential containers for AI workloads with confidential GPU accelerators: Technology Preview

Artificial intelligence is transforming every industry. Confidential containers already provide TEE protection for data in use at the CPU level. Through a collaboration between NVIDIA and Red Hat, OpenShift sandboxed containers 1.12 now introduces Technology Preview support for confidential containers with confidential GPU accelerators, extending the hardware-based TEE protections that confidential containers provide to GPU-accelerated AI workloads.

How it works

This Technology Preview integrates NVIDIA Confidential Computing with the Red Hat OpenShift confidential containers stack. The solution builds on the existing confidential containers TEE architecture, extending TEE protections to GPU memory and computation, and integrating with the OpenShift NVIDIA GPU Operator for resource management.

The NVIDIA Hopper GPU is the primary supported platform, with full Confidential Computing capability.

Key capabilities delivered in this Technology Preview:

  • GPU memory encryption during computation: Data and model weights are hardware-encrypted in GPU memory throughout processing, extending the TEE to cover both CPU and GPU and protecting against host-level inspection
  • GPU TEE attestation: Trustee attestation, backed by the NVIDIA Remote Attestation Service (NRAS), verifies the GPU as part of TEE integrity before workload deployment, ensuring that the computation environment is trusted
  • Integration with NVIDIA GPU Operator: GPU resource management via the standard NVIDIA GPU Operator, enabling confidential GPU workloads alongside standard GPU workloads
  • Support for common AI/ML frameworks: Compatible with TensorFlow, PyTorch, and NVIDIA CUDA-based applications, allowing data scientists to protect existing workloads without code changes
  • RuntimeClass configuration: Dedicated RuntimeClass for GPU-enabled confidential containers, making it easy to declare workload-level confidentiality requirements
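To give a feel for the developer experience, the sketch below shows what a confidential GPU pod might look like. The runtime class name `kata-cc-gpu` is a hypothetical placeholder (the blog notes a dedicated RuntimeClass exists but does not name it), and the image is an example CUDA-based container; check the Technology Preview documentation for the actual names.

```yaml
# Illustrative sketch of a confidential GPU workload.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-inference
spec:
  # Hypothetical name for the dedicated GPU-enabled confidential
  # RuntimeClass; the actual name is defined by the operator.
  runtimeClassName: kata-cc-gpu
  containers:
  - name: inference
    image: nvcr.io/nvidia/pytorch:24.02-py3   # example CUDA-based image
    resources:
      limits:
        nvidia.com/gpu: 1    # resource advertised by the NVIDIA GPU Operator
```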

This feature carries a Technology Preview support level with clearly documented limitations and dedicated feedback channels. We encourage early adopters, including AI/ML engineers, data scientists, security engineers, and platform administrators managing GPU workloads, to explore this capability and share feedback that will shape the roadmap toward general availability.

Red Hat build of Trustee 1.1: Extending attestation across environments

Red Hat build of Trustee 1.1 builds on the solid enterprise-grade remote attestation foundation established in version 1.0, delivering incremental enhancements that further simplify deployment, improve operational visibility, and extend coverage of the growing range of TEE hardware across bare-metal and cloud environments.

Red Hat build of Trustee 1.1 continues its role as the cornerstone for policy-driven secret management in confidential computing environments, ensuring that secrets are released only to hardware-attested workloads and never exposed in plain text to cluster administrators. Key attestation and secret management workflows are now validated across the expanded hardware matrix introduced with OpenShift sandboxed containers 1.12, including Intel TDX, AMD SEV-SNP, and IBM SEL bare-metal deployments at GA, as well as NVIDIA Hopper confidential GPU environments in Technology Preview.

Feature highlights include:

  • Support for NRAS through Trustee for Confidential GPU workloads
  • Enhanced observability and troubleshooting capabilities with automatic Trustee log collection and metrics support
  • Disconnected environment support for air-gapped Trustee deployments with AMD SEV-SNP TEE
  • Simplified user experience for Trustee configuration
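Trustee deployment remains declarative: the operator is driven by a KbsConfig custom resource. The sketch below is illustrative; the field names follow upstream trustee-operator conventions, and the referenced ConfigMap and Secret names are placeholders that would be created beforehand.

```yaml
# Illustrative KbsConfig sketch for a Trustee deployment; all referenced
# ConfigMap/Secret names are placeholders.
apiVersion: confidentialcontainers.org/v1alpha1
kind: KbsConfig
metadata:
  name: kbsconfig
  namespace: trustee-operator-system
spec:
  kbsConfigMapName: kbs-config-cm           # main KBS configuration
  kbsAuthSecretName: kbs-auth-public-key    # admin authentication key
  kbsDeploymentType: AllInOneDeployment     # KBS, AS, and RVPS in one pod
  kbsRvpsRefValuesConfigMapName: rvps-reference-values
  kbsSecretResources:
  - kbs-secret                              # secrets served only to attested clients
```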

Real-world impact: Protecting the workloads that matter most

The value of confidential containers is already being proven in high-stakes production environments. DBS Bank, Singapore’s largest bank, replatformed its digital asset custodian operations on Red Hat OpenShift using confidential containers. This world-first deployment delivered a scalable, security-focused foundation for digital asset services while reducing operational risk. The implementation earned DBS Bank the AI & Emerging Tech category win at the Red Hat APAC Innovation Awards 2025.

As Ang Li Khim, Group Head of DBS Bank Institutional Banking Group Technology, noted, “Our collaboration with Red Hat on the production deployment of confidential containers on our digital assets infrastructure has enabled us to innovate at greater speed and scale, providing secure and resilient services to our customers.”

With bare-metal GA now available, organizations across financial services, healthcare, defense, and other regulated industries gain a new deployment option that combines the performance and control of on-premise infrastructure with the verifiable hardware isolation of confidential computing.

See it in action: Protecting your model IP end-to-end

Proprietary model vendors invest years and significant resources into developing differentiated AI capabilities. Distributing those models to enterprise customers shouldn't mean giving up control of that intellectual property. This demo illustrates how a model owner can distribute encrypted weights to an on-premise inference environment operated by a third party, with cryptographic guarantees that decryption keys are released only to a verified, tamper-resistant TEE. The operator never has to be trusted with the keys: confidentiality is verifiable at every step.

Get started with confidential containers on Red Hat OpenShift today

Your most sensitive workloads, whether running on-premise on bare metal or in confidential GPU-accelerated AI pipelines, deserve hardware-backed protection that keeps data encrypted and verifiably isolated even from infrastructure administrators.

Get started through the Red Hat Hybrid Cloud Console and begin protecting your most sensitive workloads with Red Hat OpenShift and confidential containers today.

Documentation: OpenShift sandboxed containers documentation

Try it: Red Hat Hybrid Cloud Console

Related blog series

A blog series on Confidential Containers


About the authors

Marcos Entenza, a.k.a Mak, works on the core Red Hat OpenShift Container Platform for hybrid and multi-cloud environments to enable customers to run Red Hat OpenShift anywhere. Mak is an experienced Product Manager passionate about building scalable infrastructures and he oversees installation, provider integration, and confidential computing on OpenShift.

Jens Freimann is a Software Engineering Manager at Red Hat with a focus on OpenShift sandboxed containers and Confidential Containers. He has been with Red Hat for more than six years, during which he has made contributions to low-level virtualization features in QEMU, KVM and virtio(-net). Freimann is passionate about Confidential Computing and has a keen interest in helping organizations implement the technology. Freimann has over 15 years of experience in the tech industry and has held various technical roles throughout his career.

Renjish Kumar is the product owner of OpenShift sandboxed containers at Red Hat and brings a blend of technology and business experience, with over 26 years working with some of the leading global system integrators, product vendors, early-stage startups, and research institutes across geographies and industry sectors. For the last 10+ years, his focus has been on accelerating customers' digital transformation journeys through the adoption of open source, cloud-native platforms, security, and AI.
