Trust is something we encounter every day in many different contexts, whether it’s with people, institutions or products. With trust comes vulnerability–an especially uncomfortable concept for those of us primarily concerned with security. No one wants their systems to be vulnerable, but if you really want to understand the security posture of your system, you need to understand what you are trusting and how it could expose you.
What is trust?
Zero trust is a term that's getting a lot of buzz, but it can be a bit of a misnomer. It's not so much zero trust as zero implicit trust. Nothing should be trusted simply because of its location on the network or because its developer vouches for it, a principle that is especially critical in today's heterogeneous and hybrid cloud computing environments. Instead, all interactions must be verified and all access to data must be authenticated and authorized, resulting in explicit trust. Interactions usually involve services and users, but they can also include how a system is initially designed.
For example, the mathematical equations behind encryption algorithms have been verified and proven over time by multiple third parties, not simply vouched for by a developer. Each software stack doesn't have to redo those proofs every time the algorithms are used; we trust those components because that trust has been made explicit. Because that trust is explicit, we can extend it upward and outward into the rest of the stack and architecture and use those algorithms to:
- Create transport layer security (TLS) connections between services to encrypt data as it flows across the internal network
- Encrypt data as it resides on disk
- Use cryptographic digests and hash-based message authentication code (HMAC) to create challenge/response systems (a minimal sketch follows this list)
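To make the last item concrete, here is a minimal sketch of an HMAC-based challenge/response exchange using Python's standard library. The key size, nonce size and function names are illustrative assumptions rather than a specific protocol; the point is that both sides trust a vetted primitive (HMAC-SHA-256) instead of inventing their own.

```python
import hashlib
import hmac
import secrets

# Shared secret distributed out of band; size and handling are illustrative.
SHARED_KEY = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Server side: generate an unpredictable nonce for the client to answer."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Client side: prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    """Server side: recompute the HMAC and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
assert verify(challenge, respond(challenge))
```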
It's also important to note that trust is not always permanent. As technology evolves and new exploits are found, an algorithm (or implementation) might be considered trustworthy one day and lose that trust the next. For instance, several cryptographic algorithms that were once considered unbreakable are now known to be flawed (DES, MD4, MD5, SHA-1, etc.). Knowing your explicit trust roots helps mitigate the potential harm, because you know exactly what to replace when that trust is lost.
Roots of trust
When we've anchored our trust in something solid, we can build more complicated relationships between services without relying on implicit trust. The low-level component we use to build trust between other components is called a root of trust.
Every system has roots of trust, but they often go unacknowledged. A securely designed system needs to be explicit about its roots of trust, or else a root risks being left vulnerable through oversight. In today's world of cloud computing, hybrid cloud environments and edge computing, you won't always be able to control the physical security of your systems. Roots of trust need to be hardened against physical and environmental tampering as well as attacks on your code. Some people may place total implicit trust in their cloud provider and its staff, but the more security conscious among us should base that trust on explicit evidence.
A large modern software system should have several roots of trust, including encryption algorithms, secret management systems and TLS certificate authorities. Security-sensitive use cases benefit from hardware-based roots of trust, ideally with remote attestation, because these can provide more robust tamper resistance and tamper evidence compared to software solutions. Software’s malleability is one of its primary strengths, but it’s a poor trait for security, especially as a root of trust.
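One way to make those roots explicit is simply to record them as data that can be reviewed and audited alongside the rest of the system's design. The sketch below is a minimal Python illustration; the fields and example entries are assumptions for illustration, not a standard or recommended schema.

```python
from dataclasses import dataclass

@dataclass
class TrustRoot:
    name: str         # what is being trusted
    kind: str         # e.g. "algorithm", "service", "hardware"
    evidence: str     # why the trust is explicit rather than implicit
    replacement: str  # what to do if this root loses our trust

# Illustrative inventory; real entries would come from a system's threat model.
TRUST_ROOTS = [
    TrustRoot("AES-256-GCM", "algorithm",
              "publicly analyzed and standardized",
              "migrate to another vetted authenticated cipher"),
    TrustRoot("internal TLS certificate authority", "service",
              "signing keys held offline, issuance is audited",
              "rotate the CA and reissue service certificates"),
    TrustRoot("TPM 2.0 endorsement key", "hardware",
              "certificate chain back to the manufacturer",
              "replace or re-provision affected hosts"),
]
```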
Trusted Platform Modules
One approach to extending trust up through a software stack, while protecting it against both physical and virtual threats, is a Trusted Platform Module (TPM): a cryptographic co-processor, usually implemented in hardware but sometimes virtualized, that is designed to provide certain cryptographic guarantees while resisting physical tampering. TPMs are fairly ubiquitous; they're present in many phones, routers, servers, laptops and even cloud computing offerings. A given TPM can be tied back to its manufacturer via a certificate chain, proving it's an authentic device (as long as the manufacturer protects its signing keys), and it contains an encryption key that is unique to that particular TPM. This certificate chain and the TPM's cryptographic functions allow it to enhance the security of a system through capabilities such as disk encryption, measured boot and file integrity measurements. These cryptographic guarantees can be proven remotely in real time with a tool like Keylime, or preserved for later verification with durable attestation.
A system using a TPM as a root of trust can make cryptographic guarantees about its state that other systems can build on. For instance, because we can make hardware-backed assertions about the state of a given system, such as that it hasn't been tampered with at boot or run time, we can tie its authentication and authorization to those guarantees before it accesses sensitive information. It no longer matters that we don't physically control the resource: as long as the TPM itself remains secure, we can have greater confidence that the layers we build on top of it also have a high degree of security.
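The measured-boot guarantee rests on a simple primitive: a TPM Platform Configuration Register (PCR) can only be changed by extending it, which folds a new measurement into the old value with a hash. A verifier that knows which components should have been measured can replay those measurements and compare the result against a quote signed by the TPM. The sketch below models only that extend-and-compare logic; the component names are hypothetical, and a real attestation flow (as performed by a tool like Keylime) would also verify the TPM's signature over the quote.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new_pcr = SHA-256(old_pcr || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def replay(measurements: list[bytes]) -> bytes:
    """Fold a list of measurements into a PCR value, starting from zeroes."""
    pcr = bytes(32)  # PCRs are reset to zero at boot
    for m in measurements:
        pcr = extend(pcr, m)
    return pcr

# Hypothetical golden digests for the boot chain the verifier expects.
golden = [hashlib.sha256(c).digest()
          for c in (b"firmware-1.2", b"bootloader-2.06", b"kernel-6.1")]

# In a real deployment, the reported value comes from a quote signed by the TPM.
reported_pcr = replay(golden)
assert reported_pcr == replay(golden)

# Any change anywhere in the chain produces a different PCR value.
tampered = golden[:2] + [hashlib.sha256(b"kernel-6.1-backdoored").digest()]
assert replay(tampered) != replay(golden)
```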
Trusted execution environments
While a TPM is a dedicated chip that offers specific, targeted functionality as a hardware root of trust, a Trusted Execution Environment (TEE) takes a different approach. Central processing units (CPUs) with TEE capabilities provide higher integrity guarantees for data and code, and higher confidentiality guarantees for data, most often for a specific area of system memory used for general purpose computation. When properly implemented, this is known as confidential computing: it protects data in use from unauthorized access, including access from more privileged levels of the stack such as the hypervisor or operating system, and it protects code and data from tampering. This improves the security posture of the applications or workloads running in the confidential environment, because it removes implicit trust in those lower stack levels.
Per the Confidential Computing Consortium, a TEE must be hardware-based and attestable for the computation therein to be considered confidential computing. Thus, not only is a hardware root of trust required, but attestation is a critical piece of the security guarantees provided by any TEE. This is because the attestation carries the verifiable information that allows trust decisions to be formed about the TEE.
While the format and content of a TEE’s attestation can vary based on implementation, the attestation ideally establishes a chain of trust from the hardware root–in this case, the CPU and its hardware keys–to both the TEE running on the CPU and to the CPU manufacturer. As in the case of the TPM, the TEE’s CPU should be able to be linked to its manufacturer via a certificate chain to prove authenticity. As well, signatures from the CPU’s hardware keys should be traceable, usually via intermediary keys, to the TEE instance, indicating that the TEE is running as expected. Each link, or signature, in the chain of trust from the manufacturer through the hardware root and to the TEE is auditable and verifiable in a well-formed TEE attestation, eliminating the need for implicit trust in all but the hardware root.
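As a rough illustration of that chain, the sketch below uses Ed25519 keys from the Python `cryptography` package to stand in for the manufacturer's root key, the CPU's hardware key and a TEE attestation report. Real TEE attestation formats, key types and intermediate certificates vary by vendor; only the shape of the verification (a signature over a signature, ending in a measurement check) is meant to carry over.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-ins for real keys; in practice the manufacturer key is a published root.
manufacturer_key = ed25519.Ed25519PrivateKey.generate()
cpu_key = ed25519.Ed25519PrivateKey.generate()

# Manufacturer endorses the CPU's hardware key (a simplified "certificate").
cpu_pub = cpu_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
endorsement = manufacturer_key.sign(cpu_pub)

# The CPU signs an attestation report containing the TEE's measurement.
tee_measurement = hashlib.sha256(b"tee-image-1.0").digest()
report_signature = cpu_key.sign(tee_measurement)

def verify_attestation(manufacturer_pub, cpu_pub, endorsement,
                       measurement, report_signature, expected_measurement):
    """Walk the chain: manufacturer -> CPU key -> report -> measurement."""
    try:
        manufacturer_pub.verify(endorsement, cpu_pub)      # CPU key is genuine
        cpu_public = ed25519.Ed25519PublicKey.from_public_bytes(cpu_pub)
        cpu_public.verify(report_signature, measurement)   # report is genuine
    except InvalidSignature:
        return False
    return measurement == expected_measurement             # TEE runs expected code

assert verify_attestation(manufacturer_key.public_key(), cpu_pub, endorsement,
                          tee_measurement, report_signature, tee_measurement)
```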
Software supply chains
Using a hardware root of trust is not just about protecting a running system. The same methods can be applied to software supply chain security to reduce the chances that the pipeline building your components has been compromised. Supply-chain Levels for Software Artifacts (SLSA) is a specification framework for describing the maturity of a software supply chain. SLSA level 3 requires non-falsifiable provenance, meaning there must be a cryptographic chain tying a build back to a specific source that was known to be trustworthy at the time of the build. While the specification does not require the root of trust to be based in hardware, choosing to do so provides stronger security guarantees. A compelling use case would be to use a TPM and an attestation service like Keylime to tie the machine's boot and file integrity attestations into the chain of records created for each step of the artifact's build by a provenance generation tool. This could be used alongside current open source software supply chain signing and verification tools like those provided by Sigstore.
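As a simplified picture of what a verifier does with provenance, the sketch below hashes a downloaded artifact and compares it against the digest and builder identity recorded in a provenance document. The field names loosely follow the shape of in-toto/SLSA provenance but are trimmed down for illustration; a real verifier would first check the signature over the provenance itself.

```python
import hashlib
import json

def artifact_digest(path: str) -> str:
    """SHA-256 of the artifact that was actually downloaded."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_provenance(path: str, provenance_json: str, trusted_builder: str) -> bool:
    """Check the artifact against its provenance and a builder we trust.

    (A real verifier would also verify the signature over the provenance.)
    """
    provenance = json.loads(provenance_json)
    subject_ok = any(
        s.get("digest", {}).get("sha256") == artifact_digest(path)
        for s in provenance.get("subject", [])
    )
    builder_ok = provenance.get("builder", {}).get("id") == trusted_builder
    return subject_ok and builder_ok
```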
As SLSA adoption increases, especially in the open source ecosystem, developers will be able to convert the implicit trust that many people place in packaged open source components into an explicit trust chain that can be independently verified. This can protect against several forms of attack, such as package hijacking, typosquatting and SolarWinds-style infiltrations.
Understanding where we place our trust in a system helps us understand where we are most vulnerable. Using low-level hardware devices as roots of trust, and building trust up through the system's software from there, is a great way to increase the protection of systems even when our physical access is limited. Hardware roots of trust are an under-utilized tool that every system architect should be thinking about.
As organizations move to improve their zero trust security posture, hopefully they understand that nothing should be trusted implicitly when it comes to interactions with their systems. Wherever possible, they should look for ways to verify claims themselves, even when those claims come from a trusted vendor or component.
About the authors
Michael Peters is a Principal Engineer in Emerging Technologies in Red Hat's Office of the CTO. He is a senior systems engineer and programmer with an emphasis on DevOps, Security, and Operability and is one of the current maintainers of the Keylime project. His experience in both startups and large tech companies has given him a passion for shifting security to the left and making it easier to understand and use.
Lily is a senior software engineer in Red Hat's Emerging Technologies Security team. She has primarily worked on projects related to remote attestation and confidential computing, and more recently on securing the software supply chain. Her favorite language is Rust.