What is zero trust?

Zero trust is an approach to designing security architectures based on the premise that every interaction begins in an untrusted state. This contrasts with traditional architectures which may determine trustworthiness based on whether communication starts inside a firewall. More specifically, zero trust attempts to close gaps in security architectures that rely on implicit trust models and one-time authentication.

Zero trust has gained popularity because the global threat landscape has evolved, challenging long-held assumptions about the inherent trustworthiness of activities inside a network. Well-organized cybercriminals can recruit insiders, increasing the opportunity for insider threats, and continue to find new ways past the outer shell of traditional security architectures. Sophisticated hacking tools and commercialized ransomware-as-a-service platforms have also become more widely available, making it easier for new kinds of financially motivated criminals and cyber terrorists to operate. All of these threats have the potential to exfiltrate valuable data, disrupt business and commerce, and impact human life.

Given this new threat landscape, the United States federal government is under an executive order to advance toward a zero trust architecture, and many enterprises are weighing the costs and benefits of adopting this approach.

In a well-known 2010 Forrester Research report on zero trust, John Kindervag called for the common "trust but verify" approach to network security to be adapted into a "verify and never trust" strategy. Kindervag challenged the prevailing motto: "We want[ed] our network to be like an M&M, with a hard crunchy outside and a soft chewy center." For decades, enterprises had been designed this way, with a trusted or internal network (the chewy center) separated from the external world by a perimeter of firewalls and other security defenses (the crunchy outside). Individuals or endpoints within the perimeter, or connected via remote methods, got a higher level of trust than those outside the perimeter.

This "hard shell, soft center" approach to security design was arguably never ideal, but it persists today. These architectures make it easy to traverse the internal network once inside, with users, devices, data, and other resources minimally separated. Cyberattacks take advantage of this design by first gaining access to one or more internal endpoints or other assets before moving laterally across the network, exploiting weaknesses, exfiltrating controlled information, and launching further attacks.

In addition to its susceptibility to sophisticated cyber attacks, this insufficient architecture becomes strained as networks expand to include a vast number of endpoints, with users requiring remote access from more locations and to more assets with finer-grained services. The issue of trust has drawn additional attention since the COVID-19 pandemic, as workforces have become increasingly remote and workloads in cloud environments continue to grow.

To manage the vulnerabilities of this environment, enterprises are transitioning from virtual private networks (VPNs)—which permit secure access to an entire network—to a more granular Zero Trust Network Access (ZTNA), which segments access and limits user permissions to specific applications and services. This microsegmentation approach can help limit attackers’ lateral movement, reduce attack surfaces, and contain the impact of data breaches, but adopting a zero trust model requires organizations to apply a "verify and never trust" philosophy in every aspect of their security architecture.
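The difference between network-wide VPN access and microsegmented ZTNA access can be illustrated with a short sketch. This is a deliberately simplified model, not a real product API: access is modeled as an explicit allowlist of (user, application) pairs, so nothing grants reachability by default.

```python
# Hypothetical sketch of microsegmentation: instead of granting a user
# access to the whole network (the VPN model), each (user, application)
# pair must be individually permitted. Names are illustrative.
ALLOWED_SEGMENTS = {
    ("alice", "payroll-app"),
    ("alice", "hr-portal"),
    ("bob", "build-server"),
}

def can_access(user: str, application: str) -> bool:
    """Default deny: access exists only if it was explicitly granted."""
    return (user, application) in ALLOWED_SEGMENTS

# alice can reach payroll-app, but nothing grants her the build-server,
# so lateral movement toward it is denied at the access layer.
```

Because a compromised account only reaches the applications on its own allowlist, an attacker's lateral movement is bounded by the segment rather than by the network perimeter.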

The foundations of zero trust security are de-perimeterization and least privilege access, which protect sensitive data, assets, and services from vulnerabilities inherent in network perimeter and implicit trust architectures.

De-perimeterization: Enterprises are no longer defined by geographic perimeters. Users operate from a variety of locations and endpoints, accessing resources from one or more operational environments, including cloud and Software-as-a-Service (SaaS) solutions, often not owned or controlled by the enterprise IT organization. De-perimeterization addresses this decoupling of trust from location.

Least privilege: When interactions cannot inherit trust based on name or location, every interaction is suspect. Deciding whether to allow any interaction becomes a business decision that must take into account the benefits and risks of doing so. Least privilege refers to the practice of restricting access to only those resources absolutely necessary, i.e., the "least" privileges necessary for an activity. Each request for access to a resource needs to be dynamically validated using identity management and risk-based, context-aware access controls.
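A minimal sketch of that dynamic, context-aware validation might look like the following. All names and fields here are illustrative assumptions, not a real policy engine: each subject is mapped to the smallest set of (action, resource) pairs it needs, and even a permitted action is refused when request context (here, device health) fails.

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: str
    action: str        # e.g. "read", "write"
    resource: str
    device_healthy: bool  # stand-in for richer context signals

# Least privilege: each subject gets only the permissions its role
# strictly needs, nothing more. (Hypothetical subjects and resources.)
LEAST_PRIVILEGES = {
    "report-service": {("read", "sales-db")},
    "etl-job": {("read", "sales-db"), ("write", "warehouse")},
}

def authorize(req: Request) -> bool:
    # Context-aware control: a failing device health check denies the
    # request even when the action itself would be permitted.
    if not req.device_healthy:
        return False
    return (req.action, req.resource) in LEAST_PRIVILEGES.get(req.subject, set())
```

Note that the decision is evaluated per request, not once at login: the same subject can be allowed one moment and denied the next as its context changes.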

Implementing a zero trust architecture does not require a comprehensive replacement of existing networks or a massive acquisition of new technologies. Instead, the framework should strengthen other existing security practices and tools. Many organizations already have the necessary foundation for a zero trust architecture and follow practices that support it in their day-to-day operations.

For instance, these critical components needed for successful adoption of a zero trust strategy may already be present as part of a conventional security architecture:

  • identity and access management

  • authorization

  • automated policy decisions

  • patching of resources

  • continuous monitoring, with transactions logged and analyzed

  • automation of repeatable activities that are prone to human error

  • behavioral analytics and threat intelligence used to improve asset security

In fact, zero trust is already being practiced today at various scales and across a wide array of environments. Its core tenets primarily require the application of existing security practices, along with organizational and process controls. Federal organizations like the US Department of Defense, the Department of Homeland Security, and the Intelligence Community—where security is a central cultural pillar—have already made significant progress toward implementing a zero trust security model.

To meet business demands and accelerate digital transformation efforts, many organizations rely on open source components and third-party tools to develop software. However, bad actors seeking to infiltrate the software supply chain can compromise the security of open source components and dependencies early in the development lifecycle, leading to cyber attacks and delayed application releases. A zero trust approach is critical to securing the software supply chain and ensuring that issues are detected early on when they are less expensive to fix.

Organizations can minimize the risk of supply chain attacks by using secure open source code, building security into container images, strengthening the CI/CD pipeline, and monitoring applications at runtime.
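One of the simplest supply-chain controls implied above is verifying that a third-party artifact is exactly what the upstream project published before it enters a build. The sketch below shows only this one control, using a pinned SHA-256 checksum; real pipelines layer it with signatures, SBOMs, and provenance attestations, and the artifact here is a made-up example.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Reject the artifact unless its digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The expected digest would normally be pinned in the repository or a
# lockfile, recorded when the dependency was first vetted.
artifact = b"example dependency contents"
pinned = hashlib.sha256(artifact).hexdigest()

assert verify_artifact(artifact, pinned)           # untouched artifact passes
assert not verify_artifact(b"tampered bytes", pinned)  # any change is rejected
```

The zero trust framing is that the build system does not trust the artifact because of where it came from; it trusts it only after verification succeeds.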

Zero trust as a security model is often described in abstract terms, in contrast to more formalized access control models such as Bell-LaPadula. Different groups and standards bodies espouse different sets of components. A typical set might be:

  • Single strong source of identity for users and non-person entities (NPEs)

  • User and machine authentication

  • Additional context, such as policy compliance and device health

  • Authorization policies to access an application or resource

  • Access control policies within an app

These components are largely focused on implementing identity-based access policies with a default of "deny all" and "allow by exception."
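The "deny all, allow by exception" default can be sketched as a decision function that returns deny unless some explicit rule matches. The rule fields loosely mirror the components listed above (identity, authentication state, device context, target resource); the rules and names are hypothetical.

```python
# Hypothetical allow-by-exception rules. Each tuple is
# (identity, is_authenticated, device_compliant, resource).
ALLOW_RULES = [
    ("ci-runner", True, True, "artifact-store"),
    ("dba-team",  True, True, "orders-db"),
]

def decide(identity: str, authenticated: bool,
           device_compliant: bool, resource: str) -> str:
    """Return "allow" only for an exact match against an explicit rule."""
    for rule in ALLOW_RULES:
        if rule == (identity, authenticated, device_compliant, resource):
            return "allow"
    # Default posture: anything not explicitly allowed is denied,
    # including unknown identities and non-compliant devices.
    return "deny"
```

The important property is the shape of the logic, not the rule format: there is no code path that grants access based on network location or any other implicit signal.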

Trust boundary

A trust boundary is any logical separation between components where the subjects participating in an interaction change their trust status, typically between the two states of "trusted" and "untrusted." Generally the transition from untrusted to trusted requires two things:

  • authentication: verification and/or validation of the identity of the subjects.
  • authorization: verification and/or validation of the right to and need to access an asset (data, systems, or other).

In order to adhere to zero trust principles, trust boundaries must be kept as small as possible—by definition, within the boundary, subjects are trusted and access controls may be omitted, bypassed, or otherwise limited. Since the authorization should be for only specific business functions, any boundary that allows access to other functions should be narrowed.

Not all security boundaries in a system architecture need to meet the criteria of a proper zero trust boundary. These ordinary boundaries—such as filtering unwanted IP addresses, allowing only certain protocols onto a network, or limiting social media use—can coexist with zero trust and still play a role in a security strategy. The critical difference when adopting zero trust, however, is that ordinary boundaries are not part of calculating trust, as they might be in traditional network architectures. Only boundaries that meet zero trust principles should play a role in calculating trust.

Zero trust requires the separation between distinct subjects to always be maintained: that is to say, there is always a trust boundary between any two subjects and thus every interaction requires multi-factor authentication (MFA) and direct authorization. There is no implicit trust by virtue of two subjects being on the same network (a very common scenario), nor being available in the same physical location, nor part of the same line of business or integrated system.

A zero trust security model works by enforcing these trust boundaries. Typically this is done by interposing an enforcement point between all potential interactions with all resources. As these interactions change over time, so do the identities, resource states, and other aspects of a system. This continuous change requires an equally ongoing assessment and monitoring of identities and resources, and adaptive enforcement of authentication and authorization.
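The enforcement point and continuous-assessment ideas in this paragraph can be sketched as a wrapper that sits between callers and a resource and re-validates a short-lived credential on every interaction, rather than trusting a one-time login. The token format, TTL, and function names are illustrative assumptions.

```python
import time

TOKEN_TTL = 300  # seconds a credential stays valid before re-verification

def enforcement_point(resource_fn):
    """Interpose a check between every caller and the resource."""
    def guarded(token, *args, **kwargs):
        issued_at, subject = token  # hypothetical (timestamp, identity) token
        # Continuous assessment: stale credentials are rejected even if
        # the subject authenticated successfully in the past.
        if time.time() - issued_at > TOKEN_TTL:
            raise PermissionError("credential expired; re-authenticate")
        return resource_fn(subject, *args, **kwargs)
    return guarded

@enforcement_point
def read_record(subject: str, record_id: str) -> str:
    # Stand-in for a protected resource access.
    return f"{subject} read {record_id}"
```

In a production system the check would consult identity, device posture, and policy on each call; the point of the sketch is that no call reaches the resource without passing through the enforcement point.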

There are still many areas where these basics are simply too constrained to implement. Continued use of legacy technology, immature business processes, or the low priority of security as an essential business function are all common challenges.

Zero trust often requires a change of mindset for both leadership and security professionals. Leaders need to evaluate the risk of maintaining existing, outdated security architectures. IT and operational technology (OT) professionals need to recognize where they can take advantage of existing investments to reduce the cost of implementing zero trust, and where to prioritize new investments. However, it is a reality that some protocols and devices will never support zero trust, in which case decisions have to be made about whether to replace or maintain them. Moreover, if certain systems can't fully embrace a zero trust approach, OT professionals must ask themselves what the mitigating controls are, and whether alternative security controls can be applied to further reduce their exposure.

The move to "deny by default" or "always verify," the basic premise of zero trust, requires commitment from teams to both implement it and maintain it over time, while ensuring no part of the organization attempts to work around the zero trust security architecture by creating "shadow IT" offerings.

Red Hat can help get you started with zero trust adoption. Initially, enterprises must understand and be committed to implementing zero trust. General cybersecurity awareness is an important prerequisite for getting stakeholders on board—they need to understand the nature of current threat environments and how existing security practices are important, yet incomplete, without zero trust principles.

Built with comprehensive security in mind, tools like Red Hat Service Interconnect protect networks and routers from external access, preventing security risks such as lateral attacks, malware infections, and data exfiltration.

Red Hat also offers options for security training and education through customized experiences delivered through Red Hat Open Innovation Labs or other specialized Red Hat Services engagements.
