
Advantages of Kubernetes-native security

There are 2 main approaches to container security: container-centric and Kubernetes-native. 

Container-centric platforms operate at the container level, focused on securing container images and the container runtime. These tools provide controls at the container level itself, using techniques such as inline proxies or shims to control cross-container communications.

Kubernetes-native security operates at the Kubernetes layer. It derives context from Kubernetes and pushes policies into Kubernetes for Kubernetes to enforce.

Kubernetes-native security relies on deep integrations with Kubernetes to pull in context and tap into the native controls of Kubernetes. This architecture improves security in 2 key ways: by providing rich context and insights, and by detecting Kubernetes-specific threats.

Kubernetes-native security

Cloud-native technologies create new security challenges, as well as opportunities to enhance existing security strategies.

Kubernetes-native security is based on the principle that security is implemented most effectively when it is aligned with the system managing containerized applications. 

A security platform must exhibit the following characteristics to be considered Kubernetes-native:

  • Directly integrate with the Kubernetes API server to gain firsthand visibility into Kubernetes workloads and infrastructure
  • Assess vulnerabilities in Kubernetes software itself
  • Base its security functionality, including policy management, on resources within the Kubernetes object model, including deployments, namespaces, services, pods, and others
  • Analyze declarative data from Kubernetes-specific artifacts (e.g., workload manifests) and configurations
  • Use built-in Kubernetes security features to handle enforcement whenever possible for greater automation, scalability, and reliability
  • Deploy and run as a Kubernetes application, including integrations and support for common tools in cloud-native toolchains

Kubernetes-native security provides visibility into the configuration not just of your containers but also your Kubernetes deployment. 

It’s also important to understand how, or if, your workloads are isolated. Kubernetes, by default, allows all deployments to talk to all other deployments, within and beyond their namespaces. Deep visibility into the network policy settings, preferably in a visual format rather than the raw text of a YAML file, will highlight which workloads are not isolated.
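To make the default-allow behavior concrete, here is a minimal sketch of a default-deny NetworkPolicy. The namespace name is hypothetical; the empty pod selector and the absence of ingress rules are what make it deny all inbound traffic.

```yaml
# Hypothetical example: deny all ingress traffic to pods in the
# "payments" namespace until explicit policies allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress          # Ingress listed with no rules, so all ingress is denied
```

Applying a policy like this per namespace flips the default from allow-all to deny-all, after which specific allow policies can open only the paths an application needs.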

To understand your overall security posture, you must ensure that Kubernetes configurations such as role permissions, access to secrets, allowed network traffic, and the settings on the control plane components are locked down, follow best practices, and are scoped to the least privilege your applications need to run.
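As a sketch of what least-privilege scoping looks like in practice, the following Role and RoleBinding grant a service account read-only access to a single named secret, nothing more. All names (namespace, secret, service account) are hypothetical.

```yaml
# Hypothetical example: a namespaced Role scoped to the minimum
# permissions an app might need: read access to one named secret.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-config
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-config"]   # restrict to a single named secret
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-app-config-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-app
    namespace: payments
roleRef:
  kind: Role
  name: read-app-config
  apiGroup: rbac.authorization.k8s.io
```

A namespaced Role, unlike a ClusterRole, cannot grant permissions beyond its own namespace, which is one simple way to keep privilege scoped.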

Just as with other compute resources, many organizations choose to run Kubernetes in the cloud. You have multiple options for how to run Kubernetes in the cloud:

  • Self-managed Kubernetes
  • Commercial distribution of Kubernetes
  • Managed Kubernetes service

Whichever model you choose, you and the cloud provider "share" responsibility for securing the deployment. While the typical shared responsibility model applies with Kubernetes—especially with managed Kubernetes services—where the line falls in terms of responsibility for security can sometimes feel confusing.

With managed Kubernetes services, the cloud provider manages the Kubernetes control plane, which includes the Kubernetes components that control the cluster, along with data about the cluster’s state and configuration.

Managed services typically include setting up the control plane and making those control plane nodes redundant, often by running them in different regions, so that an outage in part of the cloud provider’s infrastructure doesn’t take down the cluster.

Typically, the cloud providers:

  • Will keep Kubernetes up to date
  • Will keep the control plane patched
  • May provide patching of node OS—often depends on your choice of OS
  • Will often offer container-optimized OS images for nodes
  • Will sometimes include vulnerability scanners, but you must create the policy, such as using an admission controller to allow or deny deployments based on scanning results

The customer is always responsible for securing the Kubernetes workload, including these security aspects:

  • Container images: their source, contents, and vulnerabilities
  • Deployments: network services, storage, privileges
  • Configuration management: roles, groups, role bindings, service accounts
  • Application: secrets management, labels, annotations
  • Network segmentation: network policies in the cluster
  • Runtime: threat detection and incident response

Kubernetes-native security platforms provide several key advantages. 

Increased protection 

Kubernetes-native security provides richer insights by tying into Kubernetes’ declarative data to discover vulnerabilities in Kubernetes as well as containers. 

Greater operational efficiency 

Using the same framework for infrastructure management and security lowers the Kubernetes learning curve, and Kubernetes context enables faster threat detection and prioritized risk assessments.

Reduced operational risk 

Tapping into the native controls of Kubernetes ensures security has the pace and scalability of Kubernetes. Having policies embedded in Kubernetes means there’s no conflict between external controls and the orchestrator.

Kubernetes-native security helps reduce operational issues that stem from inconsistent configurations, lack of coordination, and user errors.

Given the learning curve most users face with Kubernetes, it’s easy to make mistakes: granting elevated privileges through Kubernetes role-based access control (RBAC), such as giving a user or service account full cluster administrative permissions, or unnecessarily exposing Kubernetes secrets by allowing deployments to pull secrets they don’t need.
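The first of those mistakes can be sketched in a few lines. This hypothetical ClusterRoleBinding attaches the built-in cluster-admin role to a workload’s service account, granting it full control over the entire cluster:

```yaml
# Hypothetical example of the over-permissioning described above:
# binding a service account to cluster-admin is a misconfiguration
# a Kubernetes-native platform should flag.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: app-admin-binding
subjects:
  - kind: ServiceAccount
    name: default              # the namespace's default service account
    namespace: payments
roleRef:
  kind: ClusterRole
  name: cluster-admin          # far broader than any single app needs
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is a plain Kubernetes object, a platform that reads the Kubernetes API can detect it the moment it is created.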

Kubernetes-native security platforms can identify these misconfigurations automatically and continuously.

Embedding security controls directly in Kubernetes also removes the risk of having separate control software that, in the event of a failure, would either fail open and allow all traffic with no security enabled, or fail closed and break all application traffic.

Having the Kubernetes orchestrator enforce policy controls means security immediately gains all the scalability of Kubernetes itself as well as the range of policy enforcement options it includes. 

In contrast, using inline proxies or shims for enforcement introduces single points of failure, scalability challenges, and performance limitations.

With Kubernetes, you can, for example, apply network policies to segment your traffic, use admission controllers to apply policies to requests going to the Kubernetes API server, use secrets for storing sensitive credentials, and apply Role-based Access Control (RBAC) to authorize certain capabilities for certain users and service accounts.
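As an illustration of the secrets mechanism mentioned above, this sketch stores a credential as a Kubernetes Secret and exposes it to a container as an environment variable rather than baking it into the image. The secret value, pod name, and image are hypothetical placeholders.

```yaml
# Hypothetical example: a credential stored as a Secret and
# consumed by one container via an environment variable.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: "replace-me"       # placeholder value for illustration
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```

Combined with RBAC rules that restrict which service accounts can read which secrets, this keeps credentials out of images and manifests checked into source control.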

You can also use additional standardized tools, such as network plugins that adhere to the Container Network Interface (CNI) in conjunction with the Kubernetes-native security platform and change those additional tools as needed.

By providing a single, unified platform for provisioning and operating infrastructure services, Kubernetes streamlines and unifies workflows across application development and operations teams. 

That same consolidated approach, where everyone is working off a common source of truth and using the same infrastructure, can extend to security as well when you deploy a Kubernetes-native security platform. 

This approach saves time and money by shortening the learning curve and enabling faster analysis and remediation.

When DevOps and security teams are using different tools, it’s easy for conflicts to arise in how they’re configured. 

DevOps may specify a Kubernetes network policy allowing traffic between 2 pods, and security could introduce a control via separate control software that blocks that traffic.

Looking at the settings in Kubernetes, DevOps would see that traffic should be flowing and the application should be working, yet they could have no idea why the app is failing, because they cannot see the controls exerted by the separate control software.

When DevOps and security teams are using the same constructs to build and ship containerized apps as well as to secure them, they have fewer interfaces, tools, and models to learn. 

DevOps uses Kubernetes manifest files to define the resources a given application needs. Using those same assets to glean security context and apply policies reduces complexity and improves the security outcome. 
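To illustrate the shared-asset point, here is a sketch of a Deployment manifest: the same file DevOps uses to ship the app also carries the context a security tool reads, such as namespace, labels, image, and privilege settings. All names and the image are hypothetical.

```yaml
# Hypothetical example: one manifest serves as both the deployment
# definition and the source of security context.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  namespace: payments
  labels:
    app: payments-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: api
          image: registry.example.com/payments-api:1.4.2   # hypothetical
          securityContext:
            runAsNonRoot: true                 # declarative security posture
            allowPrivilegeEscalation: false    # readable by tools and humans alike
```

A Kubernetes-native platform can evaluate fields like `securityContext` directly from this manifest, in CI/CD or at admission, without a separate policy model.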

Kubernetes-native security treats Kubernetes as the source of truth for security policies, and everyone—security, operations teams, DevOps, and site reliability engineering (SRE) teams—works off that same source of truth. 

In addition, security issues map directly to the Kubernetes objects and resources these teams use daily, further simplifying operations.

Avoid the operational risk that comes with implementing separate security software by using Kubernetes-native enforcement for your security policies.

Containers complicate security on a number of fronts for cloud-native applications: incidents can span many containers and hosts, containers produce high volumes of data to process, and they’re ephemeral, which renders traditional incident response approaches ineffective.

Kubernetes-native security enables you to detect threats to your containers more accurately and reduces the time and effort you need to effectively apply security in your environment.

With Kubernetes context, the expected behavior is clear. As a result, Kubernetes-native security can identify anomalies with higher fidelity, and you can apply enforcement options, such as killing a pod, with more confidence.

At the same time, using Kubernetes context also reduces false positives and alert fatigue.

Kubernetes-native security also provides the ability to take a risk-based approach to security tasks. 

Your deployments are likely to contain a number of policy violations, but where do you start? Again, tapping Kubernetes context helps. 

Bringing together different aspects of the Kubernetes metadata, including whether a cluster is in development or production, whether it’s exposed to the Internet or not, how critical the application is, and whether any suspicious processes are currently running on it, will show you what needs your team’s attention right now. 

Kubernetes-specific vulnerabilities, especially any that put the Kubernetes API server at risk, are especially crucial to prevent, detect, and remediate. Kubernetes-native security tooling can identify these vulnerabilities automatically.

Integrating with the Kubernetes API server provides security monitoring for both the containers running in Kubernetes clusters and Kubernetes resources such as deployments, daemon sets, services, and pods.

The wide-open nature of Kubernetes deployments presents another threat vector. Because Kubernetes is first and foremost a platform for infrastructure operations, not all of its components are secure by default; many favor operational ease of use. 

Applying Kubernetes network policies to limit communications is another critical element in securing your Kubernetes deployments. Kubernetes-native security platforms can automatically baseline your network activity, identify which communications paths are needed to service your application, and create the correct YAML file to reduce network access scope.
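The following sketch shows the kind of policy such a baseline might produce: with a default-deny policy in place, it allows only frontend pods to reach the payments API on one port. The namespace, labels, and port are hypothetical.

```yaml
# Hypothetical example of a baseline-derived allow policy:
# only pods labeled app=frontend may reach the payments API on 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-payments
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api        # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # the only permitted source pods
      ports:
        - protocol: TCP
          port: 8080
```

Because the policy selects pods by label rather than by host or IP, it applies automatically to every replica of the deployment, wherever it is scheduled.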

With the automatic security settings in a Kubernetes-native platform, you’ll be able to continuously identify and stop threats at the Kubernetes layer.


Kubernetes-native security also enables high portability and reuse. Following a single, standardized approach that runs everywhere Kubernetes runs ensures that policies are applied consistently across all environments. 

Kubernetes-native security lets users specify a single configuration, such as a network policy, that should apply to all pods in a deployment, rather than having to configure system-level controls on every host in a cluster. 

By tying policies into CI/CD systems and the Kubernetes admission controller framework, organizations can more easily apply control policies early in the software development life cycle, preventing exposures at runtime. 

And tapping Kubernetes constructs such as the admission controller keeps security tied deeply into Kubernetes toolchains.
