In this final entry for the container security series, we'll look at network traffic control for containers running in Red Hat OpenShift. 

In a Multi-Level Security (MLS) environment, you will want to ensure that containers at different security levels can only talk to pods at the same security level. For instance, a Top Secret pod should only talk to other Top Secret pods (and perhaps only a subset of them). Red Hat OpenShift has a variety of mechanisms to control pod-to-pod access, and to control which networks a pod can be attached to. 

NetworkPolicy to Control Internal Cluster Traffic

The default OpenShift SDN (openshift-sdn) supports Kubernetes NetworkPolicy objects (for incoming traffic). NetworkPolicy works by setting a default deny on any matching pod; the NetworkPolicy rules then allow specific traffic. Each policy object has three pieces:

  1. The pods that the rule applies to (i.e. the pods that will receive the traffic).

  2. What ports and IP protocols are allowed.

  3. What to accept traffic from.

The third part, what to accept traffic from, lets you specify pods, by label, in the same project. Alternatively, you can specify a label on a project, which allows any pod in that project to send traffic.

So, for a common use case, you may have a server providing REST access to an internal database. You want any pod in any Secret project to be able to talk to the REST server, but you only want the REST server to be able to talk to the DB. With NetworkPolicy, as the project administrator, you would add the following:

  • A policy that matches all pods with no rules, so that it sets the default deny.

  • A policy that matches the REST servers for TCP on port 443, and allows any project with the “MLSLevel: Secret” label to talk to them.

  • A policy that matches the DB server for TCP on port 3306, and allows any pod with the label “app: RESTServer” to talk to it.

With this setup, the only internal cluster traffic that could reach the project would be from pods in other Secret projects talking to the REST server (either by pod IP or by service). The only traffic that could reach the DB would be traffic internal to the project from the REST server.
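As a sketch of what those three policies might look like, assuming hypothetical pod labels ("app: RESTServer" and "app: db") and the "MLSLevel: Secret" project label described above:

```yaml
# Default deny: selects every pod in the project, allows no ingress.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
---
# Allow pods in Secret-labelled projects to reach the REST servers on TCP/443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rest-from-secret
spec:
  podSelector:
    matchLabels:
      app: RESTServer
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          MLSLevel: Secret
    ports:
    - protocol: TCP
      port: 443
---
# Allow only the REST servers (in the same project) to reach the DB on TCP/3306.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-from-rest
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: RESTServer
    ports:
    - protocol: TCP
      port: 3306
```

Note that a plain podSelector in a "from" clause matches only pods in the policy's own project, which is what keeps the DB rule scoped to the local REST servers.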

However, we do not yet support egress NetworkPolicy, so there is no explicit way to say that traffic originating in pods in a given project can only reach other Secret projects. You can get the same effect by having appropriate NetworkPolicy objects in place on the other projects to forbid the traffic. It would probably be wise to have an automated controller in charge of setting up and monitoring the NetworkPolicy objects in the projects, to ensure that they only allow traffic from appropriately labelled projects.

Use the Egress Firewall to Control Traffic Leaving the Cluster

Any traffic that leaves the cluster is not controlled by NetworkPolicy, so, by default, any pod in the cluster can talk to anything that the node the pod is running on can talk to. Obviously, in a security-sensitive environment you need to control this. 

In Red Hat OpenShift, we added a feature to openshift-sdn called the “Egress Firewall” that lets the cluster administrator set up, in each project, a list of allow and deny rules that are evaluated in order until one matches. The rules can match the target IP address either by CIDR block or by DNS name (which we resolve to a set of IP addresses, monitoring the name for changes).

In an MLS environment, you would need to establish a default deny for all projects, and then on a case-by-case basis, relax access to external resources if necessary.
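A default-deny Egress Firewall for a project might look like the following sketch (the DNS name is only a placeholder for an approved external resource):

```yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  # Allow this project to reach one approved external service by DNS name.
  - type: Allow
    to:
      dnsName: updates.example.com
  # Deny all other traffic leaving the cluster.
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```

Because the rules are evaluated in order, the final catch-all Deny only applies to traffic that no earlier Allow rule matched.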

Identifying the Source of Traffic (From Outside the Cluster)

When traffic leaves the SDN, by default the source IP address is that of the node the pod is running on. Obviously, this makes it hard for external firewalls to control the traffic, or to audit it once it leaves the cluster. To get around this, openshift-sdn allows you to set up egress IP addresses for a project. This feature lets you configure a project so that any traffic leaving the cluster originates from one of the IP addresses assigned to the project.
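For example, a cluster administrator could pin a project's outbound traffic to a dedicated address along these lines (the project name, node name, and IP address here are all hypothetical):

```shell
# Assign an egress IP to the project's NetNamespace; traffic leaving the
# cluster from this project will then use this source address.
oc patch netnamespace secret-project --type=merge \
  -p '{"egressIPs": ["192.168.12.99"]}'

# Make a specific node's subnet responsible for hosting that egress IP.
oc patch hostsubnet node1.example.com --type=merge \
  -p '{"egressIPs": ["192.168.12.99"]}'
```

An external firewall can then match on that source address to apply per-project rules, and audit logs will attribute the traffic to the project rather than to whichever node happened to run the pod.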

Attaching Other Networks to a Pod

If you have multiple external networks with different security levels, you can choose to allow access to those networks directly from pods using Multus. Multus allows the cluster administrator to define network attachments that a project administrator can then use in pod definitions. There are plugins that put specific host interfaces into a pod, or you can specify that an interface with a VLAN tag be added to the pod.

The cluster administrator sets up the allowed network attachments and associates them with projects. So the administrator could set up bridge interfaces to the Secret network and associate them with Secret projects, and set up bridge interfaces to the Top Secret network and associate them with Top Secret projects.
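For instance, a cluster administrator might define a bridge attachment to the Secret network like this (the attachment name, project name, bridge name, and IPAM settings are all illustrative):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: secret-net
  namespace: secret-project
spec:
  # CNI configuration: attach a veth pair to a host bridge that is
  # physically connected to the Secret network.
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "br-secret",
    "ipam": { "type": "dhcp" }
  }'
```

A pod in that project would then request the attachment with the annotation "k8s.v1.cni.cncf.io/networks: secret-net", and Multus would add the extra interface alongside the pod's normal SDN interface.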

Service Mesh

Red Hat OpenShift supports the Istio service mesh, which runs on top of the SDN and provides higher-level (and more fine-grained) control of traffic in the cluster. For example, you can have rules that restrict traffic based on HTTP host or path (among other things).
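As a sketch, an Istio VirtualService could restrict routing for a service to a single URI prefix (the hostname and path here are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rest-server
spec:
  hosts:
  - rest.secret-project.svc.cluster.local
  http:
  # Only requests under /api/v1/ are routed; anything else gets no route.
  - match:
    - uri:
        prefix: /api/v1/
    route:
    - destination:
        host: rest.secret-project.svc.cluster.local
```

This kind of layer-7 rule complements NetworkPolicy: the SDN controls which pods can connect at all, while the mesh controls which requests are allowed over those connections.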

This was the final post of a series on how SELinux and other container technologies can increase security in your environment. The entire series is collected here, and be sure to watch the Red Hat Blog for more about Linux containers.


About the authors

Daniel Walsh has worked in the computer security field for over 30 years. Dan is a Senior Distinguished Engineer at Red Hat. He joined Red Hat in August 2001. Dan has led the Red Hat Container Engineering team since August 2013, and has been working on container technology for several years. 


Lukas Vrabec is a Senior Software Engineer and SELinux technology evangelist at Red Hat. He is part of the Security Controls team, working on SELinux projects with a particular focus on security policies. Lukas is the author of udica, a tool for generating custom SELinux profiles for containers, and currently maintains the selinux-policy packages for the Fedora and Red Hat Enterprise Linux distributions. 


Simon Sekidde is a Solution Architect for the North America Red Hat Public Sector team, specializing in the application of open source enterprise technologies for federal Department of Defense (DoD) customers.


Ben Bennett is a Senior Principal Software Engineer and is the group lead for the SDN, Routing, DNS, and Storage components of Red Hat OpenShift.  He has more than 25 years of experience working with networking, distributed systems, and Linux.
