Software-defined networking (SDN) is a dynamic, manageable, cost-effective, and adaptable approach to networking, well suited to the high-bandwidth, dynamic nature of today’s applications. With an SDN architecture, an IT operations team can control traffic across a complex network topology from a centralized control plane, rather than configuring each network device, such as routers and switches, by hand.
Rapidly growing mobile content, server virtualization, and hybrid cloud services are some of the trends leading the networking industry to reconsider network architectures. The traditional networking architecture is built mainly on multiple layers of network switches in a hierarchical topology. But it’s harder to address rapidly increasing application workloads from multiple and hybrid infrastructures (like the cloud) in a hierarchical architecture.
Many companies are aggressively adopting Linux containers because they are collections of isolated processes that can absorb sudden increases in application workload on top of the Linux operating system. A Linux container is built from a binary image that packages an application together with its runtime and dependent libraries, so it stays portable and consistent as it moves from development to testing and ultimately to production. In the end, Linux containers let IT ops teams build an application-delivery pipeline more quickly and easily than traditional application-delivery environments.
At the same time, enterprises have serious concerns about how to isolate the security and networking of multiple application containers across many different data centers and cloud services. Since version 1.10, Docker has offered secure computing mode (seccomp) profiles to drop privileges; the default settings are not bad, but they still allow roughly 270 system calls that a container can execute. For example, if CAP_NET_ADMIN is enabled in a container so it can add routes to its routing table, an attacker who searches long enough will find plenty of ways to exploit the system.
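A Docker seccomp profile is a JSON allowlist passed at container start with `--security-opt seccomp=profile.json`. As a minimal sketch (the syscall list below is illustrative and far too short for a real workload), a profile that denies everything by default and permits only a handful of calls looks like this:

```
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "brk", "futex", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Any system call not named in the allowlist fails with an error instead of reaching the kernel, which is how a tighter profile shrinks the attack surface the default 270-odd allowed calls leave open.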
Isolating Linux containers with SDN
Instead, you can use SDN to handle the network isolation of Linux containers.
The Container Network Interface (CNI) is a library definition and a set of tools for configuring network interfaces in Linux containers through many supported plugins. The CNI project sits under the umbrella of the Cloud Native Computing Foundation (CNCF). Multiple plugins can run at the same time, so a single container can participate in networks driven by different plugins. Networks are described in JSON configuration files and instantiated as new namespaces when the CNI plugin is invoked.
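For a concrete sense of the format, here is a minimal network definition for the CNI reference bridge plugin (the network name, bridge name, and subnet are illustrative); a runtime such as Kubernetes typically reads files like this from /etc/cni/net.d/:

```
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```

When the runtime invokes the plugin with this configuration, the plugin creates the bridge if needed, attaches the container's namespace to it with a veth pair, and asks the `host-local` IPAM plugin for an address from the subnet.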
Many popular container orchestrators and runtimes, such as Kubernetes, OpenShift, Cloud Foundry, and Apache Mesos, use the CNI to define a common interface between network plugins and container execution for application containers on Linux. There are many ways to implement the CNI, and the following eight CNI plugins (presented in alphabetical order) are often used to provide networking capabilities on Kubernetes.
Calico provides high scalability on distributed architectures such as Kubernetes, Docker, and OpenStack.
Cilium provides network connectivity and load balancing between application workloads, such as application containers and processes, and ensures transparent security.
Contiv integrates containers, virtualization, and physical servers based on the container network using a single networking fabric.
Contrail provides overlay networking for multi-cloud and hybrid cloud through network policy enforcement.
Flannel makes it easier for developers to configure a Layer 3 network fabric for Kubernetes.
Open vSwitch (OVS) offers a production-grade CNI platform with a standard management interface on OpenShift and OpenStack.
OVN-Kubernetes enables virtual networks for multiple containers on different hosts using an overlay function.
Romana makes cloud network functions less expensive to build, easier to operate, and better performing than traditional cloud networks.
Some Linux vendors provide functions and components of network isolation for Linux containers. For example, Red Hat Enterprise Linux (RHEL) provides network namespaces that allow a container to use a separate virtual network stack, loopback device, and process space. You can add virtual or real devices to the container, assign them IP addresses, and even use full iptables rules.
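You can see a network namespace's isolation from an ordinary shell. This minimal sketch pairs a new network namespace with an unprivileged user namespace (`unshare -r`) so no root is needed; the address below is illustrative and exists only inside the namespace, invisible to the host's interfaces, routes, and iptables rules:

```shell
# Enter a fresh network namespace; -r maps us to root inside it,
# granting CAP_NET_ADMIN over this namespace only.
unshare -r -n sh -c '
  ip link set lo up                 # bring up the namespace-private loopback
  ip addr add 10.0.0.1/24 dev lo    # this address exists only in this namespace
  ip -o addr show lo                # lists 10.0.0.1 alongside the automatic 127.0.0.1
'
```

An SDN plugin does the same kind of namespace plumbing for every container, but adds veth pairs, bridges, and overlay tunnels to connect namespaces across hosts.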
In addition to network namespaces, an SDN can increase security by isolating multiple namespaces from one another with a multi-tenant plugin. Packets from one namespace are then, by default, not visible to other namespaces, so containers in one namespace cannot send packets to, or receive packets from, the pods and services of another. These capabilities are useful for isolating development, test, and production networks.
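On Kubernetes, the same kind of isolation can be expressed declaratively with a NetworkPolicy, provided the CNI plugin enforces policy (Calico and Cilium do, for example). A minimal sketch that allows ingress only from pods in the same namespace (the namespace name is illustrative):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: dev
spec:
  podSelector: {}        # applies to every pod in the dev namespace
  ingress:
    - from:
        - podSelector: {}   # only pods from this same namespace may connect
```

With this policy applied, traffic from pods in other namespaces is dropped by default, matching the multi-tenant behavior described above.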
Linux containers are the most popular technology for developing, deploying, and managing enterprise applications in the hybrid cloud. Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside their control. Luckily, many open source CNI projects are evolving or consolidating, because enterprise developers need to eliminate manual network provisioning in containerized environments and network engineers are ready to let it go (barring those with misconceptions about job security). Select one of the CNI plugins above for a quicker, easier, more convenient approach to dynamic port allocation, network isolation, and security.