
Here's how Multus can unlock the potential of multiple network interfaces and CNI plugins for your containers, with a Shakespearean twist. 

The curtain lifts...

Shakespeare understood the power of the soliloquy; having that one actor on stage pour his or her heart out for only the audience to hear. While this solo act is a powerful literary tool, Shakespeare also understood that the best plays have an interesting cast of characters that help perform the action. 

Until recently, containers and pods were a bit like that lonely character, center stage. By default in Kubernetes, a pod is exposed to only a single interface (plus loopback), as assigned by your pod networking. While this approach—like a soliloquy—is simple yet powerful, it can also be restrictive for more complex scenarios, like those that involve specific hardware or topology support, monitoring, multi-tenant networks, or situations that require the separation of data or control planes. One option to address this issue is to generate pseudo-interfaces and attach them to the pod interface, allowing for the configuration of virtual bridges and multiplexing. While pseudo-interfaces can provide “just enough” functionality, they might not be the best actor for the job.

Enter Multus, stage left. 

Act 1 Scene 1: Introducing our hero, Multus

On the surface, Multus is a Container Network Interface (CNI) plug-in designed to support the multi-networking feature in Kubernetes using Custom Resource Definition (CRD)-based network objects. A CRD is a way to extend the Kubernetes application programming interface (API).

Multus enables pods, which have been patiently waiting in the wings to join the action, to escape from single-interface (eth0) isolation: they can not only have multiple network interface connections, but also use advanced networking functions—like port mirroring and bandwidth capping—on those interfaces.

There are other techniques one can employ to achieve some of the functionality that Multus provides. But those methods, like configuring pseudo-interfaces to support virtual bridging and multiplexing, are bit players that lack the sophistication, oomph, and flexibility of Multus.

The primary narrative with Multus is that it can provide as many network interfaces as you need. That’s right; at this time, there is no known upper limit for the number of network interfaces. Additionally, there is no performance penalty for having 100 network interfaces rather than 10.

But sometimes the most interesting part of the story is the subplot. One of Multus’s most exciting features is that it enables you to use CNI plug-in chaining within Red Hat OpenShift. Think of CNI plug-in chaining as similar to a highly choreographed chorus line. It enables an ordered list of network services (e.g. firewalls, Network Address Translation (NAT), Quality of Service (QoS)) via plug-ins that are “linked arm-in-arm” in the network to create a chain of services. For example, CNI plug-in chaining can be used to set up host-port forwarding, adjust sysctl command parameters, and even set bandwidth limits. Optionally, you can even write your own!
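As a concrete illustration, here is a minimal sketch of what a chained CNI configuration might look like, using the reference bridge, tuning, bandwidth, and portmap plug-ins; the name, bridge, subnet, sysctl setting, and rate values are placeholders and would need to be adapted to a real environment.

```json
{
  "cniVersion": "0.4.0",
  "name": "chained-example",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
      }
    },
    {
      "type": "tuning",
      "sysctl": {
        "net.core.somaxconn": "512"
      }
    },
    {
      "type": "bandwidth",
      "ingressRate": 1000000,
      "ingressBurst": 1000000,
      "egressRate": 1000000,
      "egressBurst": 1000000
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

The plug-ins in the list run in order: bridge creates and addresses the interface, tuning adjusts a sysctl parameter, bandwidth applies rate limits, and portmap handles host-port forwarding.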

Service chaining has been widely adopted in various software-defined networking (SDN) and network functions virtualization (NFV) use cases: in datacenters (chaining together virtual or physical network functions), in carrier network services, and at the virtual customer edge (including virtual customer premises equipment deployments). Security use cases have also been increasing over time, especially as the industry has moved from stacking multiple security layers at the perimeter toward a Zero Trust Architecture (ZTA), where users need to be secure wherever they are.

Act 1 Scene 2: The Multus origin story

What is the inspiration behind Multus?

Red Hat, in collaboration with Intel, the Network Plumbing Working Group (NPWG), the Kubernetes Resource Management Working Group, and the Kubernetes community, is using Multus as part of a reference implementation of the multiple networks specification.

Let’s discuss our cast of characters in a bit more detail. 

NPWG is an informal offshoot of the Kubernetes SIG-Network group. Red Hat helped found this group during KubeCon 2017 in Austin, Texas, to address lower-level networking issues in Kubernetes. The NPWG defined a de facto standard CRD for expressing the intent to attach pods to multiple networks.

Following extensive development and feature discussions during the Kubernetes Developer Summit 2016, held after CloudNativeCon | KubeCon Seattle, the Kubernetes Resource Management Working Group was formed in January 2017. This group was originally cast as a temporary initiative to provide guidance back to sig-node and sig-scheduling (primarily). Since then, the Resource Management Working Group has become a formalized entity and has worked to create “device plug-ins,” which manage the limited resources of devices so that workloads can be scheduled correctly.

Today, Kubernetes is coming of age, and adoption is reaching critical mass across a significant number of organizations—a trend that has only accelerated with COVID-19 and the need for infrastructure flexibility and portability.

Act 2 Scene 1: Multus rises to the challenge 

There are a number of scenarios for which multiple network interfaces are beneficial.

Traditionally, multiple network interfaces are employed by network functions to provide separation of the control, management, and data/user network planes. They are also used to support different protocols or software stacks and different tuning and configuration requirements. Multus brings this capability to Kubernetes by creating multiple network interfaces for pods.

As the use of Kubernetes by communication service providers, cloud providers, and cloud-based services becomes more common, support for multiple network interfaces is becoming increasingly important. Applications such as storage, legacy applications, virtual private networks (VPNs), and virtual network functions (VNFs), as well as multi-tenant networks, require multiple interfaces.

Multus rises to meet these complex network scenarios by providing support for multiple interfaces through a CNI plug-in that works with all other Kubernetes networking plug-ins.

Consider micro-segmentation, which is a method of creating secure zones in datacenters and cloud deployments. These secure zones make network security more granular by isolating workloads from each other and securing them individually. Initially, Kubernetes went to market with an extremely basic network model that provided a single, virtualized interface with few options for how it could be configured. This single-interface-per-container model limited more advanced configurations, including multicast or VNFs such as virtual routers, because those functions rely on access to multiple interfaces.

In a more traditional network setup, firewalls, intrusion prevention systems (IPS), network sandboxes and other security systems inspect and secure traffic coming into the cloud or datacenter from the “front of the house.” 

But Multus enables micro-segmentation—by way of security policies that limit network and application flows between workloads—which provides traffic control over the increased amount of stage left/stage right or east-west communication that occurs between servers, the very type of traffic that bypasses perimeter-focused security tools. With micro-segmentation, if the worst-case scenario—a breach by an antagonist—occurs, the hacker’s ability to laterally explore networks is curtailed.

Act 2 Scene 2: Overcoming networking obstacles

Let’s examine how plug-ins are used in a little more detail.

As previously mentioned, the Multus CNI plug-in can be used to create multiple network interfaces for pods in Kubernetes. Additionally, Multus can provide a “casting call” to a variety of other plug-ins, depending on the use case, such as high-performance networking.

Kubernetes is frequently used to run workloads of production web applications on a massive scale. When we discuss virtualization and performance, we often mention two important (but somewhat abstract) concepts: the control plane and the data plane. 

Think of the control plane as similar to the director painstakingly blocking the actors’ every position on the stage and guiding all of their movements. In terms of networking or computing, the control plane orchestrates services between entities. NFV control plane functions are similar to typical Kubernetes workloads. However, data plane functions are a different story.

While the control plane is the logic behind why the actors are stage left or stage center, the data plane is the method (e.g. walking or running) by which they take their position. In containers, data plane functions require particular attention to extend the capabilities of Kubernetes to support NFV use cases. Similarly, with split data plane and control plane applications like those seen with high-performance IPTV and media streaming connections, the VNF must connect to both the data plane and the control plane (and possibly require a separate management connection).

This situation is where the Single Root-I/O Virtualization (SR-IOV) plug-in takes the spotlight. The Kubernetes SR-IOV network device plug-in extends the capabilities of Kubernetes to address high-performance network I/O by first discovering, and then advertising SR-IOV network virtual functions (VFs) in a Kubernetes host.
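As a rough sketch of how this plays out, a pod can request a VF the same way it requests CPU or memory, using the resource name that the SR-IOV device plug-in advertises, together with a Multus network annotation. The resource name (intel.com/intel_sriov_netdevice), the attachment name (sriov-net), and the image below are assumptions for illustration; they depend entirely on how the device plug-in and the corresponding network definition are configured in your cluster.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sriov-example
  annotations:
    # Hypothetical NetworkAttachmentDefinition that points at the SR-IOV CNI plug-in
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
    resources:
      requests:
        intel.com/intel_sriov_netdevice: "1"  # resource name advertised by the SR-IOV device plug-in (cluster-specific)
      limits:
        intel.com/intel_sriov_netdevice: "1"
```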

What about situations where redundancy is needed? Network Interface Bonding is a bit like having an understudy because it enables the aggregation of multiple network interfaces into a single logical “bonded” interface, which provides network redundancy of an application in the case of failure or unavailability of a network device or path. For Linux operating systems (like Red Hat Enterprise Linux), there are a few different methods of providing bonding such as round-robin or active aggregation. Now, with the power of Multus, you can utilize the Bonding CNI plug-in to create bonded interfaces in container network namespaces.
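With Multus calling the shots, a bonded attachment can be declared as just another network definition. The sketch below assumes the Bond CNI plug-in is installed and that two secondary interfaces (net1 and net2) have already been attached to the pod by other plug-ins, such as SR-IOV; the names, mode, and subnet are illustrative and should be checked against the Bond CNI documentation for the version you deploy.

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bond-net
spec:
  # The bond plug-in aggregates the two secondary interfaces inside the
  # container's network namespace into a single logical "bonded" interface.
  config: '{
      "type": "bond",
      "cniVersion": "0.3.1",
      "name": "bond-net",
      "mode": "active-backup",
      "failOverMac": 1,
      "linksInContainer": true,
      "miimon": "100",
      "links": [
        { "name": "net1" },
        { "name": "net2" }
      ],
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.217.0/24"
      }
    }'
```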

Act 2 Scene 3: Behind the mask

Multus is a bit like a stage director. It acts as a CNI manager by invoking or loading other CNI plug-ins and enabling the creation of multiple interfaces. 

The process to attach additional network interfaces to pods is a story in three acts: 

  1. First, the setup: Create a CNI configuration as a custom resource.

  2. Next, the rising action: Annotate the Pod with the newly created configuration name.

  3. Finally, the resolution: View the status annotation to verify that the attachment was successful.

Let’s look at this in more detail.

Initially, a master plug-in (e.g., Flannel, Calico, Weave) is identified to manage the primary network interface (eth0) for the pod. Then, other CNI plug-ins, such as SR-IOV and vHost CNI, can create additional pod interfaces (net0, net1, etc.) during their normal instantiation process. The static CNI configuration points to Multus, while each subsequent CNI plug-in, as called by Multus, has configurations that are defined in CRD objects.
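Here is a minimal sketch of steps one and two from the list above, assuming a macvlan attachment on a host interface named ens3; the attachment name, interface, subnet, and image are placeholders for illustration only.

```yaml
# Step 1: define the additional network as a NetworkAttachmentDefinition custom resource.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }'
---
# Step 2: annotate the pod with the configuration name; Multus keeps eth0 on the
# cluster network and asks the macvlan plug-in to add net1.
apiVersion: v1
kind: Pod
metadata:
  name: multi-net-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
```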

Consider this illustration of network interfaces attached to a pod, as provisioned by Multus CNI.

 

[Diagram: a pod with three network interfaces (eth0, net0, net1) attached by Multus CNI]

In the diagram, the pod has three interfaces attached: eth0, net0, and net1. eth0 connects to the Kubernetes cluster network to reach the Kubernetes server and services (e.g., the Kubernetes API server, kubelet, and so on). net0 and net1 connect to other networks by using other CNI plug-ins (e.g., vlan, vxlan, ptp).
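And for the resolution (step three of the process above), Multus reports what it attached by writing a status annotation back onto the pod, which can be read with kubectl. The output below is illustrative only, and the exact annotation key (network-status versus networks-status) varies with the Multus version:

```console
$ kubectl describe pod multi-net-pod
...
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": ["10.244.1.4"],
                    "default": true
                },{
                    "name": "default/macvlan-net",
                    "interface": "net1",
                    "ips": ["192.168.1.200"]
                }]
```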

Multus supports reference plug-ins that implement the CNI specification (e.g., flannel, DHCP, macvlan) and various third-party plug-ins (e.g., Calico, Weave, Cilium, Contiv). Additionally, Multus supports SR-IOV, DPDK, OVS-DPDK, and VPP workloads for both cloud-native and NFV-based applications in Kubernetes.

If you take a peek behind the mask, you’ll see that Multus is really more of a meta plug-in: cleverly disguised as a CNI plug-in, but designed solely to call one or more other CNI plug-ins, thus allowing the creation of multiple network interfaces for pods in Kubernetes.

Additionally, the CRD functionality for Multus enables us to specify which pods get which interfaces and allows different interfaces depending on the use case.

Act 2 Scene 4: The denouement, or what does Multus support?

Multus first launched in OpenShift 4.1 and now supports containers managed by Kubernetes, as well as virtual machines in containers via KubeVirt with the SR-IOV plug-in.

Act 3: Conclusion

All the world’s a stage. While there are a few stand-in players that provide limited functionality for advanced networking scenarios for Kubernetes containers and pods, Multus offers up a powerful performance with multiple network interfaces and the flexibility of multiple chained CNI plug-ins.

Where can I learn more?


About the author

Doug Smith works on the network team for Red Hat OpenShift. Smith came to OpenShift engineering after focusing on network function virtualization and container technologies in Red Hat's Office of the CTO.

Smith integrates new networking technologies with container systems like Kubernetes and OpenShift. He is a member of the Network Plumbing Working Group and a contributor to OpenShift, Multus, and Whereabouts. Smith's background is in telephony and containerizing open source software solutions to replace proprietary hardware tandem switches using Asterisk, Kamailio, and Homer.
