Please note: teamd is deprecated in Red Hat Enterprise Linux 9 and is planned for removal in Red Hat Enterprise Linux 10.
Liking bonding did not, in fact, equate to loving teaming. The premise was good: provide an alternative where functions and tools live in user space, leading to less potential for kernel issues and tools that can be updated independently of the kernel. With its JSON-based configuration, its alignment with NIC teaming on non-Linux operating systems, and user-space commands like teamdctl, it seemed like a step forward.
Alas, not every new project becomes a systemd, Wayland, Cockpit, cgroups, or one of the many other projects started at Red Hat that have become popular Linux-wide technology standards. When a project lacks industry uptake and has a duplicate mechanism also shipping in Red Hat-based distributions, we have to ask ourselves, “Should we continue to do this?” In the case of teamd, the answer was no. As a result, we can spend our engineering resources on bonding instead of splitting time between bonding and teaming.
So, if you are using teamd and want to stop, check out the team2bond utility included in the teamd package. It will convert your NIC teaming configuration to a compatible NIC bonding configuration. For more details, see Configuring and managing networking, Chapter 8.1, Migrating a network
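For a sense of what that looks like in practice, here is a minimal sketch. It assumes an existing team device named team0 (a hypothetical name) and uses team2bond options as described in that guide; double-check the generated commands before applying them.

```
# Print the nmcli commands that would convert the team0 configuration
# into an equivalent bond0 configuration (names are examples)
team2bond --config=team0 --rename=bond0

# If the output looks right, let the script execute the conversion itself
team2bond --config=team0 --rename=bond0 --exec-cmd
```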
In this day and age, where almost everything is connected to the World Wide Web, the demands on networking in general are mushrooming. In the developed world it’s common to get 20 megabit per second connections on our mobile devices and 50 megabit per second connections at home. By extension, the demands on enterprise data centers are even higher, by at least three to four orders of magnitude, as these central “hubs” are where traffic from all of those individual end nodes converges. Consider the act of flipping through a series of cloud-hosted HD photos on a mobile device: this can easily result in billions of packets being transferred in fractions of a second.
The good news is that our networking interfaces are getting “bigger and faster.” 40 gigabit per second Ethernet is currently being deployed, and work to finalize 100 gigabit per second endpoint interfaces is underway.
As one might imagine, high-throughput interfaces also call for link aggregation, whether in active-backup mode or active-active mode, depending on the application. Link aggregation, for those who may be new to the concept, means making two (or more) physical links look like one logical link at the L2 layer.
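As a minimal sketch of that idea with the team driver, the following starts teamd with an active-backup configuration that presents two physical ports as one logical device (the names team0, eth1, and eth2 are just examples):

```
# Create team0 as an active-backup aggregate of eth1 and eth2,
# monitoring link state via ethtool; -d runs teamd as a daemon.
teamd -d -t team0 -c '{
    "runner":     { "name": "activebackup" },
    "link_watch": { "name": "ethtool" },
    "ports":      { "eth1": {}, "eth2": {} }
}'
```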
Red Hat Enterprise Linux has, for some time, provided users with a bonding driver to achieve link aggregation. In fact, bonding works well for most applications. That said, the bonding driver's architecture is such that the control, management, and data paths are all managed in the kernel space... limiting its flexibility.
So where am I headed with this? Well, you may have heard that Red Hat Enterprise Linux 7 has introduced a team driver...
The team driver does not try to replicate or mimic the bonding driver; it has been designed to solve the same problem(s) using a wholly different design and a different approach, one where special attention was paid to flexibility and efficiency. The best part is that the configuration, management, and monitoring of the team driver are significantly improved, with no compromise on performance, features, or throughput.
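To illustrate, here is a sketch of driving that user-space configuration through NetworkManager with nmcli; the connection and interface names are illustrative, and the runner is supplied as teamd's JSON configuration:

```
# Create a team connection with an active-backup runner
nmcli connection add type team con-name team0 ifname team0 \
    team.config '{ "runner": { "name": "activebackup" } }'

# Attach two ports to the team (interface names are examples)
nmcli connection add type team-slave con-name team0-eth1 ifname eth1 master team0
nmcli connection add type team-slave con-name team0-eth2 ifname eth2 master team0

# Activate the team
nmcli connection up team0
```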
Coming full circle (you read the title, right?), the team driver can pretty much be summarized by this sentence: if you like bonding, you will love teaming.
Side by Side
The team driver supports all of the most commonly used features of the bonding driver, and adds many more. The following table facilitates an easy side-by-side comparison (a short runtime-control example follows the table).
Feature | Bonding | Team
--- | --- | ---
broadcast TX policy | Yes | Yes
round-robin TX policy | Yes | Yes
active-backup TX policy | Yes | Yes
LACP (802.3ad) support | Yes | Yes
hash-based TX policy | Yes | Yes
TX load-balancing support (TLB) | Yes | Yes
VLAN support | Yes | Yes
LACP hash port select | Yes | Yes
Ethtool link monitoring | Yes | Yes
ARP link monitoring | Yes | Yes
ports up/down delays | Yes | Yes
configurable via NetworkManager (GUI, TUI, and CLI) | Yes | Yes
multiple device stacking | Yes | Yes
highly customizable hash function setup | No | Yes
D-Bus interface | No | Yes
ØMQ interface | No | Yes
port priorities and stickiness ("primary" option enhancement) | No | Yes
separate per-port link monitoring setup | No | Yes
logic in user space | No | Yes
modular design | No | Yes
NS/NA (IPv6) link monitoring | No | Yes
load-balancing for LACP support | No | Yes
lockless TX/RX path | No | Yes
user-space runtime control | Limited | Full
multiple link monitoring setup | Limited | Yes
extensibility | Hard | Easy
performance overhead | Low | Very low
RX load-balancing support (ALB) | Yes | Planned
RX load-balancing support (ALB) in bridge or OVS | No | Planned
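A few of the rows above, such as full user-space runtime control and per-port priorities and stickiness, are easiest to see with teamdctl. A rough sketch, assuming a running team device named team0 with a port eth1 (both hypothetical names):

```
# Dump the runtime state of the team: runner, link watches, and port status
teamdctl team0 state

# Update per-port options at runtime, e.g. raise eth1's priority and pin it
teamdctl team0 port config update eth1 '{ "prio": 100, "sticky": true }'
```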
Interested in giving it a shot? It’s not that difficult to migrate from bonding to teaming.
Migration
To facilitate migration from the bonding driver to the team driver, we have created a robust migration script called bond2team. Please see the bond2team manual page (man 1 bond2team) for available options. In essence, this script allows existing deployments of bonded interfaces to be moved seamlessly to teamed interfaces.
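For a quick feel for the script, here is a minimal sketch (bond0 and team0 are example names; by default the converted ifcfg files should land in a temporary directory for review rather than overwriting the originals):

```
# Convert the existing bond0 configuration into an equivalent team
# configuration, renaming the resulting device to team0
bond2team --master bond0 --rename team0
```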
Demos
Curious to see a demo before you pull the trigger? A link to the more technical details associated with the team driver can be found here, and you can see the team driver in action here.
Performance
Machine type: 3.3GHz CPU (Intel), 4GB RAM
Link type: 10GFO

Interface | Throughput with 64-byte packets | Throughput with 1KB packets | Throughput with 64KB packets | Average latency
--- | --- | --- | --- | ---
eth0 | 1664.00 Mb/s (27.48% CPU) | 8053.53 Mb/s (30.71% CPU) | 9414.99 Mb/s (17.08% CPU) | 54.7 usec
eth1 | 1577.44 Mb/s (26.91% CPU) | 7728.04 Mb/s (32.23% CPU) | 9329.05 Mb/s (19.38% CPU) | 49.3 usec
bonded (eth0+eth1) | 1510.13 Mb/s (27.65% CPU) | 7277.48 Mb/s (30.07% CPU) | 9414.97 Mb/s (15.62% CPU) | 55.5 usec
teamed (eth0+eth1) | 1550.15 Mb/s (26.81% CPU) | 7435.76 Mb/s (29.56% CPU) | 9413.8 Mb/s (17.63% CPU) | 55.5 usec
Before I sign off, I also wanted to share the table above. In short, team driver performance is largely equal to or better than the corresponding bonding driver performance when all other variables are held constant.
That's not all, folks! For more information on the team driver, I strongly encourage you to read through the additional details that we've made available here.