In this day and age, where almost everything is connected to the World Wide Web, the demands on networking (in general) are mushrooming. In the developed world it’s common to be able to get 20 megabit per second connections on our mobile devices and 50 megabit per second connections at home. By extension, the demands on enterprise data centers are even higher (by at least three to four orders of magnitude), as these central “hubs” are where traffic from the aforementioned individual end nodes converges. Consider the act of flipping through a series of cloud-hosted HD photos on a mobile device; multiplied across the many users a data center serves at once, that quickly translates into an enormous number of packets on the move in fractions of a second.

The good news is that our networking interfaces are getting “bigger and faster.” 40 gigabit per second Ethernet is currently being deployed, and work to finalize 100 gigabit per second endpoint interfaces is underway.

As one might imagine, high-throughput interfaces also call for link aggregation, whether in active-backup mode or in active-active mode, depending on the application. Link aggregation, for those who may be new to the concept, means making two or more physical links look like one logical link at layer 2 (L2).

Red Hat Enterprise Linux has, for some time, provided users with a bonding driver to achieve link aggregation. In fact, bonding works well for most applications. That said, the bonding driver's architecture is such that the control, management, and data paths are all handled in kernel space, which limits its flexibility.
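
To illustrate what "in kernel space" means in practice, here is a rough sketch (mine, not from the original post) of driving the bonding driver purely through its sysfs knobs. The interface names are hypothetical, and it assumes root privileges and an already-loaded bonding module.

```python
# Bonding is configured entirely through kernel interfaces (sysfs here);
# there is no user-space daemon to query, extend, or replace.
# Assumes the bonding module is loaded and eth0/eth1 exist (hypothetical),
# and that the ports are down before being enslaved.

def sysfs_write(path, value):
    with open(path, "w") as f:
        f.write(value)

sysfs_write("/sys/class/net/bonding_masters", "+bond0")            # create bond0
sysfs_write("/sys/class/net/bond0/bonding/mode", "active-backup")  # TX policy
sysfs_write("/sys/class/net/bond0/bonding/miimon", "100")          # link check interval (ms)
sysfs_write("/sys/class/net/bond0/bonding/slaves", "+eth0")        # enslave ports
sysfs_write("/sys/class/net/bond0/bonding/slaves", "+eth1")
```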

So where am I headed with this?  Well, you may have heard that Red Hat Enterprise Linux 7 has introduced a team driver...

The team driver does not try to replicate or mimic the bonding driver; it was designed to solve the same problems with a wholly different architecture and approach, one in which special attention was paid to flexibility and efficiency. The best part is that configuration, management, and monitoring of the team driver are significantly improved, with no compromise on performance, features, or throughput.
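
To make the flexibility point a bit more concrete, here is a minimal sketch (not from the product documentation) of what driving the team driver from user space looks like. teamd, the user-space daemon behind the team driver, is configured with a small JSON document; the snippet below simply assembles one for an active-backup team of two hypothetical ports (eth0 and eth1) with ethtool-based link monitoring, using option names documented in teamd.conf(5).

```python
import json

# Minimal teamd configuration sketch: an active-backup team ("team0")
# aggregating two hypothetical ports, eth0 and eth1.
team_config = {
    "device": "team0",
    "runner": {"name": "activebackup"},    # active-backup TX policy
    "link_watch": {"name": "ethtool"},     # ethtool-based link monitoring
    "ports": {
        "eth0": {"prio": 100},             # preferred (higher-priority) port
        "eth1": {"prio": 50},
    },
}

# teamd consumes this JSON directly, e.g.: teamd -d -f team0.conf
with open("team0.conf", "w") as f:
    json.dump(team_config, f, indent=2)
```

Because the logic lives in user space, changing a policy or a link watcher is a matter of editing JSON (or issuing a teamdctl command at runtime) rather than poking kernel interfaces.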

Coming full circle (you read the title, right?) the team driver can pretty much be summarized by this sentence: if you like bonding, you will love teaming.

Side by Side

The team driver supports all of the most commonly used features of the bonding driver, and adds many more. The following table facilitates an easy side-by-side comparison; a short configuration sketch illustrating a couple of the team-only entries follows the table.

| Feature | Bonding | Team |
| --- | --- | --- |
| broadcast TX policy | Yes | Yes |
| round-robin TX policy | Yes | Yes |
| active-backup TX policy | Yes | Yes |
| LACP (802.3ad) support | Yes | Yes |
| hash-based TX policy | Yes | Yes |
| TX load-balancing support (TLB) | Yes | Yes |
| VLAN support | Yes | Yes |
| LACP hash port select | Yes | Yes |
| ethtool link monitoring | Yes | Yes |
| ARP link monitoring | Yes | Yes |
| ports up/down delays | Yes | Yes |
| configurable via NetworkManager (GUI, TUI, and CLI) | Yes | Yes |
| multiple device stacking | Yes | Yes |
| highly customizable hash function setup | No | Yes |
| D-Bus interface | No | Yes |
| ØMQ interface | No | Yes |
| port priorities and stickiness ("primary" option enhancement) | No | Yes |
| separate per-port link monitoring setup | No | Yes |
| logic in user space | No | Yes |
| modular design | No | Yes |
| NS/NA (IPv6) link monitoring | No | Yes |
| load balancing for LACP support | No | Yes |
| lockless TX/RX path | No | Yes |
| user-space runtime control | Limited | Full |
| multiple link monitoring setup | Limited | Yes |
| extensibility | Hard | Easy |
| performance overhead | Low | Very low |
| RX load-balancing support (ALB) | Yes | Planned |
| RX load-balancing support (ALB) in bridge or OVS | No | Planned |
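
As an illustration of a couple of the team-only rows above (the LACP runner and the customizable transmit hash), the following hedged sketch builds a teamd configuration for an 802.3ad team that hashes on L2 addresses plus IPv4/IPv6 headers and TCP ports. The runner options and hash fragment names follow teamd.conf(5); the port names are hypothetical.

```python
import json

# Sketch of an LACP (802.3ad) team with a customized TX hash.
lacp_config = {
    "device": "team0",
    "runner": {
        "name": "lacp",
        "active": True,       # actively send LACPDUs
        "fast_rate": True,    # ask the partner for the fast LACPDU rate
        "tx_hash": ["eth", "ipv4", "ipv6", "tcp"],  # customizable hash inputs
    },
    "link_watch": {"name": "ethtool"},
    "ports": {"eth0": {}, "eth1": {}},
}

print(json.dumps(lacp_config, indent=2))
```

Separate per-port link monitoring (another team-only entry) would be expressed the same way, by giving individual entries under "ports" their own "link_watch" sections.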

Interested in giving it a shot?  It’s not that difficult to migrate from bonding to teaming.

Migration

To facilitate migration from the bonding driver to the team driver, we have created a robust migration script called bond2team. Please see the bond2team manual page (man 1 bond2team) for the available options. In essence, this script allows existing deployments of bonded interfaces to be moved to teamed interfaces seamlessly.
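
For example, converting an existing bond0 configuration into an equivalent team configuration typically comes down to a single bond2team invocation. The sketch below wraps it in Python's subprocess for consistency with the earlier snippets; the --master and --rename options are taken from the bond2team(1) manual page, so double-check them against the release you are running.

```python
import subprocess

# Convert the configuration of an existing "bond0" into an equivalent
# team configuration named "team0" (options per bond2team(1); verify
# against your installed version before relying on them).
subprocess.run(["bond2team", "--master", "bond0", "--rename", "team0"], check=True)
```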

Demos

Curious to see a demo before you pull the trigger? A link to the more technical details associated with the team driver can be found here, and you can see the team driver in action here.

Performance

Machine type: 3.3 GHz CPU (Intel), 4 GB RAM
Link type: 10GFO
| Interface | 64-byte packets | 1 KB packets | 64 KB packets | Average latency |
| --- | --- | --- | --- | --- |
| eth0 | 1664.00 Mb/s (27.48% CPU) | 8053.53 Mb/s (30.71% CPU) | 9414.99 Mb/s (17.08% CPU) | 54.7 usec |
| eth1 | 1577.44 Mb/s (26.91% CPU) | 7728.04 Mb/s (32.23% CPU) | 9329.05 Mb/s (19.38% CPU) | 49.3 usec |
| bonded (eth0+eth1) | 1510.13 Mb/s (27.65% CPU) | 7277.48 Mb/s (30.07% CPU) | 9414.97 Mb/s (15.62% CPU) | 55.5 usec |
| teamed (eth0+eth1) | 1550.15 Mb/s (26.81% CPU) | 7435.76 Mb/s (29.56% CPU) | 9413.8 Mb/s (17.63% CPU) | 55.5 usec |

Before I sign off, I also wanted to share the table above. In short, team driver performance is largely equal to or better than the corresponding bonding driver performance when all other variables are held constant.

That's not all, folks! For additional information on the team driver, I strongly encourage you to read through some additional details that we've made available here.

