[Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP

Digimer lists at alteeve.ca
Thu Jun 7 04:35:32 UTC 2012


I know that the only *supported* bond is Active/Passive (mode=1), which 
of course provides no performance benefit.
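For reference, one classic way to set up an Active/Passive (mode=1) bond is 
via the bonding module options; this is just a sketch, and the device name 
and miimon value are illustrative:

```
# /etc/modprobe.d/bonding.conf  (illustrative)
alias bond0 bonding
# mode=1 is active-backup; miimon=100 polls link state every 100 ms
options bond0 mode=1 miimon=100
```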

I tested all of the modes, using more modest D-Link DGS-3100 switches, 
and every mode other than Active/Passive failed at some point during 
failure and recovery testing. If you want to experiment, I'd suggest 
tweaking corosync's timeouts to be (much?) more generous.
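By "more generous timeouts" I mean something along these lines in the totem 
section of corosync.conf; the exact values are illustrative and would need 
tuning for your environment:

```
# /etc/corosync/corosync.conf  (fragment; values are examples only)
totem {
    version: 2
    # How long (ms) to wait for a token before declaring it lost
    token: 10000
    # Retransmit attempts before the token is considered lost
    token_retransmits_before_loss_const: 20
    # How long (ms) to wait for consensus before starting a new membership round
    consensus: 12000
}
```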

I'm curious to hear back on what your experiments find.

Digimer

On 06/07/2012 12:12 AM, Eric wrote:
> I'm currently using an HP ProCurve 2824 24-port Gigabit Ethernet switch
> as a backside network for synchronizing file systems between the nodes
> in the group. Each host has four Gigabit NICs, and the goal is to bond
> two of them together to create a 2 Gbps link from any host to any other
> host. What I'm finding, though, is that the bonded links are only
> capable of 1 Gbps from any host to any other host. Is it possible to
> create a multi-Gigabit link between two hosts (without having to
> upgrade to 10G) using a switch that "uses the SA/DA (Source
> Address/Destination Address) method of distributing traffic across the
> trunked links"?
>
> The problem, at least as far as I can tell, comes down to the limitation
> of ARP resolution (in the host) and mac-address tables (in the switch):
>
> When configured to use Active Load Balancing, the kernel driver leaves
> each interface's MAC address unchanged. In this scenario, when Host A
> sends traffic to Host B, the kernel uses the MAC address of only one of
> Host B's NICs as the DA. When the packet arrives at the switch, the
> switch looks up the DA in its MAC address table and forwards the packet
> out the port connected to the NIC whose MAC address matches the DA.
> Packets from Host A to Host B therefore leave the switch through only
> one port, which limits the throughput from Host A to Host B to the
> speed of that single link.
>
> When configured to use IEEE 802.3ad (LACP), the kernel driver assigns
> the same MAC address to all of a host's bonded interfaces. In this
> scenario, when Host A sends traffic to Host B, the kernel uses Host B's
> shared MAC address as the DA. When the packet arrives at the switch,
> the switch computes a hash over the SA/DA pair, consults the MAC
> address table for the DA, and assigns the flow (i.e., traffic from
> Host A to Host B) to one of the ports connected to Host B. Packets from
> Host A to Host B therefore leave the switch through only one port - the
> one selected by the SA/DA hash - which limits the throughput from Host
> A to Host B to the speed of that single link. However, if the flow
> (from Host A to Host B's shared MAC address) were distributed across
> the different ports in round-robin fashion as the packets left the
> switch, the throughput between the hosts would equal the aggregate of
> the links (IIUC).
>
> Is this a limitation of the ProCurve's implementation of LACP? Do
> other switches use different methods of distributing traffic across the
> trunked links? Is there another method of aggregating the links between
> the two hosts (e.g., multipathing)?
>
> TIA,
> Eric Pretorious
> Truckee, CA
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
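The SA/DA hashing Eric describes can be illustrated with a short sketch. The 
exact algorithm is vendor-specific; this mimics the spirit of Linux bonding's 
layer2 xmit_hash_policy (XOR of the low MAC bytes, modulo the number of trunk 
members), and the MAC addresses below are made up:

```python
def l2_hash(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Map a (source MAC, destination MAC) pair to one trunk member."""
    sa = int(src_mac.replace(":", ""), 16)
    da = int(dst_mac.replace(":", ""), 16)
    # XOR the low bytes of SA and DA, then pick a member link by modulo.
    return ((sa ^ da) & 0xFF) % num_links

# Every frame of a given Host A -> Host B conversation hashes to the same
# member link, so a single flow can never exceed one link's speed.
link = l2_hash("00:1a:4b:00:00:01", "00:1a:4b:00:00:02", 2)
```

This is why per-flow hashing (whether SA/DA in the switch or layer2 in the 
bonding driver) aggregates bandwidth only across many flows, never within one.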


-- 
Digimer
Papers and Projects: https://alteeve.com



