
Re: [Linux-cluster] Bonding Interfaces: Active Load Balancing & LACP



On Wed, 6 Jun 2012 21:12:13 -0700 (PDT), Eric <epretorious yahoo com> wrote:
> I'm currently using the HP Procurve 2824 24-port Gigabit Ethernet switch
> for a backside network for synchronizing file systems between the nodes
> in the group. Each host has 4 Gigabit NIC's and the goal is to bond two
> of the Gigabit NIC's together to create a 2 Gbps link from any host to
> any other host, but what I'm finding is that the bonded links are only
> capable of 1 Gbps from any host to any other host. Is it possible to
> create a multi-Gigabit link between two hosts (without having to upgrade
> to 10G) using a switch that "uses the SA/DA (Source Address/Destination
> Address) method of distributing traffic across the trunked links"?
> 
> 
> The problem, at least as far as I can tell, comes down to the limitation
> of ARP resolution (in the host) and mac-address tables (in the switch):
> When configured to use Active Load Balancing, the kernel driver leaves
> each interface's MAC address unchanged. In this scenario, when Host A
> sends traffic to Host B, the kernel uses the MAC address of only one of
> Host B's NIC's as the DA. When the packet arrives at the switch, the
> switch consults the mac-address table for the DA and then sends the
> packet out the interface connected to the NIC whose MAC address equals
> the DA. Thus packets from Host A to Host B will only leave the switch
> through one interface, which limits the throughput from Host A to Host B
> to the speed of that one interface.
> 
> When configured to use IEEE 802.3ad (LACP), the kernel driver assigns
> the same MAC address to all of the host's interfaces. In this scenario,
> when Host A sends traffic to Host B, the kernel uses Host B's shared MAC
> address as the DA. When the packet arrives at the switch, the switch
> creates a hash based on the SA/DA pair, consults the mac-address table
> for the DA, and assigns the flow (i.e., traffic from Host A to Host B)
> to one of the interfaces connected to Host B. Thus packets from Host A
> to Host B will only leave the switch through one interface, the one
> selected by the SA/DA hash, which limits the throughput from Host A to
> Host B to the speed of that one interface. However, if the flow (from
> Host A to Host B's shared MAC address) were distributed across the
> different interfaces in a round-robin fashion (as the packets were
> leaving the switch), the throughput between the hosts would equal the
> aggregate of the links (IIUC).
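That's the crux of it. A toy model makes the SA/DA behavior easy to see
(the hash function and MAC values below are illustrative, not the
Procurve's actual algorithm): because the hash input is only the MAC
pair, every frame between one pair of hosts maps to the same trunk
member.

```python
# Toy model of a switch distributing frames across a 2-port trunk by
# hashing the SA/DA MAC pair. Illustrative only, not the real algorithm.
import zlib

NUM_LINKS = 2  # two trunked ports toward Host B

def egress_link(src_mac: str, dst_mac: str) -> int:
    """Pick a trunk member from the SA/DA pair alone."""
    return zlib.crc32(f"{src_mac}->{dst_mac}".encode()) % NUM_LINKS

# Every frame from Host A to Host B produces the same hash, so the
# whole A->B flow rides a single 1 Gbps member, however many links
# are in the trunk.
links = {egress_link("00:11:22:33:44:01", "00:11:22:33:44:02")
         for _ in range(1000)}
print(links)  # only one link index ever appears
```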
> 
> Is this a limitation of the Procurve's implementation of LACP? Do other
> switches use different methods of distributing traffic across the
> trunked links? Is there another method of aggregating the links between
> the two hosts (e.g., multipathing)?
> 

Not sure if you can choose a different hashing mode on the Procurve, but
the Netgear GSM7352, for example, supports hashing by IP and port among
other modes:

1. Source MAC, VLAN, EtherType, and port ID
2. Destination MAC, VLAN, EtherType, and port ID
3. Source IP and source TCP/UDP port
4. Destination IP and destination TCP/UDP port
5. Source/Destination MAC, VLAN, EtherType and port
6. Source/Destination IP and source/destination TCP/UDP port

By using LACP with mode 6, for example, you may get more aggregate
bandwidth across several applications running simultaneously, but a
single socket is still limited to 1G.
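To see why an IP+port mode helps, here's the same toy model with a
mode-6-style key (again, the hash and field layout are illustrative,
not the GSM7352's actual algorithm): concurrent connections between the
same two hosts can now land on different trunk members, while any one
connection still maps to exactly one member.

```python
# Toy model of mode-6-style distribution: hash over source/destination
# IP plus TCP/UDP ports. Illustrative only, not the switch's real hash.
import zlib

NUM_LINKS = 2

def egress_link(src_ip: str, dst_ip: str,
                src_port: int, dst_port: int) -> int:
    """Pick a trunk member from the IP/port 4-tuple."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_LINKS

# Simultaneous connections (different source ports) between the same
# two hosts can spread across the trunk members...
flows = {egress_link("10.0.0.1", "10.0.0.2", p, 873)
         for p in range(40000, 40016)}
print(sorted(flows))

# ...but one socket (one 4-tuple) always hashes the same way, so a
# single transfer is still capped at one member's line rate.
one = {egress_link("10.0.0.1", "10.0.0.2", 40000, 873)
       for _ in range(100)}
print(one)
```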

> TIA,
> Eric Pretorious
> Truckee, CA

