[Ovirt-devel] NIC Bonding and Failover

Chris Lalancette clalance at redhat.com
Thu Sep 11 15:15:52 UTC 2008


Daniel P. Berrange wrote:
> On Thu, Sep 11, 2008 at 10:30:25AM -0400, Darryl L. Pierce wrote:
>> In order to make this happen, the following flow occurs during the node's
>> bootup:
>>
>>  1. the node submits its hardware details, including the list of NICs 
>>  2. the server updates the database, deleting any records for NICs that
>>     weren't reported, and saving records for new NICs reported
>>  3. the node makes a request to the new managed node controller, asking for
>>     the configuration file
>>     a. previously this was a hard-coded file; now it is a generated file
>>     b. the node submits the list of mac addresses mapped to the interface
>>        names for the system
>>     c. the returned configuration will contain at most two sections:
>>        1. a pre-augtool script
>>        2. an augtool file
>>  4. the configuration file is saved to /var/tmp/node-config 
>>  5. the configuration file is then passed to bash for execution, to extract
>>     the two files
>>  6. if the file /var/tmp/pre-config-script exists, it is executed
>>     a. this segment loads the bonding kernel module with the correct 
>>        bonding mode
>>  7. if the file /var/tmp/node-augtool exists, then it is passed to augtool
>>  8. the network service is then restarted and the bonding is available.
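>>
>> To make steps 3-7 concrete, the two files extracted from the generated
>> /var/tmp/node-config might look roughly like the sketch below (device
>> names, augtool paths and values here are purely illustrative, not what the
>> controller actually emits):
>>
>>   # --- /var/tmp/pre-config-script (extracted from /var/tmp/node-config) ---
>>   #!/bin/bash
>>   # load the bonding driver with the mode chosen on the server
>>   modprobe bonding mode=active-backup miimon=100
>>
>>   # --- /var/tmp/node-augtool (extracted from /var/tmp/node-config) ---
>>   set /files/etc/sysconfig/network-scripts/ifcfg-bond0/DEVICE bond0
>>   set /files/etc/sysconfig/network-scripts/ifcfg-bond0/BOOTPROTO dhcp
>>   set /files/etc/sysconfig/network-scripts/ifcfg-bond0/ONBOOT yes
>>   save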
>>
>> To configure a node for bonding/failover/load balancing on the server, the
>> admin has to set a bonding type for the node. The choices are:
>>
>> 1. Load Balancing 
>> 2. Failover
>> 3. Broadcast
>> 4. Link Aggregation
>>
>> Only one type can be set per node.
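>>
>> Presumably these map onto the standard Linux bonding driver modes.  A rough
>> sketch of how the pre-config script could pick the module option follows;
>> $BONDING_TYPE is just a placeholder name, and the mapping for "Load
>> Balancing" is a guess (it could be balance-rr or balance-alb):
>>
>>   case "$BONDING_TYPE" in
>>     load_balancing)   MODE=balance-rr    ;;  # mode 0, round-robin
>>     failover)         MODE=active-backup ;;  # mode 1
>>     broadcast)        MODE=broadcast     ;;  # mode 3
>>     link_aggregation) MODE=802.3ad       ;;  # mode 4, needs switch support
>>   esac
>>   modprobe bonding mode=$MODE miimon=100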
> 
> Is that a limitation of the Linux bonding driver, or an explicit design
> choice?
> 
> If I have a system with lots of NICs I could imagine that the storage
> LAN might want a different bonding config from the guest LAN and from
> the management LAN.  Then again, you could argue that in that case you
> can just set up a pair of NICs for each LAN, all in Link Aggregation,
> which effectively gives you load balancing/failover anyway.

I'm pretty sure modern Linux allows you to have more than one type of bond
active at a time.  I have no idea how to do it, though, so you'll have to
research it.  I'm fine with this being a temporary limitation, but as danpb
points out, you might want to bond differently on machines with lots of NICs.
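
If someone does research it, the bonding driver's sysfs interface (described
in Documentation/networking/bonding.txt) looks like the place to start.
Roughly, and completely untested:

  # create a second bond alongside bond0 and give it its own mode
  echo "+bond1"  > /sys/class/net/bonding_masters
  echo "802.3ad" > /sys/class/net/bond1/bonding/mode   # set while bond1 is down
  ifconfig bond1 up
  # enslave two spare NICs (they have to be down when enslaved)
  ifconfig eth2 down; echo "+eth2" > /sys/class/net/bond1/bonding/slaves
  ifconfig eth3 down; echo "+eth3" > /sys/class/net/bond1/bonding/slaves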

> 
>> The user will then be able to select two or more NICs on that node and
>> enslave them to a bonded interface. To do that, they will:
>>
>> 1. create a bonded interface and give it a name and an interface name
>> 2. select two or more NICs and associate them with the bonded interface
>>
>> The next time the node boots, it will load the bonding module and pass in 
>> the
>> appropriate mode for the bonding type selected.
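>>
>> The net effect on the node should be the usual master/slave ifcfg files,
>> along these lines (device names and addressing are illustrative):
>>
>>   # /etc/sysconfig/network-scripts/ifcfg-bond0
>>   DEVICE=bond0
>>   BOOTPROTO=dhcp
>>   ONBOOT=yes
>>
>>   # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
>>   DEVICE=eth0
>>   MASTER=bond0
>>   SLAVE=yes
>>   ONBOOT=yes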
>>
>> Questions?
> 
> Don't forget that we need to add bridging on top of that if the bonded pair
> is to be used for the guest LAN.  Potentially also bridges on top of VLANs
> on top of bonds.

Yes, this question is very appropriate too.  Again, I'm not saying we have to
support this right now, but we need to be conscious of it.  There are 3 pieces
at work here (bridges, VLANs, and bonds), which means there are six different
orderings in which you could stack them: VLANs on top of bonds on top of
bridges, bridges on top of bonds on top of VLANs, etc.  We have to figure out
which combinations are completely insane, which are valid and make sense, and
then make sure we can handle those.  Yes, this is complicated :(.
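
For example, the stacking that obviously makes sense for the guest LAN is a
bridge on top of a VLAN on top of the bond.  With the usual tools that would
be something like this (names made up, untested):

  # bond0 already exists; carry the guest LAN on VLAN 100 over the bond,
  # then bridge it so guest interfaces can be attached
  vconfig add bond0 100          # creates bond0.100
  ifconfig bond0.100 up
  brctl addbr brvlan100
  brctl addif brvlan100 bond0.100
  ifconfig brvlan100 up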

-- 
Chris Lalancette



