
Re: [Ovirt-devel] Managed node NIC management



Darryl

I wasn't really involved in this discussion previously, so this may be a naive question, but can you key off the MAC address of each of the interfaces?

I realize that this may change if the NIC is not integrated on the mobo (and yes, even the mobo's MAC address can be changed), but it would seem to handle the overwhelming majority of cases. If the MAC needs to change, treat it as a new card. By the way, is there an existing method for dealing with new interfaces?
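To make the suggestion concrete, here is a small sketch of MAC-keyed matching. The data model is entirely hypothetical (the node reports name/MAC pairs, the server keys its nics table on MAC), but it shows how a renamed interface would still match its record while an unknown MAC falls out as a new card:

```python
def classify_interfaces(reported, known_macs):
    """Split a node's reported interfaces into known cards (matched
    by MAC, regardless of current name) and new cards.

    reported:   {interface_name: mac} as sent up by the managed node
    known_macs: set of MACs already in the server's nics table
    """
    known, new = {}, {}
    for name, mac in reported.items():
        mac = mac.lower()
        if mac in known_macs:
            known[mac] = name   # db record is keyed on MAC; name is advisory
        else:
            new[mac] = name     # unseen MAC: treat it as a new card
    return known, new

# Example: the first card kept its MAC but may have a new name;
# the second MAC has never been seen, so it is flagged as new.
reported = {"eth0": "00:16:3E:AA:AA:01", "eth1": "00:16:3e:aa:aa:02"}
known, new = classify_interfaces(reported, {"00:16:3e:aa:aa:01"})
print(known)  # {'00:16:3e:aa:aa:01': 'eth0'}
print(new)    # {'00:16:3e:aa:aa:02': 'eth1'}
```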

Also (potentially showing more ignorance), I will assume that there may be multiple paths between the managed node and the oVirt server. Are we setting arp_filter=1 in /etc/sysctl.conf? (We should.)
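For reference, the setting in question would look like this in /etc/sysctl.conf (loaded with `sysctl -p`), so each interface only answers ARP for addresses it actually owns:

```
# /etc/sysctl.conf -- restrict ARP replies to the interface that
# owns the target address, useful with multiple NICs on multiple subnets
net.ipv4.conf.all.arp_filter = 1
net.ipv4.conf.default.arp_filter = 1
```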
Also, is the PXE service tied to a particular interface, or just to the server machine? For instance,
if the oVirt server and the managed node each have two NICs, one on each subnet, do we need to make sure that the PXE request only goes out on a certain subnet, or is that handled for us?

-mark


Darryl Pierce wrote:
In a discussion today, we talked about managing NIC configuration for managed nodes from within the server suite.

My understanding of how that would work is:

1. managed node runs the awake script, notifying the server suite that it is awake.
2. managed node identifies all of its hardware
3. server suite generates a network configuration file based on what is in the nics table for this managed node
4. managed node pulls down this configuration
5. managed node configures its network interfaces based on the configuration
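As a sketch of step 3, the server side might render something like a Red Hat ifcfg file per interface. The column names here are guesses for illustration, not the actual nics table schema:

```python
def render_ifcfg(nic):
    """Render an ifcfg-style config from a (hypothetical) nics-table row."""
    lines = [
        "DEVICE=%s" % nic["interface_name"],
        "HWADDR=%s" % nic["mac"],   # pin the config to the card, not the name
        "ONBOOT=yes",
    ]
    if nic.get("ip_address"):
        lines += [
            "BOOTPROTO=static",
            "IPADDR=%s" % nic["ip_address"],
            "NETMASK=%s" % nic["netmask"],
        ]
    else:
        lines.append("BOOTPROTO=dhcp")
    return "\n".join(lines) + "\n"

nic = {"interface_name": "eth0", "mac": "00:16:3e:aa:aa:01",
       "ip_address": "192.168.50.2", "netmask": "255.255.255.0"}
print(render_ifcfg(nic))
```

Writing HWADDR alongside DEVICE is the usual way to keep the configuration attached to the physical card even if the kernel hands out a different name.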

The main point brought up is that there is no guarantee that eth0 on one boot will be eth0 on the next boot. If a card gets moved, or a kernel upgrade changes the order in which devices are probed, then the interface name can change. As such, we have the challenge of deciding how to respond when the interface name for a card changes between boots.
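For what it's worth, this is the problem udev's persistent-net rules try to solve: pin a name to a MAC so probe order stops mattering. Something along these lines (the MAC shown is made up):

```
# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:3e:aa:aa:01", NAME="eth0"
```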

The responses that come to mind for me are:

1. server suite ignores the interface name in the db and uses what is sent up during identification; it sends an alert to the admin that the network has changed
2. server suite returns an error condition to the managed node, alerts the admin and shuts the managed node down
3. server suite returns no configuration file and alerts the admin that the managed node's network was not configured
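Response #1 could be as simple as the following sketch: trust the MAC, adopt the reported name, and queue an alert for the admin when a rename is detected. The record structure is hypothetical:

```python
def reconcile(db_nics, reported):
    """Adopt the names a node reports, flagging renames for the admin.

    db_nics:  {mac: stored_interface_name} from the nics table
    reported: {mac: current_interface_name} from node identification
    Returns the updated mapping and a list of alert messages.
    """
    alerts = []
    for mac, name in reported.items():
        old = db_nics.get(mac)
        if old is not None and old != name:
            alerts.append("nic %s renamed %s -> %s" % (mac, old, name))
        db_nics[mac] = name   # the reported name wins; MAC is the key
    return db_nics, alerts

db, alerts = reconcile({"00:16:3e:aa:aa:01": "eth0"},
                       {"00:16:3e:aa:aa:01": "eth1"})
print(alerts)  # ['nic 00:16:3e:aa:aa:01 renamed eth0 -> eth1']
```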

#2 is pretty heavy-handed. #1 seems a reasonable response at first blush, since it's assumed that the only thing different is the interface's name.

So, I'd like some input from everybody on what we ought to do. One suggestion that came up during our discussion was passing in some network configuration parameters with the boot parameters. I'll be honest, I'm not sure how we would go about that. But, if someone can help explain it to me, I'd be glad to explore that path.
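On the boot-parameter suggestion: the kernel already accepts an ip= argument, so one hedged sketch of what the PXE config could pass on the append line looks like this (all addresses are placeholders, and whether the node-side scripts would honor it is exactly the open question):

```
# pxelinux config fragment: network settings on the kernel command line;
# ip= fields are client-ip:server-ip:gateway:netmask:hostname:device:autoconf
append ip=192.168.50.2:192.168.50.1:192.168.50.1:255.255.255.0:node1:eth0:off
```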

Thoughts?

--
Darryl L. Pierce, Sr. Software Engineer
Red Hat, Inc. - http://www.redhat.com/
oVirt - Virtual Machine Management - http://www.ovirt.org/
"What do you care what other people think, Mr. Feynman?"




------------------------------------------------------------------------

_______________________________________________
Ovirt-devel mailing list
Ovirt-devel redhat com
https://www.redhat.com/mailman/listinfo/ovirt-devel

