[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: starting Fedora Server SIG

On Fri, Nov 21, 2008 at 02:00:34PM -0600, Les Mikesell wrote:
>> blkid ?
> What does that do for unformatted disks? If it reports an md device, how  
> do I know that I shouldn't separately use the things it separately also  
> reports that might be the underlying elements?  Same for lvm if it shows  
> them, or lvm on top of md?

I don't follow what you are trying to accomplish here.  All the data 
needed to link the devices in the stack together is already available 
via existing tools.  It may not be convenient, but it is there.
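For what it's worth, the stacking relationships are exposed directly 
in sysfs; here is a rough sketch (assuming a 2.6 kernel with sysfs 
mounted -- device names are illustrative):

```shell
#!/bin/sh
# Sketch: walk /sys/block and report which devices (md, dm/LVM)
# are assembled from other devices.  The slaves/ directory under
# each stacked device lists its underlying components, so anything
# that shows up as a slave is already claimed by a higher layer
# and should not be used separately.
for dev in /sys/block/*; do
    name=$(basename "$dev")
    slaves=$(ls "$dev/slaves" 2>/dev/null)
    if [ -n "$slaves" ]; then
        echo "$name is built from: $(echo $slaves | tr '\n' ' ')"
    fi
done
```

So an md device over sda1/sdb1 would report both partitions as its 
slaves, which answers the "how do I know not to use the underlying 
elements" question.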

>>> tool like mii-tool should enumerate your NICs and show which have 
>>> link  established - and any other useful information they can detect. 
>>>  Then,  
>> ethtool ?
> How do I enumerate the devices with ethtool?

Ok, this isn't so great:

for i in `ifconfig -a | cut -d' ' -f1 | sort -u | grep .`; do ethtool $i | grep -E '^Settings|Link detected'; done

but this works, and I verified that it shows the hardware link status 
(in addition to the ifconfig UP/DOWN status).

ip link show
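If you want per-interface ethtool output anyway, enumerating from 
sysfs is cleaner than scraping ifconfig (a sketch; ethtool needs 
root for some drivers, the enumeration itself does not):

```shell
#!/bin/sh
# Sketch: list interfaces from /sys/class/net instead of parsing
# ifconfig output, then ask ethtool for the hardware link state.
for path in /sys/class/net/*; do
    i=$(basename "$path")
    echo "== $i =="
    ethtool "$i" 2>/dev/null | grep 'Link detected'
done
```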

>> It probably wouldn't take too much to write a script around blkid and  
>> ethtool to do this.
> Don't forget these steps are already a level up from where we need to  
> start.  We may be moving an installed system to one where the drivers  
> don't match current hardware, or we may have added new NICs or disk  
> controllers and have to get the appropriate drivers loaded.

I'm not sure I follow what you are saying here.  What was once kudzu's 
job is now handled by hal/udev, which handle hardware changes 
automatically, and the drivers are always there since they are all 
compiled as modules for the kernel.  Ease of automatic hardware 
detection and driver loading is one reason why Fedora has resisted 
trying to split up the kernel modules into separate packages, and why 
all the X11 video card drivers are installed by default.  Are you 
against running hal/udev because they are userspace daemons that take 
up memory?
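To make the "drivers are always there" point concrete, here is roughly 
what the udev machinery boils down to for a newly appearing device (a 
sketch; the PCI bus path is illustrative):

```shell
#!/bin/sh
# Sketch: every PCI device exports a modalias string in sysfs, and
# udev's hotplug/coldplug handling essentially feeds that string to
# modprobe, which matches it against the module alias tables and
# loads the right driver automatically.
for alias in /sys/bus/pci/devices/*/modalias; do
    [ -r "$alias" ] || continue
    echo "would run: modprobe $(cat "$alias")"
done
```

That is why moving a system image to different hardware generally just 
works: the aliases resolve to whatever modules the new hardware needs.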

> And the scripts need to accommodate things that can't be enumerated too,  
> like nfs/iscsi mounts and multiple vlans on an interface.

Again, I'm not sure what you want from this.  NFS/iSCSI mounts can't 
be automatically discovered from within the installed system 
image--the information about what to mount from where and to where 
needs to come from somewhere.  Perhaps what you want is a centralized 
configuration management system like Puppet or bcfg2?  How does this 
relate to the kernel namespace <--> physical device issue we've been 
discussing above?  How should it work in an ideal world?
