[Linux-cluster] clvm and local lvm volumes needed at boot

Patrick Caulfield pcaulfie at redhat.com
Tue Jul 4 06:44:14 UTC 2006


Ramon van Alteren wrote:
> Hi,
> 
> I'm currently testing and building a GFS cluster using coRAID devices.
> I'm making good progress so far, but I've run into a problem that I
> can't find much documentation on.
> 
> We're using a standard setup which includes a number of logical volumes
> for our local filesystems.
> 
> I need those volumes to mount at boot (/usr and /var are among them).
> However, that doesn't succeed at the moment because vgscan bails out in
> my startup scripts.
> 
> I've modified my lvm.conf to set the locking_type to 2 and load the
> cluster-aware locking library.
> vgscan gives an error on boot that it can't connect to the local socket,
> presumably because it's trying to contact the rest of the cluster.
> cman and company aren't up yet at that stage of the boot process
> because networking isn't up yet.
> 
> Is it possible to use a different locking library per volume group?
> That way I could detect the local volumes at boot, mount them, and add
> another boot script to detect the cluster volumes later in the boot process.
> 
> Are there any other solutions or possibilities to solve this?
> 
> I can't find much documentation on clvm and clustering options, so I
> thought I'd ask here.
> I hope this is the right place to ask; if not, I would appreciate any
> pointers on where to look.
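(For context, the cluster-aware locking setup described above typically
amounts to an lvm.conf fragment along these lines; the library name below
is an assumption based on the LVM2 cluster locking module of that era and
may differ per distribution:

    # /etc/lvm/lvm.conf -- sketch only
    global {
        # 2 = use an external, cluster-aware locking library
        locking_type = 2
        locking_library = "liblvm2clusterlock.so"
    }
)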


You need to set the local volume group to non-clustered (and the clustered volume
group to clustered) using vgchange -c[yn].
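
A minimal sketch of that step, assuming the local VG is named vg_local and
the shared GFS VG vg_shared (both names are hypothetical):

    # mark the local VG non-clustered so it can be activated
    # without cman/clvmd being available
    vgchange -cn vg_local

    # mark the shared (GFS) VG clustered
    vgchange -cy vg_shared

The clustered flag is stored in the VG metadata, so this only needs to be
done once, not on every boot.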

When activating the local VG in the startup scripts, add --ignorelockingfailure
to the vgchange -ay command.
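
In the early boot script that would look roughly like this (again with the
hypothetical VG names from above); the clustered VG is then activated in a
later script, once cman and clvmd are running:

    # early in boot, before networking and cman: activate only the
    # local, non-clustered VG and ignore the locking failure
    vgchange -ay --ignorelockingfailure vg_local

    # later in boot, after cman and clvmd have started:
    vgchange -ay vg_shared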



-- 

patrick



