[Linux-cluster] GFS2 and VM High Availability/DRS

Wes Modes wmodes at ucsc.edu
Wed Feb 1 21:43:09 UTC 2012


Howdy, thanks for all your answers here.  With your help (particularly
Digimer), I was able to set up my little two-node GFS2 cluster.  I can't
pretend to understand everything yet, but I have a blossoming awareness
of the what, why, and how.

The way I finally set it up for my test cluster was:

 1. LUN on SAN
 2. configured through ESXi as RDM
 3. RDM made available to OS
 4. parted RDM device
 5. pvcreate/vgcreate/lvcreate to create logical volume on device
 6. mkfs.gfs2 to create a GFS2 filesystem on the volume, backed by clvmd,
    cman, etc. (rough commands below)
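
For the record, steps 5 and 6 looked roughly like this.  The device, VG,
and LV names are made up for illustration; the cluster name in -t has to
match the one in cluster.conf, and -j 2 gives one journal per node on a
two-node cluster:

    pvcreate /dev/sdb1
    vgcreate -cy gfs2vg /dev/sdb1
    lvcreate -l 100%FREE -n gfs2lv gfs2vg
    mkfs.gfs2 -p lock_dlm -t testcluster:gfs2fs -j 2 /dev/gfs2vg/gfs2lv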

It works, and that's great.  BUT the literature says VMware's
vMotion/HA/DRS doesn't support RDM (though others say that isn't a
problem).

I am setting up GFS2 on CentOS running on VMware, backed by a SAN.  We
want to take advantage of VMware's High Availability (HA) and Distributed
Resource Scheduler (DRS), which let the ESXi cluster restart or migrate a
guest on another host if its current host fails or becomes overloaded.
I've come across some contradictory statements regarding the
compatibility of RDMs with HA/DRS.  So naturally, I have some questions:

1)  If my shared cluster filesystem resides on an RDM on a SAN and is
available to all of the ESXi hosts, can I use vMotion and DRS or not? 
If so, what are the limitations?  If not, why not?

2)  If I cannot use an RDM for the cluster filesystem, can I put the
shared disk on VMFS as a virtual disk so VMware can deal with it?  What
are the limitations of that approach?

3)  Is there some other magic way, such as in-guest iSCSI initiators or
something else that bypasses VMware entirely?  Anyone have experience
with this?  Can anyone point me to detailed docs on it?
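
(To make the "bypassing VMware" part concrete: I mean having each guest
log in to the shared LUN directly with the in-guest software initiator,
roughly like this, where the portal address and IQN are just placeholders:

    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2012-02.com.example:gfs2-lun -p 192.168.1.50 --login

and then run clvmd and GFS2 on top of the resulting block device, the
same as with the RDM.)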

Wes
