[Linux-cluster] GFS2 locking in a VM based cluster (KVM)

C.D. ccd.stoy.ml at gmail.com
Thu Mar 17 23:39:12 UTC 2011


On Thu, Mar 17, 2011 at 11:47 PM, Rajagopal Swaminathan
<raju.rajsand at gmail.com> wrote:

> Greetings,
>
> On 3/17/11, C.D. <ccd.stoy.ml at gmail.com> wrote:
> > Hello,
> >
> > I'm not trying to bash gfs2; actually, I would definitely prefer it over
> > ocfs2 anytime. However, it seems it doesn't work well with VMs for some
> > reason.
> >
> > I think this should be investigated by someone at RH, possibly because
> > they are the driving force behind both KVM, libvirt, the cluster
> > software and gfs2.
> >
>
> I am not an employee of Red Hat.
>
> 1. As the quickest measure, turn off atime (if it's on) when remounting
> GFS2 and you will immediately notice a zing in performance.
>
I always mounted with quota=off,noatime,nodiratime, but with them on or
off there wasn't any difference.
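
For reference, the relevant fstab entry in the guests looks roughly like
this (the device path and mount point are placeholders, not my real names):

    /dev/vg_shared/lv_web  /srv/www  gfs2  noatime,nodiratime,quota=off  0 0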


> 2. Why would one use GFS2 to store VMs, unless CLVM is not available to
> offer LUNs on which the VMs are stored?
>
I never said I used GFS2 to store VMs. I used GFS2 as a shared-storage FS
inside the VMs. I have a shared LUN exported through the SAN and through the
fabric switches with multipathing (the host takes care of that). I have an
mpath pool in libvirtd and user_friendly_names off in multipath.conf, so I
see the WWID of each LUN inside virsh/virt-manager. I have all LUNs in all
storage groups of each VM host system so I can do migration, but this
doesn't seem like the proper place for such a discussion.
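
For completeness, the relevant part of my multipath.conf is essentially just:

    # /etc/multipath.conf (excerpt)
    defaults {
        user_friendly_names no   # show WWIDs instead of mpathN aliases
    }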

What I set up was 2 (or more, but in this specific setup it was 2) VMs, each
with a LUN that it uses for its root partition. Those 2 VMs also share one
LUN that I set up with clvmd and gfs2, which is where I share files between
the 2 VMs. All the cluster stuff is running inside the VMs (the guests); the
host doesn't know about the cluster, clvmd or gfs2, nor should it know or
care about what is going on inside the VMs that are running atop of it. In
this specific scenario those 2 VMs are running web servers and this is the
directory with all the files they serve. I hope this sheds some light on my
exact setup.
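
In case it helps anyone, the shared LUN was prepared inside the guests
roughly like this (volume, cluster and FS names here are illustrative, and
the -t argument must match the cluster name in cluster.conf):

    # run once, on one node, with cman and clvmd already running on both guests
    pvcreate /dev/mapper/<shared-lun-wwid>
    vgcreate -c y vg_shared /dev/mapper/<shared-lun-wwid>   # -c y = clustered VG
    lvcreate -n lv_web -l 100%FREE vg_shared
    mkfs.gfs2 -p lock_dlm -t mycluster:webfs -j 2 /dev/vg_shared/lv_web
    # -t <clustername>:<fsname> ties the FS to the cluster; -j 2 = one journal per node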


>
> I understand that step 2 may not be feasible if the system is in
> production (well, unless you know that lovely qemu-img convert command
> for handling disk images or something like that, and dd incantations
> which I can't offhand remember).
>
> But live storage migration is another issue altogether. I haven't had
> a suitable opportunity to cut my teeth in the RHEV Cloud as yet. So no
> comments on it from me as yet.
>
Live storage migration is a non-issue inside my private cloud, as I already
solved that with libvirtd, multipathing, the SAN, the SAN switches, etc. It
took surprisingly little time, not more than 3 or 4 days of poking around and
reading sources, docs and the libvirt devel mailing list, and I was able to
build a stable and high-performing solution from the ground up on top of RH
technology (which I happen to love).
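
For anyone curious: once every host sees all the LUNs, a live migration is
just a one-liner along these lines (domain and host names are placeholders):

    virsh migrate --live webvm1 qemu+ssh://otherhost/system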

>
> Welcome to the wonderful world of Red Hat HA+VM! (It's not RHEV, please.
> RHEV is another, well, lovely and cuddly, beast altogether.)
>
> Get the HA right first, then go for virt during rollouts.
>
At this point I think I'll stay away from RH HA solutions inside VMs. It's
too much hassle, and the documentation from RH is surprisingly sparse. As
most of my machines are running something that can be load balanced and set
up for HA through nginx, I would probably go that way again and would try
CARP on some of the VMs. But that is something for the next stage of my
setup.
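
As a sketch of that direction (backend addresses are made up), the nginx
side of it would be little more than:

    upstream web_backend {
        server 192.168.122.11;
        server 192.168.122.12;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://web_backend;
        }
    }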

> HTH
>
>
> Regards,
>
> Rajagopal
>

Thanks for taking the time to respond,

Regards