[Linux-cluster] gfs mounted but not working

Marc linuxr at gmail.com
Mon Nov 6 01:02:16 UTC 2006


This is as far as I got recently, and I called RH support and got nowhere.
GFS and RHCS are apparently very flaky, despite supposedly being mission
critical and enterprise ready.  This sounds like a 'split brain'
configuration, where you finagle with the crappy software until each node
is a cluster unto itself rather than part of the whole.

Maybe Red Hat, Inc. can make a pretty red HTML applet that can convince me
otherwise.  In the meantime, despite a ton of hard work on my part, this
experience has convinced my client to switch to Microsoft clustering as
fast as possible.


Actually, I don't know which is more disappointing: the flaky Red Hat
software or the flaky Red Hat people.  Something to ponder as I remove the
Shadowman logo from my car.


Sincerely,
another Ubuntu convert






On 11/5/06, romero.cl at gmail.com <romero.cl at gmail.com> wrote:
>
> Hi.
>
> I'm trying your method, but still have a problem:
>
> Note: /dev/sdb2 is a local partition on my second SCSI hard drive (no
> RAID), running on an HP ProLiant.
>
> On node3:
> # /usr/sbin/vgcreate vg01 /dev/sdb2
>   Volume group "vg01" successfully created
> # /usr/sbin/vgchange -cy vg01
>   Volume group "vg01" is already clustered
> # /usr/sbin/lvcreate -n node3_lv -L 67G vg01
>   Error locking on node node4: Internal lvm error, check syslog
>   Failed to activate new LV.
>
>   --->On node4 log : lvm[6361]: Volume group for uuid not found:
> sgufJEs53VJSJTKG0vA1dLHXTthjnFctmfjC6YddzZvY3LI6db300wqEp8H0H58H
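>
> (A few checks worth running on node4 before retrying; a rough sketch
> assuming clvmd and the usual cluster stack are in use:)
>
> service clvmd status                  (clvmd must be running on every node)
> grep locking_type /etc/lvm/lvm.conf   (typically 3 for cluster locking)
> fdisk -l /dev/sdb                     (can node4 see the device at all?)
> pvscan                                (does node4 see the new PV and VG?)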
>
>
> Then I can mount /dev/vg01/node3_lv as GFS on node3, but node4 can't see
> the new files.
>
> What I'm trying to do is mount two partitions (one on node3, the other on
> node4) as one big shared drive using GFS, and then expand this to four
> nodes.
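>
> (GFS shares one block device among all nodes; it cannot merge two
> node-local disks into a single filesystem.  A rough check that node3 and
> node4 actually see the same device, assuming /dev/sdb2 as above:)
>
> pvdisplay /dev/sdb2 | grep "PV UUID"   (run on both nodes; the UUIDs
>                                         must match)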
>
> Any help is much appreciated!  (I'm a cluster newbie.)
> Thanks.
>
>
> > Hi,
> >
> > When using GFS in a clustered environment, I strongly recommend you use
> > LVM rather than the raw device for your GFS partition.  Without a
> > clustered LVM of some sort, there is no locking coordination between
> > the nodes.  I'm assuming, of course, that device sdb is some kind of
> > shared storage, like a SAN.
> >
> > For example, assuming that your /dev/sdb2 has no valuable data yet, I
> > recommend doing something like this:
> >
> > pvcreate /dev/sdb2
> > vgcreate your_vg /dev/sdb2           (where "your_vg" is the name you
> >                                       choose for your new vg)
> > vgchange -cy your_vg                 (turn on the clustered bit)
> > lvcreate -n your_lv -L 500G your_vg  (where 500G is the size of your
> >                                       file system and your_lv is the
> >                                       name you choose for your lv)
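> >
> > (Optional sanity check; assumes LVM2's vgs tool is available.  A 'c' at
> > the end of the attr column confirms the clustered bit is set:)
> >
> > vgs -o vg_name,vg_attr your_vg
> >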
> > gfs_mkfs -p lock_dlm -t node1_cluster:node1_gfs -j 8 /dev/your_vg/your_lv
> >                                      (on only one node)
> > At this point you've got to bring up the cluster infrastructure, if it
> > isn't already up.
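> > On a RHEL 4 era cluster that usually means something like this on every
> > node (a sketch; init script names can vary by release):
> >
> > service ccsd start
> > service cman start
> > service fenced start
> > service clvmd start
> >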
> > Next, mount the logical volume from both nodes:
> > mount -t gfs /dev/your_vg/your_lv /users/home
> >
> > Now when you touch a file on one node, the other node should see it.
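> >
> > For example (hypothetical file name):
> >
> > [node3]# touch /users/home/testfile
> > [node4]# ls /users/home              (testfile should show up here)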
> >
> > I hope this helps.
> >
> > Regards,
> >
> > Bob Peterson
> > Red Hat Cluster Suite
> >