[Linux-cluster] strange GFS problem
Stanislav Sedov
stas at core.310.ru
Tue Jun 14 22:08:24 UTC 2005
On Tue, Jun 14, 2005 at 03:14:57PM -0700, Igor wrote:
> I've a two node linux cluster. Need to share 1 HD to
> read/write simultaneously, from either box. Should
> either fail, I need the other to keep functioning.
>
> I created 1 lvm on each server (changed lvm.conf as
> per instructions). Did gfs_mkfs -- no errors. Ran
> this (on each server)
>
> > ccsd
> > cman_tool join
> > fence_tool join
> > clvmd
> > vgchange -aly
> > mount -t gfs /dev/gfs_vg/lvol0 /data
>
> No problem. Now, when I create a file on each server,
> it doesn't appear on the other. What's the deal?
>
> (this is my cluster.conf:)
> <?xml version="1.0"?>
> <cluster name="acme_cluster" config_version="1">
>
> <cman two_node="1" expected_votes="1">
> </cman>
>
> <clusternodes>
> <clusternode name="10.1.1.1" votes="1">
> <fence>
> <method name="single">
> <device name="human" ipaddr="10.1.1.1"/>
> </method>
> </fence>
> </clusternode>
> <clusternode name="10.1.1.2" votes="1">
> <fence>
> <method name="single">
> <device name="human" ipaddr="10.1.1.2"/>
> </method>
> </fence>
> </clusternode>
> </clusternodes>
>
> <fence_devices>
> <fence_device name="human" agent="fence_manual"/>
> </fence_devices>
>
> </cluster>
>
> [root at acmegrid1 data]# cat /proc/cluster/status
> Protocol version: 5.0.1
> Config version: 1
> Cluster name: acme_cluster
> Cluster ID: 14004
> Cluster Member: Yes
> Membership state: Cluster-Member
> Nodes: 2
> Expected_votes: 1
> Total_votes: 2
> Quorum: 1
> Active subsystems: 6
> Node name: 10.1.1.1
> Node addresses: 10.1.1.1
>
> I'd really appreciate any help with this. Thank you.
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> http://www.redhat.com/mailman/listinfo/linux-cluster
Did you select the proper locking protocol when you ran gfs_mkfs?
It looks like you are using lock_nolock, which lets each node mount
the filesystem independently without coordinating writes with the
other node -- exactly the symptom you describe.
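A minimal sketch of checking and fixing this (assuming the cluster name acme_cluster from your cluster.conf; the filesystem name "data" after the colon is an assumption -- pick whatever you like, but it must be the same on both nodes):

```shell
# Check which locking protocol the superblock records; if the
# filesystem was made without -p, this will likely show lock_nolock:
gfs_tool sb /dev/gfs_vg/lvol0 proto

# Re-create the filesystem with DLM locking. -p selects the lock
# protocol, -t is <clustername>:<fsname>, and -j 2 creates one
# journal per node. WARNING: this destroys all data on the volume.
gfs_mkfs -p lock_dlm -t acme_cluster:data -j 2 /dev/gfs_vg/lvol0
```

If you want to keep the existing data, you can instead change the protocol in the superblock with the filesystem unmounted on both nodes: `gfs_tool sb /dev/gfs_vg/lvol0 proto lock_dlm`. Either way, remount on both nodes afterwards.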