[Linux-cluster] GNBD and gfs - wrong FS type

Hal hal_bg at yahoo.com
Thu Jul 19 19:43:36 UTC 2007


Just for the record :)
problem solved! 

For some reason, using gfs1 on FC6 requires the gfs2 tools to be installed as
well. And since gfs2 from the cluster-2.00.00 sources does not compile, the
gfs2-utils package should be installed instead, and miraculously, from that
very moment, gfs starts working...
How is one supposed to guess that gfs requires the gfs2 tools along with the
gfs1 tools, just as gfs.ko needs gfs2.ko?
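
For anyone who lands here with the same error, the sequence that ended up
working on my FC6 nodes looks roughly like this (a sketch from memory, not a
recipe; package and module names as on my box):

```shell
# userland: gfs2-utils from the FC6 repos, since gfs2 from the
# cluster-2.00.00 sources would not build for me
yum install gfs2-utils
# kernel side: gfs.ko depends on gfs2.ko, so load gfs2 first
modprobe gfs2
modprobe gfs
modprobe lock_dlm
```

After that, the gnbd_import / cman / mount steps from the transcript below
went through without the "wrong fs type" error.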

Hal
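
PS: in hindsight, the "can't mount journal #4294967295" line in the dmesg
output below was the real clue. 4294967295 is simply -1 stored in an unsigned
32-bit field: the mount helper never got a journal slot assigned and passed
"no journal" (-1) down to the kernel, my guess being because the userland
pieces that gfs2-utils provides were missing. The arithmetic, for the curious:

```shell
# -1 truncated to 32 unsigned bits is exactly the magic number from dmesg
printf '%s\n' $(( -1 & 0xFFFFFFFF ))
```

which prints 4294967295, so any "journal number" that large in a GFS error
message really means "none was assigned".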

--- Hal <hal_bg at yahoo.com> wrote:

> hello
> I have trouble mounting a GNBD-imported gfs on both nodes of my test cluster.
> If the locking protocol is set to "lock_nolock" it mounts fine, but that is
> not what I want. When I use lock_dlm I get:
> mount: wrong fs type, bad option, bad superblock on /dev/gnbd/global_disk,
>        missing codepage or other error
>        In some cases useful info is found in syslog - try
>        dmesg | tail  or so
> 
> What am I doing wrong?
> Full output follows (SELinux is NOT in enforcing mode):
> 
> [root at node2 ~]# modprobe gnbd
> [root at node2 ~]# modprobe gfs2
> [root at node2 ~]# modprobe gfs
> [root at node2 ~]# modprobe lock_dlm
> [root at node2 ~]# gnbd_import -n -i 192.168.0.60
> gnbd_import: created directory /dev/gnbd
> gnbd_import: created gnbd device global_disk
> gnbd_recvd: gnbd_recvd started
> [root at node2 ~]# cd /etc/init.d/
> [root at node2 init.d]# ./cman start
> Starting cluster: 
>    Loading modules... done
>    Mounting configfs... done
>    Starting ccsd... done
>    Starting cman... done
>    Starting daemons... done
>    Starting fencing... done
>                                                            [  OK  ]
> [root at node2 ~]# gfs_mkfs -p lock_dlm -t testc:gfs1 -j6 /dev/gnbd/global_disk 
> This will destroy any data on /dev/gnbd/global_disk.
>   It appears to contain a gfs filesystem.
> 
> Are you sure you want to proceed? [y/n] y
> 
> Device:                    /dev/gnbd/global_disk
> Blocksize:                 4096
> Filesystem Size:           851880
> Journals:                  6
> Resource Groups:           14
> Locking Protocol:          lock_dlm
> Lock Table:                testc:gfs1
> 
> Syncing...
> All Done
> [root at node2 ~]# mount -t gfs /dev/gnbd/global_disk /mnt
> mount: wrong fs type, bad option, bad superblock on /dev/gnbd/global_disk,
>        missing codepage or other error
>        In some cases useful info is found in syslog - try
>        dmesg | tail  or so
> 
> [root at node2 ~]# dmesg |tail
> GFS: fsid=testc:gfs1.0: Scanning for log elements...
> GFS: fsid=testc:gfs1.0: Found 0 unlinked inodes
> GFS: fsid=testc:gfs1.0: Found quota changes for 0 IDs
> GFS: fsid=testc:gfs1.0: Done
> SELinux: initialized (dev gnbd0, type gfs), uses xattr
> audit(1184744195.259:4): avc:  denied  { getattr } for  pid=1848 comm="hald"
> name="global_disk" dev=tmpfs ino=19253 scontext=system_u:system_r:hald_t:s0
> tcontext=root:object_r:device_t:s0 tclass=blk_file
> Trying to join cluster "lock_dlm", "testc:gfs1"
> Joined cluster. Now mounting FS...
> GFS: fsid=testc:gfs1.4294967295: can't mount journal #4294967295
> GFS: fsid=testc:gfs1.4294967295: there are only 6 journals (0 - 5)
> [root at node2 ~]# 
> 
> 
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
> 



       



