
Re: [Linux-cluster] Trying to get GNBD with GFS Working



On Mon, Jul 26, 2004 at 03:13:35PM -0400, Rory_Savage consultant peoplesoft com wrote:
> 
> 
> 
>
> Please Help!
> 
> I have a two node cluster (hal-n1, and hal-n2).  I exported the /dev/hda4
> filesystem from hal-n2
> 
> [root hal-n2 cluster]# gnbd_export -c -v -e export1 -d /dev/hda4
> gnbd_export: created GNBD export1 serving file /dev/hda4
> 
> log file:
> 
> Jul 26 14:22:35 hal-n2 gnbd_serv[3853]: gnbd device 'export1' serving
> /dev/hda4 exported with 130897620 sectors
> 
> While trying to import the device on hal-n1, I am receiving the following
> error:
> 
> [root hal-n1 src]# gnbd_import -v -i hal-n2
> gnbd_import: ERROR cannot get /sys/class/gnbd/gnbd0/name value : No such
> file or directory

GNBD requires sysfs to run. Somewhere in your kernel config file, you should
have:

CONFIG_SYSFS=y


Then run the command:
# mount -t sysfs sysfs /sys
to mount sysfs.

For more information on sysfs, see Documentation/filesystems/sysfs.txt
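
In case it's useful, here's a quick way to check both pieces. (Caveat: the
/boot/config-* path is distro-dependent and won't exist for a kernel built
from source unless you copied your .config there; check the source tree's
.config in that case.)

```shell
# Verify the running kernel was built with sysfs support
# (adjust the path if your config lives elsewhere, e.g. /usr/src/linux/.config)
grep CONFIG_SYSFS /boot/config-$(uname -r)

# Mount sysfs for the current session
mount -t sysfs sysfs /sys

# To make it persistent across reboots, add this line to /etc/fstab:
#   sysfs   /sys   sysfs   defaults   0 0
```

After mounting, /sys/class/gnbd/ should appear once the gnbd module is
loaded, and gnbd_import should be able to read the device names from it.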
 
Hope this helps

-Ben

bmarzins redhat com

> * My first reaction is, since when has "/sys" needed to exist?
> I examined all of the build options for GNBD and could not find a
> preference/location setting for anything related to "/sys".  As far as I
> know, this directory is not native to Red Hat.
> 
> System Configuration and Parameters
> 
> Kernel 2.6.7 from source
> 
> Kernel Config Options:
> 
> CONFIG_MD=y
> CONFIG_BLK_DEV_MD=m
> CONFIG_MD_LINEAR=m
> CONFIG_MD_RAID0=m
> CONFIG_MD_RAID1=m
> CONFIG_MD_RAID5=m
> CONFIG_MD_RAID6=m
> CONFIG_MD_MULTIPATH=m
> CONFIG_BLK_DEV_DM=m
> CONFIG_DM_CRYPT=m
> CONFIG_BLK_DEV_GNBD=m
> 
> CONFIG_CLUSTER=m
> CONFIG_CLUSTER_DLM=m
> CONFIG_CLUSTER_DLM_PROCLOCKS=y
> 
> CONFIG_LOCK_HARNESS=m
> CONFIG_GFS_FS=m
> CONFIG_LOCK_NOLOCK=m
> CONFIG_LOCK_DLM=m
> CONFIG_LOCK_GULM=m
> 
> # GFS and GNBD sources obtained via CVS
> 
> [root hal-n1 src]# cat /proc/cluster/nodes
> Node  Votes Exp Sts  Name
>    1    1    1   M   hal-n1
>    2    1    1   M   hal-n2
> 
> [root hal-n1 src]# cat /proc/cluster/services
> 
> Service          Name                              GID LID State     Code
> Fence Domain:    "default"                           1   2 join
> S-6,20,1
> [1]
> 
> DLM Lock Space:  "clvmd"                             2   3 run       -
> [1 2]
> 
> [root hal-n1 src]# cat /proc/cluster/status
> Version: 2.0.1
> Config version: 1
> Cluster name: xcluster
> Cluster ID: 28724
> Membership state: Cluster-Member
> Nodes: 2
> Expected_votes: 1
> Total_votes: 2
> Quorum: 1
> Active subsystems: 3
> Node addresses: 10.1.1.1
> 
> [root hal-n2 cluster]# cat /proc/cluster/nodes
> Node  Votes Exp Sts  Name
>    1    1    1   M   hal-n1
>    2    1    1   M   hal-n2
> 
> [root hal-n2 cluster]# cat /proc/cluster/services
> 
> Service          Name                              GID LID State     Code
> Fence Domain:    "default"                           0   2 join
> S-1,80,2
> []
> 
> DLM Lock Space:  "clvmd"                             2   3 run       -
> [1 2]
> 
> [root hal-n2 cluster]# cat /proc/cluster/status
> Version: 2.0.1
> Config version: 1
> Cluster name: xcluster
> Cluster ID: 28724
> Membership state: Cluster-Member
> Nodes: 2
> Expected_votes: 1
> Total_votes: 2
> Quorum: 1
> Active subsystems: 3
> Node addresses: 10.1.1.2
> 
> 
> 
> 
> --
> Rory Savage, Charlotte DSI Group
> Product & Technology
> PeopleSoft Inc.
> 14045 Ballantyne Corporate Place
> Suite 101
> Charlotte, NC 28277
> Email: rory_savage peoplesoft com
> Phone: 704.401.1104
> Fax: 704.401.1240
> 
> 
> 
> 
> --
> Linux-cluster mailing list
> Linux-cluster redhat com
> http://www.redhat.com/mailman/listinfo/linux-cluster

