
Re: [Linux-cluster] gfs mounted but not working

romero cl gmail com wrote:


Now I'm trying this, and it works! For now...

Two nodes: node3 & node4
node4 exports its /dev/sdb2 with gnbd_export as "node4_sdb2"
node3 imports node4's /dev/sdb2 with gnbd_import (appearing as /dev/gnbd/node4_sdb2)

on node3: gfs_mkfs -p lock_dlm -t node3:node3_gfs -j 4 /dev/gnbd/node4_sdb2
          mount -t gfs /dev/gnbd/node4_sdb2 /users/home

on node4: mount -t gfs /dev/sdb2 /users/home

and both nodes can read and write the same files on /users/home!!!
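For reference, the export/import commands behind the steps above would look roughly like this (a sketch only; the flags are the standard gnbd_export/gnbd_import ones, and the device and export names match those used above):

```shell
# On node4: start the GNBD server and export the raw partition
gnbd_serv                                   # GNBD server daemon
gnbd_export -v -e node4_sdb2 -d /dev/sdb2   # export /dev/sdb2 as "node4_sdb2"

# On node3: import everything node4 exports
gnbd_import -v -i node4                     # creates /dev/gnbd/node4_sdb2

# Then make the filesystem once (from node3) and mount on both nodes,
# node3 through the GNBD device and node4 through its local /dev/sdb2.
```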

Now I'm going for this:

4 nodes on a dedicated 3com 1Gbit ethernet switch:

node2 exporting with gnbd_export /dev/sdb2 as "node2_sdb2"
node3 exporting with gnbd_export /dev/sdb2 as "node3_sdb2"
node4 exporting with gnbd_export /dev/sdb2 as "node4_sdb2"

node1 (main) will import all "nodeX_sdb2" devices and create a logical volume named
"main_lv" including:

   /dev/gnbd/node2_sdb2
   /dev/gnbd/node3_sdb2
   /dev/gnbd/node4_sdb2
   /dev/sdb2 (its own)

Next I will try to export the new big logical volume with "gnbd_export" and
then run gnbd_import on each node.
That way each node will see "main_lv", mount it on /users/home as GFS,
and get one big shared filesystem to work on together.
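The assembly step on node1 could be sketched with standard LVM commands like the following (the volume group name "main_vg" is made up for illustration; everything else follows the plan above):

```shell
# On node1: import the three remote exports (one import per server)
gnbd_import -v -i node2    # -> /dev/gnbd/node2_sdb2
gnbd_import -v -i node3    # -> /dev/gnbd/node3_sdb2
gnbd_import -v -i node4    # -> /dev/gnbd/node4_sdb2

# Turn the three imports plus the local partition into physical volumes
pvcreate /dev/gnbd/node2_sdb2 /dev/gnbd/node3_sdb2 \
         /dev/gnbd/node4_sdb2 /dev/sdb2

# Group them and carve out one big logical volume
vgcreate main_vg /dev/gnbd/node2_sdb2 /dev/gnbd/node3_sdb2 \
                 /dev/gnbd/node4_sdb2 /dev/sdb2
lvcreate -l 100%FREE -n main_lv main_vg

# Re-export the logical volume so every node can import and mount it
gnbd_export -v -e main_lv -d /dev/main_vg/main_lv
```

Note that this makes node1 a single point of failure: every other node's I/O to main_lv has to pass through node1's GNBD export.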

Is this the correct way to do it??? Could it deadlock???

Sorry if my english isn't very good ;)

I personally don't know GNBD very well. In theory, it exports block devices (over the network), and there is nothing wrong with gluing them together to form a big LVM volume. However, most of the GNBD configurations I know of export the block devices from a group of server nodes that typically have fairly decent disk resources; another group of nodes then imports these block devices as GNBD clients. I'm not sure how well the system would work if you mix GNBD clients and servers together on the same nodes, particularly under heavy workloads.

-- Wendy
