
[Linux-cluster] LVM2 over GNBD import?


I am currently playing around with the GFS and the cluster tools which I
checked out of CVS on December 3rd.  I've created a mock setup using three
cluster nodes, one of which is exporting a disk via GNBD over a private
network to the other two nodes.

The physical setup: pool-serv1 exports its disk over a private GNBD network
to the two nfs-serv nodes, wolverine and skunk.

What I thought I wanted to do was export /dev/hdc from pool-serv1 over GNBD,
import it on both nfs-serv nodes, use LVM2 to create some logical volumes, and
then slap GFS on the logical volumes so both nfs-servers can use the same LVs
concurrently. Instead, I get either a "Device /dev/gnbd/pool1 not found" error
or a metadata error, depending on the state of the partition table on the
block device: "Device /dev/gnbd/pool1 not found" when there are no partitions
on the disk, and the metadata error after I use 'dd' to zero out the partition
table.
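
For reference, the GNBD side of this looks roughly as follows (the export
name "pool1" matches the device path above; exact invocations are from
memory, so treat this as a sketch):

On pool-serv1:

    gnbd_serv
    gnbd_export -d /dev/hdc -e pool1

On each nfs-serv node:

    modprobe gnbd
    gnbd_import -i pool-serv1

After the import, the device shows up on the clients as /dev/gnbd/pool1.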

I can execute pvcreate on pool-serv1 sometimes, but not others, and I can't
figure out exactly under which circumstances it "works" and under which it
doesn't. Either way, when it does "work", none of the other nodes seem to see
the PV, VG or LVs I create locally on pool-serv1.
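
Concretely, the LVM commands I run on pool-serv1 are something like this (the
VG and LV names are just examples):

    pvcreate /dev/hdc
    vgcreate pool_vg /dev/hdc
    lvcreate -L 10G -n pool_lv pool_vg

On the nfs-serv nodes I'd expect something like "vgscan" followed by
"vgchange -ay" to pick the volumes up, but nothing shows there.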

What I've done so far on every node:

cman_tool join
fence_tool join

After those commands, every node seems to join the cluster perfectly. Looking
in /proc/cluster confirms this.
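
Specifically, the cman status files there all look right, e.g.:

    cat /proc/cluster/nodes
    cat /proc/cluster/status
    cat /proc/cluster/services

show all three nodes as joined members of the cluster and the fence domain.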

My /etc/cluster/cluster.conf is as follows:

<?xml version="1.0"?>
<cluster name="GNBD_SAN_TEST" config_version="1">
        <cman>
                <multicast addr=""/>
        </cman>
        <clusternodes>
                <clusternode name="wolverine" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="human" ipaddr=""/>
                                </method>
                        </fence>
                        <multicast addr="" interface="eth1"/>
                </clusternode>
                <clusternode name="skunk" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="human" ipaddr=""/>
                                </method>
                        </fence>
                        <multicast addr="" interface="eth1"/>
                </clusternode>
                <clusternode name="pool-serv1" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="human" ipaddr=""/>
                                </method>
                        </fence>
                        <multicast addr="" interface="eth0"/>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice name="human" agent="fence_manual"/>
        </fencedevices>
</cluster>

I'm thinking this might have something to do with fencing, since I've read
that you need to fence GNBD client nodes using fence_gnbd, but I have no
actual foundation for that assumption. Also, my understanding of fencing
is... poor.
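
If fence_gnbd really is required here, I'm guessing the config would need
entries along these lines (the attribute names are purely my guess, not
something I've verified against the agent):

    <!-- guessed fencedevice entry pointing at the GNBD server -->
    <fencedevice name="gnbd" agent="fence_gnbd" servers="pool-serv1"/>

    <!-- and in each importing node's fence method -->
    <device name="gnbd" nodename="wolverine"/>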

I suppose my major question is this:

How should I be setting this up? I want the two nfs-servers to both import
the same GNBD export (shared storage), see the same LVs on that GNBD block
device, and put GFS on the LVs so both nfs-servers can read/write to the GNBD
block device at the same time. If I'm going about this totally the wrong way,
please let me know.
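
For completeness, the end state I'm aiming for would be something like this
(the lock table uses my cluster name; the FS name, journal count and mount
point are just examples):

    # on one node, once the LV is visible cluster-wide:
    gfs_mkfs -p lock_dlm -t GNBD_SAN_TEST:pool_gfs -j 2 /dev/pool_vg/pool_lv

    # then on both nfs-serv nodes:
    mount -t gfs /dev/pool_vg/pool_lv /mnt/pool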

Any insight would be helpful.

Ryan Thomson

Ryan Thomson, Systems Administrator
University Of Calgary, Biocomputing
Phone: (403) 220-2264
Email: thomsonr ucalgary ca
