[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] adding volume to cluster

On Thu, Oct 2, 2008 at 12:24 PM, John Ruemker <jruemker redhat com> wrote:
Terry Davis wrote:
Awesome.  I rebooted and applied all available updates and now it works.  The only thing worth noting in the updates was a kernel update to 2.6.18-92.1.13.el5.  I think the reboot did it (for some reason).

On Wed, Oct 1, 2008 at 12:06 PM, Terry Davis <terrybdavis gmail com> wrote:

    On Wed, Oct 1, 2008 at 11:42 AM, Alasdair G Kergon <agk redhat com> wrote:

       I hope that problem was fixed in newer packages.

       Meanwhile try running 'clvmd -R' between some of the commands.

       If all else fails, you may have to kill the clvmd daemons in
       the cluster
       and restart them, or even add a 'vgscan' on each node before
       the restart.

        agk redhat com
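Alasdair's fallback steps (refresh clvmd, and if that fails restart the daemons cluster-wide with an optional vgscan first) could be sketched roughly as follows. This is a sketch, not a tested procedure: the `service` invocations assume the stock RHEL 5 clvmd init script, and the commands would need to be repeated on every node in the cluster.

```shell
# Cheap first attempt: ask the clvmd daemons to reload their device cache
clvmd -R

# If all else fails, restart clvmd (run on each node; RHEL 5 init script assumed)
service clvmd stop

# Optionally rescan for volume groups on each node before the restart
vgscan

service clvmd start
```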

   Just a sanity check.  I killed all the clvmd daemons and started
   clvmd back up.  I created the PV on node A:

   [root omadvnfs01a ~]# pvcreate /dev/sdh1
     Physical volume "/dev/sdh1" successfully created

   Node B knows nothing of /dev/sdh1 but it does exist:
   [root omadvnfs01b ~]# ls /dev/sdh*

This is the problem.  If you partition the device on one node, you must run 'partprobe' on all nodes so that they update their partition tables.  Without doing this, LVM has no idea what /dev/sdh1 is and therefore cannot lock on it.  After running partprobe, do 'clvmd -R' so that clvmd reloads its device cache and knows which devices are available.  After that you can proceed with pvcreate, vgcreate, lvcreate, etc.
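Putting that sequence together, the full workflow after partitioning a shared device on one node might look like the sketch below. The device and node names (/dev/sdh, omadvnfs01a/omadvnfs01b) come from the thread; the volume group and logical volume names are hypothetical examples, and ssh access between nodes is assumed.

```shell
# 1. The partition was created on node A; make every other node re-read
#    the partition table for the shared device
ssh omadvnfs01b partprobe /dev/sdh

# 2. Refresh the clvmd device cache.  Per clvmd(8), -R signals all clvmd
#    daemons in the cluster, so one invocation should be enough.
clvmd -R

# 3. Now LVM can lock the new partition cluster-wide
pvcreate /dev/sdh1
vgcreate vg_shared /dev/sdh1            # hypothetical VG name
lvcreate -L 10G -n lv_data vg_shared    # hypothetical LV name/size
```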

Ahhhh, the step that I was missing all along.  I have gone ahead and carved that into the back of my hand with a dull pencil so I don't forget next time.

Thanks for the help!
