[Linux-cluster] ha-lvm

Jonathan Barber jonathan.barber at gmail.com
Wed Nov 3 10:00:42 UTC 2010


On 3 November 2010 02:15, Jankowski, Chris <Chris.Jankowski at hp.com> wrote:
> Corey,
>
> I vaguely remember from my work on UNIX clusters many years ago that if /dir is the mount point of a mounted filesystem, then an interactive shell that has cd'd into /dir (or any directory below it) will prevent the filesystem from being unmounted, i.e. umount /dir will fail.  I believe this restriction exists because unmounting would leave the shell process in an inconsistent state. lsof will not show it.

lsof does show this:
$ mkdir /scratch/foo
$ cd /scratch/foo
$ lsof +D /scratch/foo
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF   NODE NAME
bash    3060 x01024  cwd    DIR  253,4     4096 303105 /scratch/foo
lsof    4606 x01024  cwd    DIR  253,4     4096 303105 /scratch/foo
lsof    4607 x01024  cwd    DIR  253,4     4096 303105 /scratch/foo

This is on Fedora 13 with an ext3 FS, but it is also true for RHEL 4 and 5.
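The underlying mechanism is visible without lsof, too: the kernel records each process's working directory, and a pinned cwd is exactly what makes umount return EBUSY. A minimal sketch (the /tmp/demo path is made up for illustration, and no real mount is involved):

```shell
# Made-up path for illustration; imagine /tmp/demo is a mount point.
mkdir -p /tmp/demo/foo
cd /tmp/demo/foo
# /proc/<pid>/cwd is the reference the kernel holds for this shell:
readlink /proc/$$/cwd
# While any process has its cwd here, "umount /tmp/demo" would fail
# with "device is busy" (EBUSY) until every such process cd's away.
```

So even when lsof output looks clean, it is worth checking /proc/*/cwd and /proc/*/root across all processes, since anything chdir'd or chrooted into the filesystem will block the unmount.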

> Of course most users after login end up in the home directory by default.
>
> I believe that Linux will have the same semantics as UNIX. You can test that easily on a standalone Linux box.
>
> Regards,
>
> Chris Jankowski
>
>
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Corey Kovacs
> Sent: Wednesday, 3 November 2010 07:15
> To: linux clustering
> Subject: [Linux-cluster] ha-lvm
>
> Folks,
>
> I have a 5 node cluster backed by an FC SAN, with 5 VGs each containing a single LV.
>
> I am using ha_lvm and have lvm.conf configured to use tags as per the instructions. Things work fine until I try to migrate the volume containing our home directory (all the others work as expected). The umount for that volume fails and, depending on the active config, the node either reboots itself (self_fence=1) or the service simply fails and gets disabled.
>
> lsof doesn't reveal anything "holding" onto that mount point, yet the umount fails consistently (force_umount is enabled).
>
> Furthermore, it appears that at least one of my VGs has bad tags. Is there a way to show what tags a VG has?
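To answer the tags question inline: the standard LVM2 reporting commands can display tags directly via their -o output fields (vg_tags and lv_tags are real field names; the output shown is made up, with a hostname-style tag as HA-LVM typically uses):

```
$ vgs -o vg_name,vg_tags
  VG       VG Tags
  vg_home  node1.example.com
$ lvs -o vg_name,lv_name,lv_tags
```

A VG tagged with the wrong (or a stale) hostname is a common cause of HA-LVM activation problems, so this is worth checking on every node.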
>
> I've gone over the config several times and although I cannot show the config, here is a basic rundown in case something jumps out...
>
> 5 nodes, dl360g5 2xQcore w/16GB ram
> EVA8100
> 2x4GB FC, multipath
> 5 VGs, each with a single LV, each with an ext3 fs.
> HA-LVM is in use as a measure of protection for the ext3 filesystems; local locking only, with tags enabled via lvm.conf. The initrds are newer than the lvm.conf changes.
>
> I did notice that the ext3 label on the home volume was not /home (it was /ha_home, left over from early testing), but I've corrected that and the umount failure still occurs.
>
> If anyone has any ideas I'd appreciate it.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>



-- 
Jonathan Barber <jonathan.barber at gmail.com>



