[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] CLVM not activating LVs



On 26/08/09 14:19, Jakov Sosic wrote:
Hi! CLVM is not activating my logical volumes.



I have a major issue with CLVM: it is not activating the logical
volumes in my VGs. I have two iSCSI volumes and one SAS volume, with
three VGs in total. On node01, all logical volumes on one iSCSI volume
and on the SAS volume are activated; on the other iSCSI volume, none
are. node02 shows the same situation. On node03, only the LVs from the
SAS volume are activated. lvm.conf is the same on all machines...

This is very strange, because when I boot the machines, all the
services are stopped, so logical volumes shouldn't be activated.

Here is the situation:

[root@node01 lvm]# vgs
   VG          #PV #LV #SN Attr   VSize  VFree
   VolGroupC0    1   7   0 wz--nc  3.41T 1.48T
   VolGroupC1    1   0   0 wz--nc  3.41T 3.41T
   VolGroupSAS   1   2   0 wz--nc 20.63G 4.63G

[root@node01 lvm]# lvs
   LV            VG          Attr   LSize
   nered1        VolGroupC0  -wi-a- 200.00G
   nered2        VolGroupC0  -wi-a- 200.00G
   nered3        VolGroupC0  -wi-a-   1.46T
   nered4        VolGroupC0  -wi-a-  20.00G
   nered5        VolGroupC0  -wi-a-  20.00G
   nered6        VolGroupC0  -wi-a-  20.00G
   nered7        VolGroupC0  -wi-a-  20.00G
   sasnered0     VolGroupSAS -wi-a-   8.00G
   sasnered1     VolGroupSAS -wi-a-   8.00G

[root@node03 cache]# vgs
   VG          #PV #LV #SN Attr   VSize  VFree
   VolGroupC0    1   0   0 wz--nc  3.41T 3.41T
   VolGroupC1    1   0   0 wz--nc  3.41T 3.41T
   VolGroupSAS   1   2   0 wz--nc 20.63G 4.63G

[root@node03 lvm]# lvs
   LV          VG          Attr   LSize Origin
   sasnered0   VolGroupSAS -wi-a-   8.00G
   sasnered1   VolGroupSAS -wi-a-   8.00G


Here is my lvm.conf:

[root@node01 lvm]# lvm dumpconfig
   devices {
   	dir="/dev"
   	scan="/dev"
   	preferred_names=[]
   	filter=["a|^/dev/mapper/controller0$|",
"a|^/dev/mapper/controller1$|", "a|^/dev/mapper/sas-xen$|", "r|.*|"]
   	cache_dir="/etc/lvm/cache"
   	cache_file_prefix=""
   	write_cache_state=0
   	sysfs_scan=1
   	md_component_detection=1
   	md_chunk_alignment=1
   	ignore_suspended_devices=0
   }
   dmeventd {
   	mirror_library="libdevmapper-event-lvm2mirror.so"
   	snapshot_library="libdevmapper-event-lvm2snapshot.so"
   }
   activation {
   	missing_stripe_filler="error"
   	reserved_stack=256
   	reserved_memory=8192
   	process_priority=-18
   	mirror_region_size=512
   	readahead="auto"
   	mirror_log_fault_policy="allocate"
   	mirror_device_fault_policy="remove"
   }
   global {
   	library_dir="/usr/lib64"
   	umask=63
   	test=0
   	units="h"
   	activation=1
   	proc="/proc"
   	locking_type=3
   	fallback_to_clustered_locking=1
   	fallback_to_local_locking=1
   	locking_dir="/var/lock/lvm"
   }
   shell {
   	history_size=100
   }
   backup {
   	backup=1
   	backup_dir="/etc/lvm/backup"
   	archive=1
   	archive_dir="/etc/lvm/archive"
   	retain_min=10
   	retain_days=30
   }
   log {
   	verbose=0
   	syslog=1
   	overwrite=0
   	level=0
   	indent=1
   	command_names=0
   	prefix="  "
   }


Note that the logical volumes from C1 were present on node01 and
node02, but after node03 joined the cluster they disappeared. I'm
running CentOS 5.3.

This is really disappointing. Enterprise Linux? Linux maybe, but not
Enterprise... After much trouble with Linux dm-multipath issues with my
storage - which are unresolved and waiting for RHEL 5.4 - now clvmd.

Note that locking (DLM), cman, rgmanager, qdisk and all the other
cluster services are working without problems. I just don't understand
why CLVM is behaving this way.

I'm thinking about switching to non-clustered LVM - but are there
issues with possible corruption of metadata? I won't create any new
volumes, snapshots or anything similar. The setup is done and should
work like this for an extended period of time... But are there issues
with activation, or with anything else that changes metadata?
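For reference, switching to non-clustered LVM would mean changing the
locking settings in lvm.conf - a sketch only, based on the dumpconfig
output above; whether that is safe on shared storage depends on no node
ever changing metadata concurrently:

```
global {
	# locking_type=3 (clvmd/DLM) in the current config; 1 is the
	# built-in file-based local locking.
	locking_type = 1
	locking_dir = "/var/lock/lvm"
}
```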



You need to mark the shared VGs clustered using the command

# vgchange -cy <VGname>


If you created them while clvmd was active, then this is the default. If not, then you will have to add it yourself as above.
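As a sketch (assuming clvmd is running on every node; VolGroupC1 is
taken from the vgs output above):

```shell
# Show the VG attributes; a 'c' in the attr field means clustered.
vgs -o vg_name,vg_attr

# Mark the VG clustered, then activate its LVs.
vgchange -cy VolGroupC1
vgchange -ay VolGroupC1
```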


Chrissie


