[linux-lvm] LVM commands extremely slow during raid check/resync

Larkin Lowrey llowrey at nuclearwinter.com
Mon Mar 26 23:51:08 UTC 2012


That helped bring the lvcreate time down from 2min to 1min, so that's an
improvement.  Thank you.

The source of the remaining slowdown is the writing of metadata to my 4
PVs. The writes are small and the arrays are all raid5, so each metadata
write also requires a read (a read-modify-write cycle on the partial
stripe). I'm still at a loss as to why this was not a problem when running
F15, but the filter is a workable solution for me so I'll leave it alone.

--Larkin

On 3/26/2012 3:55 PM, Ray Morris wrote:
> Put -vvvv on the command and see what takes so long. In our case, 
> it was checking all of the devices to see if they were PVs.
> "All devices" includes LVs, so it was checking LVs to see if they
> were PVs, and activating an LV triggered a scan in case it was 
> a PV, so activating a volume group was especially slow (hours).
> The solution was to use "filter" in lvm.conf like this:
>
> filter = [ "r|^/dev/dm.*|", "r|^/dev/vg-.*|", "a|^/dev/sd*|", "a|^/dev/md*|", "r|.*|" ]
>
> That checks only /dev/sd* and /dev/md*, to see if they are PVs, 
> skipping the checks of LVs to see if they are also PVs. Since the
> device list is cached, use vgscan -vvvv to check that it's checking 
> the right things and maybe delete that cache first. My rule IS 
> a bit redundant because I had trouble getting the simpler form 
> to do what I wanted. I ended up using a belt and suspenders 
> approach, specifying both "do not scan my LVs" and "scan only
> /dev/sd*".



