[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [linux-lvm] Very slow i/o after snapshotting

On 9.7.2013 14:43, Micky wrote:
> Thanks for the quick response.
>
> It is working fine without LVM. It is just the snapshot that makes
> things slow. For instance, take a snapshot of an LV that's backing a
> DomU -- from that point on, the load inside that DomU will keep
> increasing until the snapshot is removed. The same happens with KVM.
> The only difference without a hypervisor is that it is not terribly
> slow, but it still lags: you get 15MB/s after taking a snapshot and
> ~90-100MB/s without a snapshot on the same volume!
>
> Even SSH becomes slow; output freezes and then comes in sudden bursts
> after a few seconds.
>
> As for the reason for copying big chunks out of LVs -- it is simple --
> copy-on-write magic as a short-term image backup strategy! But I
> realized that the magic of LVM comes with a price, and that is I/O
> latency ;)

Do you write to the snapshot?

It's a known fact that the performance of the old snapshot implementation is very far from ideal - it's a very simple implementation, meant to give you a consistent view of a volume so you can make a backup - and for a backup it doesn't really matter how slow it is (it just needs to remain usable).
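For illustration, a minimal "snapshot as short-term backup" flow might look like the sketch below (vg0, data and the backup path are hypothetical names, not taken from this thread; this needs root and a real VG with free extents):

```shell
# Hypothetical names: vg0 = volume group, data = origin LV
lvcreate -s -n data-snap -L 1G /dev/vg0/data         # consistent point-in-time view
dd if=/dev/vg0/data-snap of=/backup/data.img bs=1M   # read the frozen image
lvremove -f /dev/vg0/data-snap                       # remove ASAP -- the COW overhead stops here
```

The key point is the last line: the COW penalty lasts exactly as long as the snapshot exists, so removing it right after the copy keeps the slow window short.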

You have just a very simple list of blocks stored in the COW device together with a list of exceptions, and modified blocks are simply copied out first (so there is no reference counting or anything resembling it...).

Using 512kB chunks is actually the worst choice for old snapshots (well, in fact for any snapshots), especially given the way the exception store is implemented.

I'd suggest going with much smaller chunks - i.e. 4, 8 or 16kB - since if you update a single 512-byte sector, a whole 512kB chunk of data has to be copied!!! That's a really bad idea, unless you overwrite large contiguous portions of the device.
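To put numbers on that write amplification (a back-of-the-envelope sketch, assuming one full chunk copy per first write into a chunk):

```shell
# Worst case: a single 512-byte sector write triggers a copy of the whole chunk.
sector=512
big_chunk=$((512 * 1024))    # 512kB chunk
small_chunk=$((4 * 1024))    # 4kB chunk
echo "512kB chunk: $((big_chunk / sector))x amplification"    # 1024x
echo "  4kB chunk: $((small_chunk / sector))x amplification"  # 8x
```

The chunk size is set at snapshot creation time with lvcreate's -c/--chunksize option (e.g. --chunksize 4k).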

And yes - if you have a rotational hdd, you also need to expect horrible seek times when reading/writing the snapshot target...

And yes - there are some horrible Seagate hdd drives (as I saw just yesterday) where two programs reading from the disk at the same time can degrade throughput from 100MB/s to 4MB/s (with no dm involved).

