[linux-lvm] Testing the new LVM cache feature

Mike Snitzer snitzer at redhat.com
Fri May 30 13:55:29 UTC 2014


On Fri, May 30 2014 at  9:46am -0400,
Richard W.M. Jones <rjones at redhat.com> wrote:

> I have now set both read_promote_adjustment ==
> write_promote_adjustment == 0 and used drop_caches between runs.
> 
> I also read Documentation/device-mapper/cache-policies.txt at Heinz's
> suggestion.
> 
> I'm afraid the performance of the fio test is still not the same as
> the SSD (4.8 times slower than the SSD-only test now).

Obviously not what we want.  But you're not doing any repeated IO to
those blocks... it is purely random, right?

So really, the cache ends up waiting for blocks to be promoted from the
origin whenever the IOs from fio don't completely cover the cache block
size you've specified.
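
One quick way to see whether promotions are actually happening is to
watch the cache target's status counters while the test runs (the
device name below is just a placeholder for whatever your cached LV
maps to):

  # Per Documentation/device-mapper/cache.txt the status line is roughly:
  #   <metadata blk size> <used>/<total metadata blks>
  #   <cache blk size> <used>/<total cache blks>
  #   <read hits> <read misses> <write hits> <write misses>
  #   <demotions> <promotions> <dirty> ...
  watch -n1 'dmsetup status <vg-cachedlv>'
  # If the promotion count barely moves while read misses climb, the
  # blocks never make it into the cache.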

Can you go back over those settings?  From the dmsetup table output you
shared earlier in the thread, you're using a cache block size of 128
sectors (i.e. 64K), and your fio random write workload is also using 64K.
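
Just to make sure we're talking about the same kind of job, something
like this is what I'd expect for cache-block-aligned 64K random writes
(the file name, size and depths are placeholders for whatever you're
actually running):

  # 64K random writes, O_DIRECT, aligned to the 64K cache block size
  fio --name=randwrite-64k \
      --filename=/mnt/test/fiofile --size=4g \
      --rw=randwrite --bs=64k --blockalign=64k \
      --direct=1 --ioengine=libaio --iodepth=16 \
      --runtime=60 --time_based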

So unless you have misaligned IO, you _should_ be able to avoid reading
from the origin.  But XFS is in play here... I'm wondering if it is
issuing IO differently than we'd otherwise see if you were testing
against the block devices directly...
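
One way to take XFS out of the picture would be to point the same job
at the cache device itself, e.g. something like this (device name is a
placeholder, and it's obviously destructive, so scratch setups only):

  # WARNING: writes directly to the device and will destroy any
  # filesystem on it
  fio --name=randwrite-64k-raw \
      --filename=/dev/mapper/<vg-cachedlv> \
      --rw=randwrite --bs=64k --direct=1 \
      --ioengine=libaio --iodepth=16 \
      --runtime=60 --time_based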
 
> Would repeated runs of (md5sum virt.* ; echo 3 > /proc/sys/vm/drop_caches)
> not eventually cause the whole file to be placed on the SSD?
> It does seem very counter-intuitive if not.

If you set read_promote_adjustment to 0, it should pull the associated
blocks into the cache.  What makes you think it isn't?  How are you
judging the performance of the md5sum IO?  Do you see IO being issued to
the origin via blktrace or something?
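
Something along these lines would show it (device names are
placeholders; the status output should also echo the policy tunables
at the end, so you can confirm the promote adjustments actually took):

  # confirm the policy settings stuck
  dmsetup status <vg-cachedlv>

  # trace the origin device while re-running the md5sum loop
  blktrace -d /dev/<origin-dev> -o - | blkparse -i -

  # in another terminal:
  echo 3 > /proc/sys/vm/drop_caches && md5sum virt.*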



