[linux-lvm] Testing the new LVM cache feature

Richard W.M. Jones rjones at redhat.com
Fri May 30 09:04:22 UTC 2014


On Thu, May 29, 2014 at 05:58:15PM -0400, Mike Snitzer wrote:
> On Thu, May 29 2014 at  5:19pm -0400, Richard W.M. Jones <rjones at redhat.com> wrote:
> > I'm concerned that would delete all the data on the origin LV ...
> 
> OK, but how are you testing with fio at this point?  Doesn't that
> destroy data too?

I'm testing with files.  This matches my final configuration, which
is to use qcow2 files on an ext4 filesystem to store the VM disk
images.

I set read_promote_adjustment == write_promote_adjustment == 1 and ran
fio 6 times, reusing the same test files.
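For reference, this is roughly what I'm doing (device and file names
below are placeholders, not my actual setup):

```shell
# The promote_adjustment knobs are mq cache-policy tunables, set with
# "dmsetup message" on the active cache device ("vg-cache" is a
# hypothetical device name -- substitute your own):
dmsetup message vg-cache 0 read_promote_adjustment 1
dmsetup message vg-cache 0 write_promote_adjustment 1

# Then repeat the same fio job against the same test file, so that
# later runs should hit blocks the earlier runs promoted to the SSD:
fio --name=cachetest --filename=/mnt/vmimages/test.img \
    --size=4G --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based
```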

It is faster than the bare HDD (the slow layer), but still much
slower than the bare SSD (the fast layer).  Across the fio runs it's
about 5 times slower than the SSD, and the times don't improve at all
from one run to the next.  (It is more than twice as fast as the HDD,
though.)

Something is not working the way I expected.
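One thing I can do is watch whether promotions are happening at all.
A sketch (my reading of the dm-cache status output, so treat the
field descriptions as an assumption; "vg-cache" is a placeholder
device name):

```shell
# "dmsetup status" on a cache device reports, among other fields,
# read hits/misses, write hits/misses, demotions and promotions.
# Watching the promotions counter across fio runs shows whether
# blocks are actually being moved up to the SSD:
watch -n 5 'dmsetup status vg-cache'
```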

Back to an earlier point.  I wrote and you replied:

> > What would be bad about leaving write_promote_adjustment set at 0 or 1?
> > Wouldn't that mean that I get a simple LRU policy?  (That's probably
> > what I want.)
>
> Leaving them at 0 could result in cache thrashing.  But given how
> large your SSD is in relation to the origin you'd likely be OK for a
> while (at least until your cache gets quite full).

My SSD is ~200 GB and the backing origin LV is ~800 GB.  It is
unlikely the working set will ever grow > 200 GB, not least because I
cannot run that many VMs at the same time on the cluster.

So should I be concerned about cache thrashing?  Specifically: If the
cache layer gets full, then it will send the least recently used
blocks back to the slow layer, right?  (It seems obvious, but I'd
like to check.)

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html



