[dm-devel] [PATCH for-3.14] dm cache: add total cache blocks to status output

Mike Snitzer snitzer at redhat.com
Thu Jan 9 22:13:23 UTC 2014


On Thu, Jan 09 2014 at  4:28pm -0500,
Brassow Jonathan <jbrassow at redhat.com> wrote:

> Yes, that'd be nice if we could have this.
> 
> It would be great if chunk_size (i.e. cache block size) was also
> included somehow.  If I had that, I could calculate the size of the
> cache using the status output alone.  I won't complain much if it
> isn't there though, because I can get that from the mapping table.
> 
> The reason I like adding the total number of cache blocks is because I
> /can't/ get that information from either type of status (_INFO or
> _TABLE) for the cache target.  Instead, I would have to get the cache
> device from the mapping table and query that device for its size -
> possible, but the level of indirection is a pain.  As it sits in the
> kernel today, it seems strange to provide some information, but not
> enough to fill in the whole picture.

So ideally you want both the cache blocksize and the metadata blocksize.
We can easily add them; it wouldn't be the end of the world.  What format
seems best?

<#used metadata blocks>/<#total metadata blocks> <metadata block size>
<#used cache blocks>/<#total cache blocks> <cache block size>
...

or:

<metadata block size> <#used metadata blocks>/<#total metadata blocks>
<cache block size> <#used cache blocks>/<#total cache blocks>
...

or something else?

(fyi, these status blocksizes would be expressed in 512b sectors)
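For concreteness, emitting the first layout from the target's status
function would look something like the below.  Rough sketch only: the
*_metadata/*_cache counters are placeholder names for whatever we pull
out of the metadata device, only DMEMIT and cache->sectors_per_block
exist as-is, and every size is in 512b sectors:

    /* sketch, not the real cache_status() */
    DMEMIT("%llu/%llu %llu %llu/%llu %llu ",
           (unsigned long long) nr_used_metadata,
           (unsigned long long) nr_total_metadata,
           (unsigned long long) metadata_block_size,   /* sectors */
           (unsigned long long) nr_used_cache,
           (unsigned long long) nr_total_cache,
           (unsigned long long) cache->sectors_per_block);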

> Speaking of values I can't compute with the _INFO and _TABLE status...
> "block size" does not mean the same thing for the metadata and data
> numbers - one is in chunk_size and the other is in something else
> (neither seems to be in sectors, as is DM custom).

The cache's block_size is given in 512b sectors during table load and is
then stored in cache->sectors_per_block, so I'm not sure what you mean.
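For reference, the block size comes straight off the table line (counted
in 512b sectors) and gets stashed in the cache struct; paraphrasing
cache_create() from memory, not verbatim:

    /* table line: cache <metadata dev> <cache dev> <origin dev> <block size> ... */
    cache->sectors_per_block = ca->block_size;  /* e.g. 512 sectors == 256KiB blocks */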

As for the cache metadata block size, it is fixed at 4K (just like
dm-thin-metadata).  And yes, DM_CACHE_METADATA_BLOCK_SIZE is in bytes,
not sectors... not a big deal, as we convert it to sectors when storing
it in the metadata's superblock. *shrug*
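Roughly this, going from memory on the superblock field name:

    /* 4096 bytes >> SECTOR_SHIFT (9) == 8 sectors stored on disk */
    disk_super->metadata_block_size =
            cpu_to_le32(DM_CACHE_METADATA_BLOCK_SIZE >> SECTOR_SHIFT);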

> Honestly, I'm not
> very sure why the ratios are provided for the metadata area... who
> cares?  Is it info we don't need?  No-one has ever asked if the RAID
> or mirror log areas are mostly full.  I don't need to worry about
> overfilling, do I?

If the metadata device isn't sized appropriately you can run out of
metadata space.  But in general, yes, the metadata use is fixed based on
the size of the particular cache device and the chosen policy (due to the
policy's hint size).

So we _could_ add a negative check that warns or errors out if the
provided metadata device isn't adequate for addressing the entire cache
device.
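
Something along these lines; purely a sketch, none of the helper or
field names below exist today:

    /*
     * Sketch only: work out how many metadata blocks it would take to
     * map every cache block (mapping + per-block policy hint) and
     * refuse the table load if the metadata device can't hold them.
     * metadata_blocks_needed() and metadata_dev_size_in_blocks() are
     * made-up names for illustration.
     */
    static int check_metadata_capacity(struct cache *cache, char **error)
    {
            dm_block_t needed = metadata_blocks_needed(cache->cache_size,
                                                       cache->policy_hint_size);
            dm_block_t avail = metadata_dev_size_in_blocks(cache->metadata_dev);

            if (needed > avail) {
                    *error = "metadata device too small to address the entire cache device";
                    return -EINVAL;
            }

            return 0;
    }

It would get called from cache_create() once the metadata device has
been opened and the cache block count is known.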



