
Re: [linux-lvm] how to recover after thin pool metadata did fill up?



On Thu, Oct 18, 2012 at 03:28:17PM +0200, Spelic wrote:
> So, supposing one is aware of this problem beforehand, at pool creation
> can this problem be worked around by using --poolmetadatasize  to
> make a metadata volume much larger than the default?
> 
> And if yes, do you have any advice on the metadata size we should use?

There are various factors that affect this:

- block size
- data dev size
- nr snapshots

The rule of thumb I've been giving is:

Work out the number of blocks in your data device, i.e. data_dev_size / data_block_size.
Then multiply by 64.  This gives the metadata size in bytes.

The above calculation should be fine for people who're primarily using
thinp for thin provisioning rather than lots of snapshots.  I recommend
these people use a large block size, e.g. 4M.  I don't think this is
what lvm does by default (at one point it was using a block size of
64k).
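The rule of thumb above can be sketched in a few lines; the pool and
block sizes below are illustrative examples of my own, not values from
the original question:

```python
def thin_metadata_bytes(data_dev_size, data_block_size):
    """Estimate thin-pool metadata size using the rule of thumb above:
    roughly 64 bytes of metadata per data block."""
    nr_blocks = data_dev_size // data_block_size
    return nr_blocks * 64

# Convenient unit constants.
KiB = 1 << 10
MiB = 1 << 20
TiB = 1 << 40

# A 1 TiB pool with the recommended 4M block size needs ~16 MiB of metadata.
print(thin_metadata_bytes(1 * TiB, 4 * MiB) // MiB, "MiB")   # → 16 MiB

# The same pool with a 64k block size needs 64x as much: ~1024 MiB.
print(thin_metadata_bytes(1 * TiB, 64 * KiB) // MiB, "MiB")  # → 1024 MiB
```

This makes the trade-off concrete: shrinking the block size by a factor
of N grows the metadata estimate by the same factor, which is why a
small default block size can exhaust a default-sized metadata volume.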

Snapshots require extra copies of the metadata for devices.  Often the
data is shared, but the btrees for the metadata diverge as cow
exceptions occur.  So if you're using snapshots, allocate more space.
This is compounded by the fact that it's often better to use small
block sizes for snapshots.

- Joe

