[linux-lvm] LVM Thin Provisioning size limited to 16 GiB?

Sebastian Riemer sebastian.riemer at profitbricks.com
Fri Mar 2 13:44:17 UTC 2012


Hi list,

I've tested LVM thin provisioning with the latest LVM user-space from
git, together with kernel 3.2.7.

I've got 24 SAS HDDs combined into 12 MD RAID-1 arrays, and I want a
thin pool striped across all of the RAID-1 arrays. But the pool seems
to be limited to 16 GiB in size. With anything bigger the pool can't be
activated and can't be removed any more, which forces me to reboot.

I've also tried explicitly setting --poolmetadatasize to 16 GiB with a
100 GiB data pool, but with the same result; see the command below.
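For reference, that attempt was along these lines (same striping flags
as the commands further down):

   lvcreate -i 12 -I 64 -L 100G --poolmetadatasize 16G -T test/pool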
I also ran some benchmarks. Performance wasn't that bad, but it could
be much better (at least double).

Is this the current state of development, or am I doing something wrong?

Here are my commands:
   vgcreate test /dev/md/test*
   lvcreate -i 12 -I 64 -L 16G -T test/pool
   lvcreate -V 45G -T test/pool -n test00
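
After that, lvs should show the pool's hidden data and metadata
sub-LVs separately (assuming an lvs recent enough for thin support):

   lvs -a test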

Furthermore, writing to and then reading from the thin LV only works
for up to 11 GiB. Beyond that, messages like the following show up in
the kernel log.

   device-mapper: space map metadata: out of metadata space
   device-mapper: thin: dm_thin_insert_block() failed

It looks like pool metadata and pool data aren't separated at the
current state of development.
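
If it helps with debugging: as far as I understand it, the fill levels
of data and metadata can be checked separately, e.g. with

   lvs -o lv_name,data_percent,metadata_percent test/pool
   dmsetup status | grep thin-pool

where the thin-pool status line reports used/total metadata blocks
followed by used/total data blocks, so it should be visible which of
the two actually runs out.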

Regards,

Sebastian Riemer



