[Vdo-devel] [lvm-team] Trying to test thin provisioned LVM on VDO

James Hogarth james.hogarth at gmail.com
Thu Aug 30 12:39:20 UTC 2018


On Wed, 11 Jul 2018 at 16:14, Bryan Gurney <bgurney at redhat.com> wrote:
>
> On Wed, Jul 11, 2018 at 9:57 AM, James Hogarth <james.hogarth at gmail.com> wrote:
> > On 11 July 2018 at 14:38, Nikhil Kshirsagar <nkshirsa at redhat.com> wrote:
> >> Hello,
> >>
> >> Would it be a good idea to document this in a KCS article and also
> >> raise a BZ preemptively?
> >>
> >> Regards,
> >> Nikhil.
> >
> > That's probably a good idea ... or if anyone does have a support
> > contract handy, or can follow the appropriate channels internally to
> > follow it up at Red Hat, then that would also be worthwhile...
>
> James,
>
> I just reproduced this on a RHEL 7.5 test system, so I'll be able to
> follow through on this.
>
> I created the following BZ for this issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=1600156
>
>
> Thanks,
>
> Bryan
>
> >
> > For what it's worth, the thin provisioning on VDO example is explicitly
> > listed here:
> >
> > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-qs-requirements
> >
> > But from what we now see of the code, this really is not the best idea
> > at this time.
> >

Hi all,

Just wanted to follow up on this quickly, since I noticed a vdo/kvdo
commit on GitHub (and a Fedora Copr update) that reflected this
discussion.

I quickly spun up a thin pool on a VDO backing device ... the chunk size was 256K.

The default VDO max_discard_sectors value was still 8.

The same message popped up (as expected) about discards not being
passed down, etc.
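
For reference, this is roughly the setup I used - the device name,
VG/LV names and sizes below are just examples, not exactly what I ran:

  # VDO volume on an example device, with a thin pool and thin LV on top
  vdo create --name=vdo0 --device=/dev/sdb --vdoLogicalSize=100G
  pvcreate /dev/mapper/vdo0
  vgcreate vg_vdo /dev/mapper/vdo0
  lvcreate --type thin-pool -L 50G --chunksize 256K -n tpool vg_vdo
  lvcreate -V 40G --thinpool tpool -n thinvol vg_vdo

  # kvdo discard limit, in 512-byte sectors (the default of 8 is 4K)
  cat /sys/kvdo/max_discard_sectors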

Then I echoed 512 into /sys/kvdo/max_discard_sectors and reactivated
the thin pool ... the same message occurred (not expected).
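
Concretely, something along these lines (pool name as in the example
above):

  echo 512 > /sys/kvdo/max_discard_sectors   # 512 * 512B = 256K chunk size
  lvchange -an vg_vdo/tpool
  lvchange -ay vg_vdo/tpool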

Then I did a fresh reboot to get back to "neutral" ... and this time
did a vdo stop -a ... changed the sysfs value ... vdo start -a (with
the required dance of unmounts/mounts and LVM
deactivation/reactivation) and did *not* get the message discussed.
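
Approximately this sequence, with the example names from above and a
hypothetical mount point:

  umount /mnt/thin
  vgchange -an vg_vdo        # deactivate the LVs sitting on the VDO volume
  vdo stop -a
  echo 512 > /sys/kvdo/max_discard_sectors
  vdo start -a
  vgchange -ay vg_vdo
  mount /dev/vg_vdo/thinvol /mnt/thin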

I'm guessing the setting only takes effect when a VDO volume is
started/activated?

I carried out a dd of random data into the thin volume's mount point
and noted the increase in used blocks. I then removed the data, noted
that df updated as expected, and ran an fstrim on the mount point; the
VDO volume correctly showed a drop in used blocks.
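
The smoke test was along these lines (paths and sizes illustrative
only):

  dd if=/dev/urandom of=/mnt/thin/testfile bs=1M count=1024
  vdostats --human-readable     # used blocks go up
  rm /mnt/thin/testfile
  df -h /mnt/thin               # df reflects the removal
  fstrim -v /mnt/thin
  vdostats --human-readable     # used blocks drop back down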

Finally, to make this all nice and clean at startup (seeing as there's
no config value to set max_discard_sectors), I knocked up a quick
systemd unit that runs ahead of vdo.service to ensure the correct
sysfs value is in place before bringing the VDO pool online:

__________________________
cat /etc/systemd/system/vdo-sysfs.service
[Unit]
Description=Set sysfs tunables before activating vdo devices
Before=vdo.service

[Service]
Type=oneshot
# Load kvdo first so /sys/kvdo exists, then raise the discard limit
ExecStart=/usr/sbin/modprobe kvdo
ExecStart=/usr/bin/bash -c "echo 512 > /sys/kvdo/max_discard_sectors"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
RequiredBy=vdo.service
---------------------------------------------
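
It just needs enabling once so it's pulled in on boot ahead of
vdo.service:

  systemctl daemon-reload
  systemctl enable vdo-sysfs.service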

Thanks for fixing this - looking forward to doing some testing in my
environment!

James



