
Re: [linux-lvm] poor read performance on rbd+LVM, LVM overload

Hi, I'm back from my trip; sorry for the pause in the thread. I wanted
to wrap this up.
I reread the thread, but I still do not see what could be done from the
admin side to tune LVM for better read performance on Ceph (parts of my
LVM config are included below), at least for an already deployed LVM.
There seems to be no clear agreement on why the I/O is lost, so it
looks like LVM is currently not recommended on a Ceph RBD.

In case there is still hope for tuning, the requested info follows.
Mike wrote:
"Should be pretty straight-forward to identify any limits that are
different by walking sysfs/queue, e.g.:
grep -r . /sys/block/rbdXXX/queue
grep -r . /sys/block/dm-X/queue

Here it is:
# grep -r . /sys/block/rbd2/queue/
/sys/block/rbd2/queue/scheduler:noop [deadline] cfq

# grep -r . /sys/block/dm-2/queue/

Chunks of /etc/lvm/lvm.conf, if this helps:
devices {
    dir = "/dev"
    scan = [ "/dev/rbd" ,"/dev" ]
    preferred_names = [ ]
    filter = [ "a/.*/" ]
    cache_dir = "/etc/lvm/cache"
    cache_file_prefix = ""
    write_cache_state = 0
    types = [ "rbd", 250 ]
    sysfs_scan = 1
    md_component_detection = 1
    md_chunk_alignment = 1
    data_alignment_detection = 1
    data_alignment = 0
    data_alignment_offset_detection = 1
    ignore_suspended_devices = 0
}
activation {
    udev_sync = 1
    udev_rules = 1
    missing_stripe_filler = "error"
    reserved_stack = 256
    reserved_memory = 8192
    process_priority = -18
    mirror_region_size = 512
    readahead = "none"
    mirror_log_fault_policy = "allocate"
    mirror_image_fault_policy = "remove"
    use_mlockall = 0
    monitoring = 1
    polling_interval = 15
}
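One knob in the activation chunk above may matter for reads specifically: if I read lvm.conf(5) correctly, readahead = "none" disables readahead on the LVs entirely, while "auto" falls back to the kernel's default, and zero readahead can noticeably hurt sequential read throughput. A sketch of the change, worth trying before giving up on tuning (check the semantics against your lvm.conf(5) version):

```
activation {
    # "none" sets LV readahead to zero; "auto" uses the kernel default.
    readahead = "auto"
}
```

This would not explain the 4K splitting discussed below, but it is one of the few read-side settings visible in the posted config.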

I hope something can still be done, or I will have to move several TB
off the LVM :)
In any case, the cause of the problem does not feel settled. Maybe I
need to file a bug if that is relevant, but where?


2013/10/21 Mike Snitzer <snitzer redhat com>:
> On Mon, Oct 21 2013 at  2:06pm -0400,
> Christoph Hellwig <hch infradead org> wrote:
>> On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
>> > It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
>> > no?
>> Well, it's the block layer based on what DM tells it.  Take a look at
>> dm_merge_bvec
>> From dm_merge_bvec:
>>      /*
>>       * If the target doesn't support merge method and some of the devices
>>       * provided their merge_bvec method (we know this by looking at
>>       * queue_max_hw_sectors), then we can't allow bios with multiple vector
>>       * entries.  So always set max_size to 0, and the code below allows
>>       * just one page.
>>       */
>> Although it's not the general case, just if the driver has a
>> merge_bvec method.  But this happens if you are using DM on top of MD,
>> where I saw it as well as on rbd, which is why it's correct in this
>> context, too.
> Right, but only if the DM target that is being used doesn't have a
> .merge method.  I don't think it was ever shared which DM target is in
> use here, but both the linear and stripe DM targets provide a .merge
> method.
>> Sorry for over generalizing a bit.
> No problem.
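The dm_merge_bvec discussion above suggests one concrete check: when DM cannot use the target's merge method and the underlying device (rbd here) provides merge_bvec, bios get limited to a single page, which should be visible as a tiny max_hw_sectors limit on the dm device. A sketch of that check (the function name and the dm-2 path are examples, and this reading of the limit stacking is my assumption, not confirmed in the thread):

```shell
# Report whether a device's bios appear capped to a single 4K page.
# Pass the device's queue directory, e.g. /sys/block/dm-2/queue.
check_one_page_cap() {
    for f in max_hw_sectors_kb max_sectors_kb; do
        [ -r "$1/$f" ] || continue
        kb=$(cat "$1/$f")
        if [ "$kb" -le 4 ]; then
            echo "$f=$kb: bios capped to one page"
        else
            echo "$f=$kb: no one-page cap"
        fi
    done
}

# check_one_page_cap /sys/block/dm-2/queue
```

If the dm device reports 4 while the rbd device reports something much larger, that would match the single-page splitting described above.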
