[dm-devel] DM MULTIPATH: Allow dm to send larger request if underlying device set to larger max_sectors value
Mike Snitzer
snitzer at redhat.com
Mon Jul 9 13:16:11 UTC 2012
On Mon, Jul 09 2012 at 9:00am -0400,
Mike Snitzer <snitzer at redhat.com> wrote:
> On Sun, Jul 08 2012 at 1:59pm -0400,
> Chauhan, Vijay <Vijay.Chauhan at netapp.com> wrote:
>
> > Even though underlying paths are set with a larger value for max_sectors, dm
> > sets 1024 (i.e. 512KB) as the default max_sectors. max_sectors for a dm
> > device can be reset through sysfs, but any time the map is updated, max_sectors
> > is set back to the default. This patch gets the minimum max_sectors across the
> > physical paths and sets it on the dm device.
>
> There shouldn't be any need for additional DM overrides for max_sectors.
>
> DM will stack the limits for all underlying devices each table reload
> (via dm_calculate_queue_limits). And max_sectors is properly stacked in
> the block layer's bdev_stack_limits (called by dm_set_device_limits).
>
> So is something resetting max_sectors with sysfs? multipathd?
The 1024-sector default comes from the block layer:

  BLK_DEF_MAX_SECTORS = 1024
  blk_set_stacking_limits(): lim->max_sectors = BLK_DEF_MAX_SECTORS
But that just establishes the default; the stacking done by
blk_stack_limits reduces max_sectors according to the
underlying paths' max_sectors.
I can clearly see that max_sectors is reduced according to the
underlying device(s):
# multipath -ll
mpathe (36003005700ec1890167a7e5953effb87) dm-5 LSI,RAID 5/6 SAS 6G
size=465G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 0:2:4:0 sde 8:64 active ready running
# cat /sys/block/sde/queue/max_sectors_kb
240
# cat /sys/block/dm-5/queue/max_sectors_kb
240