
Re: [dm-devel] [Lsf-pc] [LSF/MM TOPIC] a few storage topics



On Tue, Jan 24, 2012 at 11:56:31AM -0500, Christoph Hellwig wrote:
> That assumes the 512k requests is created by merging.  We have enough
> workloads that create large I/O from the get go, and not splitting them
> and eventually merging them again would be a big win.  E.g. I'm
> currently looking at a distributed block device which uses internal 4MB
> chunks, and increasing the maximum request size to that dramatically
> increases the read performance.

That depends on the device, though. On a normal disk, a larger request likely only reduces the number of DMA operations without improving throughput much: most disks should reach platter speed at around 64KB, so larger requests mostly just save a bit of CPU in interrupts and such.

But I don't think anybody here was suggesting reducing the request size by
default. cfq should easily notice when multiple queues are submitting I/O
in the same time range. In addition to specifying the maximum DMA request
size it can handle, a device could also specify the minimum size at which
it reaches platter speed, and cfq could degrade to that minimum whenever
multiple queues are running in parallel over the same millisecond or so.
Reads will return to the I/O queue almost immediately, but they'll be out
for a little while until the data is copied to userland, so cfq would need
to keep requests down to the device's minimum platter-speed size for a
little while. Then, if no other queue presents itself, it could double the
request size each unit of time until it reaches the max again. Maybe that
could work, maybe not :). Waiting once for 4MB sounds better than waiting
4MB's worth every time for each 4k metadata seeking read.

