[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

[linux-lvm] Re: IO scheduler, queue depth, nr_requests




Miquel van Smoorenburg wrote:


On Thu, 19 Feb 2004 11:19:15, Jens Axboe wrote:

On Thu, Feb 19 2004, Miquel van Smoorenburg wrote:


Shouldn't the controller itself be performing the insertion?

Well, you would indeed expect the 3ware hardware to be smarter than
that, but in its defence, the driver doesn't set sdev->simple_tags or
sdev->ordered_tags at all. It just has a large queue on the host, in
hardware.

Too large a queue. IMHO the simple and correct solution to your problem
is to shrink the host queue (sane solution), or bump the block layer
queue size (dumb solution).


Well, I did that. Lowering the queue size of the 3ware controller to 64 does help a bit, but performance is still not optimal; leaving it at 254 and increasing the queue's nr_requests to 512 helps the most.
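For readers wanting to reproduce the nr_requests tuning above: on 2.6 kernels the per-queue request pool is exposed through sysfs. A sketch — the device name `sda` is a placeholder for whatever block device the 3ware unit presents, and the controller-side queue depth itself is set through the 3ware driver/firmware, not shown here:

```shell
# Check the current per-queue request pool (the 2.6 default was 128).
# "sda" is a placeholder -- substitute the 3ware unit's block device.
cat /sys/block/sda/queue/nr_requests

# Bump it to 512, as in the test above (run as root):
echo 512 > /sys/block/sda/queue/nr_requests
```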

But the patch I posted does just as well, without any tuning. I changed
it a little, though: it now applies the "new" behaviour (instead of blocking
on allocating a request, allocate it, queue it, _then_ block) only for WRITEs.
That gives the best performance I've seen, by far.



That's because you are half introducing per-process limits.


Now the style of my patch might be ugly, but what is conceptually wrong
with allocating the request and queueing it, then blocking if the queue is
full, versus blocking on allocating the request and keeping a bio
"stuck" for quite some time, resulting in out-of-order requests to the
hardware?



Conceptually? The concept that you have everything you need to continue and yet you block anyway is wrong.

Note that this is not really an issue of '2 processes writing to 1 file'.
It's one process plus pdflush writing out the same dirty pages of the same file.



pdflush is a process too, though, and that's all that matters.
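The reordering Miquel describes can be sketched with a toy model — this is a hypothetical simulation for illustration, not the kernel's actual code paths. A fixed-depth queue is drained one request per tick while a writer (W1–W3) and pdflush (P1) compete for slots; the only difference between the two runs is whether a submission that finds the queue full is held back entirely, or is queued first with the submitter sleeping afterwards:

```python
from collections import deque

def run(strategy, depth=2):
    """Toy replay of the race: a writer submits W1-W3 at tick 0, pdflush
    submits P1 at tick 1; the device drains one request per tick."""
    arrivals = [("W1", 0), ("W2", 0), ("W3", 0), ("P1", 1)]
    queue, blocked, hw = deque(), deque(), []
    for tick in range(6):
        if queue:                       # device takes one request per tick
            hw.append(queue.popleft())
        for req, t in arrivals:         # submissions arriving this tick
            if t != tick:
                continue
            if strategy == "queue_then_block":
                # patched behaviour: allocate + queue first, sleep after,
                # so the request is already in line in arrival order
                queue.append(req)
            elif len(queue) < depth:
                queue.append(req)
            else:
                blocked.append(req)     # bio held back, nothing queued
        # old behaviour: a newcomer that raced in above has already taken
        # the freed slot; a blocked submission only gets the leftovers
        while blocked and len(queue) < depth:
            queue.append(blocked.popleft())
    return hw

print(run("block_on_alloc"))    # ['W1', 'W2', 'P1', 'W3'] -- W3 overtaken
print(run("queue_then_block"))  # ['W1', 'W2', 'W3', 'P1'] -- arrival order kept
```

In the old behaviour W3's bio sits "stuck" outside the queue while P1 slips past it, so the hardware sees W3 out of order; with queue-then-block, W3 is already in the queue when the writer goes to sleep.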



