[dm-devel] [Lsf-pc] [LSF/MM TOPIC] a few storage topics

Dave Chinner david at fromorbit.com
Thu Jan 26 22:31:11 UTC 2012


On Tue, Jan 24, 2012 at 01:05:50PM -0500, Jeff Moyer wrote:
> Andreas Dilger <adilger at dilger.ca> writes:
> [...]
> I've been wondering if it's gotten better, so decided to run a few quick
> tests.
> 
> kernel version 3.2.0, storage: HP EVA FC array, I/O scheduler: CFQ,
> max_sectors_kb: 1024, test program: dd
> 
> ext3:
> - buffered writes and buffered O_SYNC writes, all with a 1MB block
>   size, show 4KB I/Os passed down to the I/O scheduler
> - buffered 1MB reads are a little better, typically in the 128KB-256KB
>   range when they hit the I/O scheduler.
> 
> ext4:
> - buffered writes: 512KB I/Os show up at the elevator
> - buffered O_SYNC writes: data writes are again 512KB, journal writes
>   are 4KB
> - buffered 1MB reads get down to the scheduler in 128KB chunks
> 
> xfs:
> - buffered writes: 1MB I/Os show up at the elevator
> - buffered O_SYNC writes: 1MB I/Os
> - buffered 1MB reads: 128KB chunks show up at the I/O scheduler
> 
> So, ext4 is doing better than ext3, but still not perfect.  xfs is
> kicking ass for writes, but reads are still split up.
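
(For anyone wanting to reproduce the numbers above, the tests
presumably boiled down to something like the following -- the device,
mount point and trace pipeline here are examples, not taken from
Jeff's mail:

    # buffered 1MB writes
    dd if=/dev/zero of=/mnt/test/file bs=1M count=1024

    # buffered O_SYNC writes (oflag=sync opens the output file O_SYNC)
    dd if=/dev/zero of=/mnt/test/file bs=1M count=1024 oflag=sync

    # buffered 1MB reads
    dd if=/mnt/test/file of=/dev/null bs=1M

    # watch request sizes as they reach the elevator
    blktrace -d /dev/sda -o - | blkparse -i -

The 'Q' (queue) events in the blkparse output show the size, in
sectors, of each I/O handed to the scheduler.)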

Isn't that simply because the default readahead is 128KB? Change the
readahead to be much larger, and you should see much larger I/Os
being issued...
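
For example, to bump the readahead window to 1MB (the device name is
just an example -- readahead is a per-device setting, sitting right
next to the max_sectors_kb knob you already listed):

    # blockdev takes the value in 512-byte sectors, so 2048 = 1MB
    blockdev --setra 2048 /dev/sda

    # equivalently via sysfs, in kilobytes
    echo 1024 > /sys/block/sda/queue/read_ahead_kb

The default of 256 sectors (128KB) lines up exactly with the read
sizes you're seeing at the scheduler.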

Cheers,

Dave.
-- 
Dave Chinner
david at fromorbit.com



