[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

50% speed penalty for LVM on mdraid



I have an F7/rawhide machine with 6x400GB SATA II disks, all
partitioned as 100MB + 399.9GB

/boot is on /dev/sda1 (would be RAID1 across /dev/sd[abc]1 partitions
except for mkinitrd raid1.ko breakage) with /dev/md1 as RAID5 on
/dev/sd[abcdef]2

Then a single LVM vg00 on top of /dev/md1

root and swap as LVs within vg00 and plenty of spare space.

I've been doing some timings of the various block devices (so far just
roughly with hdparm; bonnie is installed for more detail later).
Results of "hdparm -Tt" averaged over a few runs, showing cached and
buffered speeds:
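For repeatability, the averaging can be scripted; this is just a sketch, and the awk field position assumes hdparm's usual "... = N MB/sec" buffered-read output line:

```shell
# avg_buffered: average the buffered-read figures from several hdparm runs.
# Reads hdparm output on stdin; parses lines like
#   " Timing buffered disk reads: 408 MB in 3.00 seconds = 135.93 MB/sec"
avg_buffered() {
  awk '/Timing buffered disk reads/ { sum += $(NF-1); n++ }
       END { if (n) printf "%.1f MB/s over %d runs\n", sum/n, n }'
}

# Usage (needs root, real device name is whatever you are testing):
#   for i in 1 2 3; do hdparm -t /dev/md1; done | avg_buffered
```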

/dev/sda1 gives 1995MB/s and 72MB/s, which seems quite good for a single spindle
/dev/md1 gives 2040MB/s and 260MB/s, also fairly good (with six
spindles and parity I had hoped to get closer to 5x single-disk
performance than 3.5x)
/dev/mapper/vg00-lv01 (my root fs) gives 2100MB/s and 135MB/s, which
is a little disappointing

Nearly a 50% speed penalty seems a heavy price to pay for LVM. Is
there any slow debug code currently in rawhide that might explain it?
Could I have made some bad choices of block sizes between the RAID and
LVM layers which reduce throughput by splitting reads? Anything else?
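One alignment sanity check I could try: see whether the PV's data area starts on a RAID stripe boundary. A sketch (the function name and sample numbers are illustrative; chunk size would come from /proc/mdstat or "mdadm --detail /dev/md1", and the PV data offset from "pvs -o +pe_start"):

```shell
# stripe_aligned: report whether the LVM data start is a multiple of the
# RAID5 stripe width (chunk size * number of data disks).
stripe_aligned() {
  chunk_kb=$1       # md chunk size in KB (e.g. from /proc/mdstat)
  data_disks=$2     # 6-disk RAID5 has 5 data disks per stripe
  pe_start_kb=$3    # PV data offset in KB (from pvs -o +pe_start)
  stripe_kb=$((chunk_kb * data_disks))
  if [ $((pe_start_kb % stripe_kb)) -eq 0 ]; then
    echo "aligned"
  else
    echo "misaligned (stripe ${stripe_kb}K)"
  fi
}

# e.g. 64K chunks, 5 data disks, PV data starting at 192K:
#   stripe_aligned 64 5 192
# Readahead on the dm device is another thing worth comparing
# ("blockdev --getra" on /dev/md1 vs /dev/mapper/vg00-lv01).
```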

