
Re: [Linux-cluster] GFS2 performance on large files

On Thu, 23 Apr 2009 15:19:38 +0200, Piotr Baranowski <divi divinet pl> wrote:

>> > To pick the optimum RAID block size, look at the disks. What is the
>> > multi-sector transfer size they can handle? I have not seen any disks
>> > to date that have this figure at anything other than 16, and
>> > 16 sectors * 512 bytes/sector = 8KB.
>> > 
>> > So set the RAID block size to 8KB.
>> Is this "chunk size" ?  Or would chunk size be #disks*8KB ?
>> > This is also why quite frequently a cheap box made of COTS components
>> > can completely blow away a similar enterprise grade box with 10-100x
>> > the price tag.
>> In fairness, those enterprise boxes typically have dual redundant
>> controllers with mirrored cache, and other failure-resistant goodies
>> you can't really do with COTS hardware. ;)
> I don't want to stir the hornet's nest but just look at that:
> http://linux.yyz.us/why-software-raid.html
> There is no clear winner in that bet, but knowing pros and cons of both
> approaches helps choose the best.

I don't see how that contradicts what I said. If we assume that a decent
RAID controller will take full advantage of the disk's multi-sector
transfer capability (which I have to assume is the case, or else my
opinion of RAID controller vendors would mostly be limited to pondering
how they have managed to stay in business so far), then everything I said
still applies. Just because you're pushing the multi-sector and chunk size
handling down to a lower level doesn't mean that everything else doesn't
still apply.

Oh, and I consider RAID controllers to be included in COTS. At no point
did I mention advantages of hardware vs. software RAID - either will do,
and the benefits of knowing what one is doing when it comes to file
system layout are still the same.
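For what it's worth, the chunk-size arithmetic above is easy to sketch in a few lines of shell. On a real disk you would read the multi-sector value from `hdparm -i /dev/sdX` (the `MultSect` field); here it is hard-coded to the typical 16 as an assumption, and the mdadm invocation at the end is illustrative only:

```shell
#!/bin/sh
# Derive a RAID chunk size from the disk's multi-sector transfer size.
# On a live system: MULTSECT=$(hdparm -i /dev/sda | grep -o 'MultSect=[0-9]*' | cut -d= -f2)
MULTSECT=16        # sectors per multi-sector transfer (assumed typical value)
SECTOR_BYTES=512   # bytes per sector

CHUNK_BYTES=$((MULTSECT * SECTOR_BYTES))   # 16 * 512 = 8192
CHUNK_KB=$((CHUNK_BYTES / 1024))           # = 8

echo "chunk size: ${CHUNK_KB}KB"

# Example only -- do not run blindly; device names and level are placeholders:
#   mdadm --create /dev/md0 --level=5 --chunk=${CHUNK_KB} --raid-devices=4 /dev/sd[bcde]
```

The same number is what you would feed to `mkfs.gfs2` stripe-alignment tuning if your version supports it, so it pays to compute it once and use it consistently.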

