[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] GFS2 performance on large files

On Thu, 23 Apr 2009 15:09:23 +0200, Christopher Smith
<csmith nighthawkrad net> wrote:
> Gordan Bobic wrote:
>> To pick the optimum RAID block size, look at the disks. What is the
>> multi-sector transfer size they can handle? I have not seen any disks
>> to date that have this figure at anything other than 16, and
>> 16sectors * 512 bytes/sector = 8KB.
>> So set the RAID block size to 8KB.
> Is this "chunk size" ?  Or would chunk size be #disks*8KB ?

In the case of software RAID, yes, it is referred to as chunk size, and
it is a per-disk figure. #disks*8KB would be the stripe width, i.e. the
chunk size multiplied by the number of data disks.
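
For mdadm-based software RAID, the recipe above might look like this
(a sketch only; the device names are hypothetical, and the 16-sector
figure should be confirmed against your own drives with
`hdparm -I /dev/sdX | grep -i multiple`):

```shell
# Derive the chunk size from the drives' multi-sector transfer limit.
# 16 sectors is the typical figure mentioned in the thread.
SECTORS=16
SECTOR_BYTES=512
CHUNK_KB=$(( SECTORS * SECTOR_BYTES / 1024 ))
echo "${CHUNK_KB}K"    # prints 8K

# Then create the array with that chunk size (hypothetical devices):
# mdadm --create /dev/md0 --level=5 --raid-devices=4 \
#       --chunk=${CHUNK_KB} /dev/sd[bcde]1
```

mdadm's --chunk expects the per-disk figure in kilobytes, not the full
stripe width.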

>> This is also why quite frequently a cheap box made of COTS components
>> can completely blow away a similar enterprise grade box with 10-100x
>> the price tag.
> In fairness, those enterprise boxes typically have dual redundant 
> controllers with mirrored cache, and other failure-resistant goodies you 
> can't really do with COTS hardware. ;)

Maybe so, but you can still build two complete COTS boxes with
no internal redundancy for a fraction of the cost and deal with
mirroring and fail-over at the server level. Enter RHCS and DRBD. :)
Why compromise?
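
The two-box approach could be sketched as a DRBD resource definition
(a minimal sketch, DRBD 8.x syntax; the node names, devices and
addresses are hypothetical):

```
resource r0 {
    protocol C;                  # synchronous: writes confirmed on both nodes
    on alpha {
        device    /dev/drbd0;
        disk      /dev/sdb1;     # cheap local disk being mirrored
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on bravo {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

RHCS then handles fail-over of whatever service sits on top of
/dev/drbd0.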

