
Re: [Linux-cluster] GFS2 performance on large files



On Thu, 23 Apr 2009 15:41:45 +0200, Christopher Smith
<csmith nighthawkrad net> wrote:
> Gordan Bobic wrote:
>> On Thu, 23 Apr 2009 15:09:23 +0200, Christopher Smith
>> <csmith nighthawkrad net> wrote:
>>>> This is also why quite frequently a cheap box made of COTS components
>>>> can completely blow away a similar enterprise-grade box with 10-100x
>>>> the price tag.
>>> In fairness, those enterprise boxes typically have dual redundant
>>> controllers with mirrored cache, and other failure-resistant goodies
>>> you can't really do with COTS hardware. ;)
>> 
>> Maybe so, but you can still build two complete COTS boxes with
>> no internal redundancy for a fraction of the cost and deal with
>> mirroring and fail-over on server level. Enter RHCS and DRBD. :)
>> Why compromise?
> 
> Not every service is failover-friendly at the server level. ;)

Sure, but there aren't many of those, and unless you're talking about
equipment on the order of 1000x more expensive, some downtime will
eventually happen anyway (some failure modes are simply hard to work
around). So it's best to protect yourself from that at a higher level,
where things are cheaper, easier to support and easier to do something
about.
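For reference, the two-box DRBD mirroring approach mentioned above
boils down to a resource definition like this - a minimal sketch with
hypothetical hostnames, devices and addresses, to be adapted to the
actual setup:

```
# Two-node synchronously replicated DRBD resource (illustrative only)
resource r0 {
    protocol C;                     # synchronous replication: writes
                                    # complete on both nodes
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sda7;        # backing device on this node
        address   192.168.1.1:7788; # replication link
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   192.168.1.2:7788;
        meta-disk internal;
    }
}
```

With protocol C, a write is acknowledged only once it has reached both
nodes, which is what makes fail-over at the server level safe; RHCS then
handles service relocation on top of that.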

> Don't get me wrong, I'm 100% behind using COTS stuff wherever possible, 
> and setups with DRBD, et al, have worked very well for us in several 
> locations.  But there are some situations where it just doesn't (eg: SAN 
> LUNs shared between multiple servers - unless you want to forego the 
> performance benefits of write caching and DIY with multiple machines, 
> DRBD and iscsi-target).

I'm not sure write caching is that big a deal - your SAN will be caching
all the writes anyway. Granted, the cache will be about 0.05ms further
away than it would be on a local controller, but then again, the
clustering overheads will relegate that into the realm of irrelevance.
I have yet to see a shared-SAN file system that doesn't introduce
performance penalties big enough to make the ping time to the SAN a
drop in the ocean.
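To put that argument in rough numbers - a back-of-the-envelope sketch
where all three latency figures are illustrative assumptions (only the
0.05ms extra hop to the SAN comes from the discussion above, and the
cluster-lock figure is purely a placeholder):

```python
# Back-of-the-envelope latency comparison (assumed figures, not measurements)
local_cache_write_ms = 0.05  # write landing in a local controller's cache
san_extra_hop_ms     = 0.05  # extra network hop to the SAN's cache
cluster_lock_ms      = 1.0   # one cluster lock round trip (assumed)

total_local = local_cache_write_ms + cluster_lock_ms
total_san   = local_cache_write_ms + san_extra_hop_ms + cluster_lock_ms

# Fraction the SAN hop adds on top of what the clustered FS already costs
overhead = (total_san - total_local) / total_local
print(f"SAN hop adds {overhead:.1%} on top of clustered-FS latency")
```

Even with these made-up numbers the point stands: as soon as a cluster
lock round trip dominates the write path, the extra hop to the SAN is
noise.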

> (There's also the labour, maintenance and support costs of DIY vs 
> plug-in-and-go to consider, as well.)

That's another contentious point. You'll pay for that labour several
times over in increased hardware costs, vendor support contracts (I've
yet to find one that actually provides anything more useful than purely
imaginary backside-covering), reduced performance, less transparency,
and the consequences of a growing number of individuals' misguided
belief that they can control and maintain a complex system by clicking
on pretty pictures.

Just being a devil's advocate. ;)

Gordan

