RE: [Linux-cluster] More GFS2 tuning...

> -----Original Message-----
> From: linux-cluster-bounces redhat com 
> [mailto:linux-cluster-bounces redhat com] On Behalf Of Corey Kovacs
> Sent: Monday, February 16, 2009 1:55 PM
> To: linux-cluster redhat com
> Subject: [Linux-cluster] More GFS2 tuning...
> By my reckoning, I should be able to see 400MB or more 
> sustained throughput using this setup. If this is a pipe 
> dream, someone let me know quick before I go nutz.

What do you get from the raw device?  (I.e. if you remove GFS/NFS from
the picture.)
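A minimal way to get that raw-device number with dd (a sketch; the device path is a placeholder for your actual multipath LUN, and it falls back to a scratch file so the commands run anywhere):

```shell
# DEV is a placeholder -- point it at your real block device
# (e.g. /dev/mapper/mpath0) to measure the hardware itself.
DEV=${DEV:-/tmp/gfs2-raw-test.img}

# When not testing a real block device, build a 64MB scratch file
# so the read below has something to work on.
[ -b "$DEV" ] || dd if=/dev/zero of="$DEV" bs=1M count=64 2>/dev/null

# Sequential read; the last line of dd's output is the throughput.
# On a real LUN this shows what the storage can do with no
# filesystem or NFS in the path.
dd if="$DEV" of=/dev/null bs=1M 2>&1 | tail -n1
```

If that number is already well under 400MB/s, the bottleneck is below GFS2 and no filesystem tuning will recover it.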

> The bo values start at around 200MB, then drop down to 0 in
> most cases for a few seconds, then spike to ~700MB/s, then
> ease back down to 200, 150 and back down to 0. It looks very
> much like a caching issue to me.

Linux virtual memory does some funny things with fs caching.  Try some
tests with O_DIRECT to bypass the buffer cache.  On RHEL 5 systems, you
can achieve that with "dd ... oflag=direct" and varying block sizes.
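As a sketch of what that looks like in practice (the target path is a placeholder; point it at a file on the GFS2 mount, and note that oflag=direct only works on filesystems that support O_DIRECT):

```shell
# TARGET is a placeholder -- use a file on the GFS2 mount to test
# the filesystem; it defaults to a local path so the loop runs as-is.
TARGET=${TARGET:-/tmp/gfs2-direct-test.img}

# Try several block sizes; direct I/O throughput is often very
# sensitive to the transfer size hitting the storage.
for bs in 64k 256k 1M 4M; do
    echo "bs=$bs:"
    # oflag=direct bypasses the page cache, so the rate reported
    # reflects actual writes rather than memory speed.
    dd if=/dev/zero of="$TARGET" bs=$bs count=16 oflag=direct 2>&1 | tail -n1
done
rm -f "$TARGET"
```

With the cache out of the way, the vmstat "bo" spikes should flatten into a steadier number that's easier to reason about.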

> I've read that GFS2 is supposed to be "self tuning" but I 
> don't think these are necessarily GFS2 issues.

Agreed.  If you can experiment with the hardware, what do you get from
other fs types?  (such as ext3)

> Anyone have something similar? What I/O rates are people getting?

I don't have any FC hardware quite as nice as yours, but by
multipathing AoE over a pair of GigE connections we can get 200MB/s of
raw, sequential throughput.  (I.e. about the limit of the
interconnects.)

My GFS filesystems are mostly a collection of very small (~1MB or less)
files, so it's hard to say how they're performing.  I'm much more
concerned about the rate of file creates over GFS than raw throughput
right now...
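For what it's worth, a crude create-rate test looks like this (a sketch; the directory is a placeholder, and on GFS you'd point it at the clustered mount, where each create involves DLM lock traffic rather than raw bandwidth):

```shell
# DIR is a placeholder -- point it at a directory on the GFS mount;
# it defaults to a local scratch directory so the loop runs anywhere.
DIR=${DIR:-/tmp/gfs-create-test}
mkdir -p "$DIR"

start=$(date +%s)
# Create 1000 empty files.  On GFS each create acquires a cluster
# lock, so this measures locking overhead, not disk throughput.
for i in $(seq 1 1000); do
    : > "$DIR/f$i"
done
end=$(date +%s)

echo "created 1000 files in $((end - start))s"
```

Comparing that number against the same loop on a local ext3 directory gives a rough feel for how much the cluster locking costs.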
