[Linux-cluster] I/O scheduler and performance

Wendy Cheng wcheng at redhat.com
Wed Jul 5 04:59:08 UTC 2006


On Wed, 2006-07-05 at 14:30 +1000, RR wrote:
> Hi Wendy,
> 
> thanks for the prompt response. I see what you're saying. Just a few
> things to clarify. The databases I have are clustered active-passive,
> so only one machine accesses the store at any given time with a
> persistent connection to the SAN.
> Also, yes, I would think that for this particular application the IO
> pattern might be very close to parallel in nature as essentially all
> cluster nodes will run the same application accessing the same store
> but may rarely access the same folder at the same time and if they do,
> it would be independently of each other. I guess the community
> involved with the development of this application isn't too familiar
> with clustered filesystems and they may be considering database
> storage over shared filesystems such as NFS or something but they seem
> to suggest that database storage offers better scalability and less
> administrative overhead. I do care about the administrative overhead
> but performance is a bigger criterion. The other thing I should point
> out is that whereas the clustered databases use HBAs to access the SAN
> the linux cluster nodes running the application will access the SAN
> using GigE NICs. The performance and CPU overhead of not being able to
> use HBAs might be an added factor, do you think?

Why do you think the cluster nodes would access the SAN using GigE NICs?
That is a misunderstanding. GFS accesses its storage via (fibre channel
based) HBAs unless you configure the system to use GNBD or iSCSI.
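
If you did want the GFS nodes to reach the storage over GigE instead of
fibre channel, GNBD (or iSCSI) is the usual route. Roughly, the GNBD case
looks like the sketch below - the device name, export name, and server
hostname are only placeholders, not anything from your setup:

  # On the node that actually has the SAN-attached block device:
  gnbd_serv
  gnbd_export -d /dev/sdb1 -e gfs_export

  # On each GFS node that only has GigE connectivity:
  gnbd_import -i storage-server
  mount -t gfs /dev/gnbd/gfs_export /mnt/gfs

Expect some extra CPU and latency cost compared with a direct FC path,
which is exactly the kind of thing a benchmark of your own workload will
show you.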

> 
> I'm totally neutral about either solution, I just want the best
> performance with whatever I go with, so I wonder if a database person
> on the list can give their view as well?
> 

Again, you *need* to benchmark your workload before making any decision.
We're certainly working hard to improve GFS but it is not a cure-all
solution. Same advice as before - performance is workload dependent, so
test your configuration before locking into anything ...
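
For a first rough baseline on whichever setup you test, even simple
direct-I/O dd runs tell you a lot (the mount point, device name, and
sizes below are placeholders only), and it is worth checking which I/O
scheduler the underlying device is using, since that was the original
subject of this thread:

  # sequential write then read on the mounted filesystem:
  dd if=/dev/zero of=/mnt/gfs/testfile bs=1M count=4096 oflag=direct
  dd if=/mnt/gfs/testfile of=/dev/null bs=1M iflag=direct

  # current I/O scheduler for the underlying block device:
  cat /sys/block/sdb/queue/scheduler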

-- Wendy



