[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Need advice on cluster configuration

(Sorry David, I accidentally responded to you rather than the list.)

Thanks for your response. I'm setting up EL 3 with GFS and Clustering right now, using gnbd as my shared device until the FC storage arrives. I'm a little confused, though.
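For anyone following along, standing in a GNBD device for the missing FC storage looks roughly like this (the device path, export name, and server hostname below are placeholders; check gnbd_export(8) and gnbd_import(8) on your version):

```shell
# On the storage node: start the GNBD server daemon,
# then export a local block device under a chosen name.
gnbd_serv
gnbd_export -d /dev/sdb1 -e shared_gfs

# On each client node: import the exports from that server;
# the device then shows up under /dev/gnbd/ and can be used
# as the shared device for GFS.
gnbd_import -i storage-server
```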

1. What's the relationship between the raw devices used in the cluster software (which can share raw networked storage without GFS?) and using GFS on top of it (or are the two unrelated)?

2. How is rgmanager different from the cluster software's failover domains, where members take over a service and its related floating IP from a fallen member? Again, thanks for your time.


David Teigland wrote:

On Mon, Mar 28, 2005 at 04:15:25PM -0700, Daniel Cunningham wrote:

So I have been following this list for a while and have set up a simple GFS/LVM cluster running on Debian to test it out. I would like others' opinions on this list about possible solutions.

We are a small company (25 employees) but have fairly substantial storage needs. Right now I have two 1.5 TB SCSI arrays, each connected to its own Debian box (a SCSI JBOD array attached to a RAID 5 card). The two boxes run DRBD (think of it as network RAID 1), synced over a crossover cable on gigabit Ethernet. The boxes use Heartbeat to share a floating IP address: if the primary goes down, the secondary comes up, takes the IP address, mounts its side of the storage, and starts NFS/SMB services. Our colo does a Tivoli backup every night for persistent and off-site storage. This has worked great so far because...
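The setup described above boils down to two small config fragments, roughly like these (hostnames, IPs, devices, and the resource name are made up; the syntax is from the DRBD 0.7 / Heartbeat 1.x era, so check your local man pages):

```
# /etc/drbd.conf -- one mirrored resource over the crossover link
resource r0 {
    protocol C;
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sda1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sda1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}

# /etc/ha.d/haresources -- on failover the surviving node takes the
# floating IP, promotes DRBD, mounts the mirror, and starts services
node1 192.168.1.50 drbddisk::r0 \
    Filesystem::/dev/drbd0::/export::ext3 nfs-kernel-server samba
```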

1. If the primary fails, we have a live system again in 30 seconds.
2. It's cheap (well, no, the Dell SCSI arrays were actually expensive, but the whole solution wasn't).

I'm now facing growing storage pains: this solution is not scalable, and we are quickly running out of space. Plus our system now houses some 15 million files, which is giving the Tivoli system grief; not to mention that if we had to restore, we would be out of business for days, if not for good.

What I'm envisioning is buying either a StorCase FC RAID array (6.4 TB), or maybe even Apple's (5.6 TB), with two GFS/CLVM boxes in front of it providing NFS and SMB services. I would also like to purchase another one with more space just for backups (with off-site backups going to the old system at another location for a while). So my questions are:

1. Is the Red Hat/Sistina solution right for this?

Sounds about right.  You won't really know until you experiment -- put
together a small test cluster and see how it works.

2. Is there a better solution (for my scenario)?

Don't know.

3. What thoughts on backup do others have? (I prefer open source, but we'd look at commercial.)

Big unknown; I wouldn't expect GFS to make your backups any easier -- in
fact it might make them more difficult.

4. Problems with mounts larger than 2 TB and NFS clients? (i.e., 2.4 problems)

Don't know about NFS.
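For what it's worth, stock 2.4 kernels cap block devices at 2 TB, and a DOS partition label tops out at 2 TiB regardless of kernel (2^32 sectors x 512 bytes), so on a 2.6 kernel you'd want a GPT label for anything bigger -- something like the following (the device path and size are placeholders):

```shell
# GPT labels aren't limited to 2^32 sectors the way msdos labels are,
# so a >2 TB partition becomes possible on a 2.6 kernel.
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary 0 6400GB
```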

5. (To Red Hat) When will 6.1 be available? (For AS 4.0 with 2.6, for partitions > 2 TB.)

Don't know, but I'd guess a couple of months.

6. How do I achieve the same NFS/SMB/IP failover as I do with Heartbeat? (Really important!)

We have some "rgmanager" software (recently renamed) that does this kind
of thing.  You can run NFS servers on all the GFS nodes at once if your
application doesn't depend on NFS locks -- so the server doesn't need to
fail over, but clients of the dead server do.  Unfortunately you can't
run SMB in parallel on GFS, so you still need failover for that one.
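To make the rgmanager suggestion concrete, a failover service carrying a floating IP ends up as a cluster.conf fragment along these lines (RHCS 4-era syntax; the service name, address, and init script are placeholders):

```xml
<rm>
  <service name="smbsvc">
    <!-- the floating IP moves with the service on failover -->
    <ip address="192.168.1.50" monitor_link="1"/>
    <!-- a script resource starts/stops smb on the active node -->
    <script name="samba" file="/etc/init.d/smb"/>
  </service>
</rm>
```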

7. Any other words of wisdom? :-)

I have been really beating my head against the wall on this. Any help would be greatly appreciated.

Never having done anything like what you're attempting, it's difficult to
give you a good response -- sales people may have better answers!  A small
test setup is how I'd recommend answering most of these.
