[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Advice on Storage Hardware



On 11/11/05, Michael Will <mwill penguincomputing com> wrote:
> I don't see why that should be a problem; that's a common solution we
> recommend to customers even without GFS, doing just ordinary NFS with
> heartbeat. Active/passive is uncritical. You can even have two NFS
> mounts of separate partitions in an active/active setup that fails
> over the missing one if one of the two machines goes down. This means
> you mount one from one IP address and the other from the other, and
> the IP address gets migrated over.
>
> This can even be done with SCSI-attached storage (4 TB per enclosure,
> up to two connected to a 1U server), but of course fibre-attached
> storage (direct-attached as well as a complete SAN) is considered
> more reliable and performant.
>
> Michael Will
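
For reference, the active/active NFS setup Michael describes can be
sketched with a Heartbeat v1 haresources file. The node names, IP
addresses, and devices below are made up for illustration; the real
values would come from our environment:

# /etc/ha.d/haresources (identical on both nodes)
# Format: preferred-node  floating-IP  Filesystem::device::mountpoint::fstype  service
nodea  192.168.1.10  Filesystem::/dev/sda1::/export/a::ext3  nfs
nodeb  192.168.1.11  Filesystem::/dev/sdb1::/export/b::ext3  nfs

Clients mount nodea's export via 192.168.1.10 and nodeb's via
192.168.1.11; if one node dies, its IP, filesystem, and NFS service
migrate to the survivor, so both exports stay available.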

Thanks Michael. We had a call with our hardware vendor and spoke about
the SAN solution, after which they sent us a working estimate of costs
for a 7-server cluster. If we are able to use the aforementioned
storage solution in place of a fibre SAN, we stand to save a
significant amount of money on an application that our IT department
just doesn't think necessary.

We've got another call with them next week where they are going to try
to convince us that we're wrong and need to buy the SAN. I just want
to make sure I don't look like an idiot when they get one of their
engineers on the call to explain where I'm wrong and I'm not able to
defend myself.

The reason I'm planning on using GNBD and GFS instead of NFS is that
the storage will be used in part by a proprietary application that
doesn't support storage on NFS. I also need the webservers to mount
the same data, and from what I understand, locking would be a problem
on an NFS mount.
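
For what it's worth, the GNBD/GFS path I have in mind looks roughly
like the following. Device names, the export name, the cluster name,
and the journal count are placeholders, and the exact options are from
the Red Hat GFS documentation as I understand it, so treat this as a
sketch rather than a tested procedure:

# On the storage server: export a block device over GNBD
gnbd_serv                                  # start the GNBD server daemon
gnbd_export -v -e shared0 -d /dev/sdb1     # export /dev/sdb1 as "shared0"

# On each webserver node:
modprobe gnbd
gnbd_import -v -i storage1                 # import exports from server "storage1"

# Make the GFS filesystem once, with lock_dlm for cluster-wide locking
# (the thing plain NFS would not give us) and one journal per node:
gfs_mkfs -p lock_dlm -t mycluster:shared0 -j 7 /dev/gnbd/shared0

# Every node can then mount the same filesystem concurrently:
mount -t gfs /dev/gnbd/shared0 /mnt/shared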

Thanks again.

