
Re: [Linux-cluster] high availability with GFS

Oops, I forgot to send this to the mailing list, so here it is again...

Hi Michael,

Thanks for your reply. What you said is basically what we want
to achieve.

Our shared storage currently consists of 5 nodes with one physical
disk each. We want all 5 of these disks to be visible as a single
logical volume.
We think we can achieve this with GNBD and LVM by exporting the disk
on each node as a GNBD device.
We'll then import them all on the master server and pool them together with
LVM or pool_tool. That way the secondary server should be able to access
the storage as well. But, as I said earlier, this offers no redundancy: if one
node fails, we are grounded. We need to get some redundancy into this setup,
or we have to look at another solution such as DRBD, where scalability is a
real problem, though.
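The GNBD + LVM setup described above could be sketched roughly as follows. This is only a sketch: the device names, export names, cluster/filesystem names, and sizes are all hypothetical placeholders, and the GFS-side commands assume the GFS 6.x userland tools.

```shell
# On each of the 5 storage nodes: start the GNBD server and export the
# local disk under a unique name (node 1 shown; repeat on nodes 2-5)
gnbd_serv
gnbd_export -d /dev/sda1 -e storage1

# On the master server: import the exports from all 5 nodes
gnbd_import -i node1    # repeat for node2 .. node5
# the imported devices then appear as /dev/gnbd/storage1 .. storage5

# Pool the imports into one volume group and one logical volume
pvcreate /dev/gnbd/storage1 /dev/gnbd/storage2 /dev/gnbd/storage3 \
         /dev/gnbd/storage4 /dev/gnbd/storage5
vgcreate gfs_vg /dev/gnbd/storage1 /dev/gnbd/storage2 \
         /dev/gnbd/storage3 /dev/gnbd/storage4 /dev/gnbd/storage5
lvcreate -L 100G -n gfs_lv gfs_vg    # size is a placeholder

# Make a GFS filesystem on it; the lock module and journal count
# depend on your cluster setup (2 journals for the 2 Samba servers)
gfs_mkfs -p lock_gulm -t cluster1:gfs1 -j 2 /dev/gfs_vg/gfs_lv
```

As noted above, this plain concatenation offers no redundancy: losing any one GNBD export takes the whole volume group down with it.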

I read the Samba/GFS thread, but we're not that far yet. We'll see
about this once (if ;) ) we get that far with GFS.

Thanks again for your help


MG> Hello,

MG> 	From your e-mail I am really not sure what your intended goal is. Do
MG> you have a shared storage device that you want to make accessible through
MG> whichever Samba server is master?

MG> Check the list archives ... there is an issue with Samba and GFS,
MG> something about how Samba caches file metadata. Not sure if it affects
MG> you or not.

MG> Michael.

MG> Markus Wiedmer - FHBB wrote:
>> hi all,
>> We are students at the University of Applied Sciences in Basel. As a
>> project we are trying to build a high-availability file server on
>> Linux. We want to use GFS for our storage, but we are having problems
>> making it redundant.
>> We are running 2 Samba servers that achieve failover through
>> Heartbeat. Ideally, both servers should access the external storage through
>> GFS. We thought we could use pool_tool or CLVM for this, but AFAIK
>> neither offers any redundancy, right?
>> Is there any way to make GFS nodes (preferably through GNBD)
>> redundant, so that the failure of a single node wouldn't affect the
>> whole storage? Of course we could employ RAID 1 or 5 on the nodes
>> themselves, but that wouldn't save us if a whole node fails.
>> Does anyone have any experience with this? Thanks in advance.
>> -markus
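One idea for the whole-node redundancy asked about above would be to layer Linux software RAID over the imported GNBD devices on the importing server, before LVM and GFS. This is an untested assumption on my part, not something GNBD or GFS provides; note that md is not cluster-aware, so only one server could safely assemble the array at a time, which may be acceptable in a Heartbeat-style active/passive failover. Device names below are hypothetical.

```shell
# On the importing server only: build a RAID-5 array across the five
# GNBD imports, so any single storage node can fail without losing data.
# (Assemble the array on one host at a time only; md arrays must never
# be active on two hosts concurrently.)
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/gnbd/storage1 /dev/gnbd/storage2 /dev/gnbd/storage3 \
      /dev/gnbd/storage4 /dev/gnbd/storage5

# Then build LVM (and GFS) on /dev/md0 instead of on the raw imports
pvcreate /dev/md0
vgcreate gfs_vg /dev/md0
```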

Best regards,
Markus Wiedmer - FHBB
mailto:markus wiedmer stud fhbb ch
