
RE: [Linux-cluster] GFS

It is, but no filesystem is responsible for managing how an application
reads and writes files. The application must be aware of the possibility
of another instance, on another machine, writing to the same files, and
must coordinate reads and writes cluster-wide among application instances.
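As a rough sketch of what that coordination looks like, an application can take a POSIX advisory lock (fcntl) around each write. GFS propagates fcntl locks cluster-wide through its lock manager, so instances on different nodes serialize their appends instead of interleaving mid-record. The filename and record format below are illustrative, not anything from the thread:

```python
# Sketch: application-level coordination with POSIX advisory locks.
# Assumes the filesystem (e.g. GFS) honors fcntl locks cluster-wide;
# the path and record contents are made up for illustration.
import fcntl
import os

def append_record(path, record):
    """Append one record to a shared file under an exclusive lock."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until no other writer holds it
        os.write(fd, record.encode())
        os.fsync(fd)                    # flush data before releasing the lock
        fcntl.lockf(fd, fcntl.LOCK_UN)
    finally:
        os.close(fd)

append_record("/tmp/shared-archive.mbox", "From alice ...\n")
append_record("/tmp/shared-archive.mbox", "From bob ...\n")
```

Without the LOCK_EX/LOCK_UN pair, two writers on different nodes can cut each other off exactly as described below for NFS and the 4k GFS buffer window.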


-----Original Message-----
From: linux-cluster-bounces redhat com
[mailto:linux-cluster-bounces redhat com] On Behalf Of Mohamed Magdi Abbas
Sent: Thursday, October 28, 2004 12:11 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS

David Teigland wrote:
> On Tue, Oct 26, 2004 at 04:41:29PM -0400, Dascalu Dragos wrote:
>>We are working on a similar scenario but adding mailman into the mix. 
>>The ideal outcome would be for multiple mailman/postfix servers to 
>>write archives, etc to the same centralized location on a SAN. After 
>>doing some tests this setup does not appear to be trivial. We ran into 
>>a similar problem when using NFS; if multiple machines write to the 
>>same file at the same time the file gets mangled as the machines cut 
>>each other off. With GFS we noticed that each machine has a 4k buffer 
>>window in which it writes its data. If a second process decides to 
>>start writing to the same file we noticed alternating writes to the 
>>file after 4k of data.
> Note that this sounds like perfectly correct behavior on the part of gfs.
> The application is responsible for the necessary file locking, of 
> course, while gfs is responsible for keeping the fs uncorrupted.

I thought the idea of GFS was that it would handle locking to enable shared
filesystems among different nodes with simultaneous r/w access to the
same files.
