[Linux-cluster] Synchronized filesystem

Kovacs, Corey J. cjk at techma.com
Thu Dec 29 17:37:23 UTC 2005


FWIW...

AFS is designed to handle WAN-speed connections. It does this by not
transferring entire files, but rather smaller chunks (64 KB, if I recall
correctly). Reads are cached locally, while writes are synced on
close(). So if two people are using the same file, they'll each write
their own version when finished, and the last one to write wins. AFS's
benefits come from security (Kerberos) and file distribution (local
caching). There are no provisions for automatic failover that I recall,
though there is replication, and you can manually move things around if
you need to take down an AFS server from the pool. Think of AFS as LVM
for file servers, with some added features. Things have changed a bit
since I last used it, so take this with a grain of salt.
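
A rough sketch of that "last one to write wins" behavior, in Python
(pure illustration, not AFS code; the file name is made up):

    import os
    import shutil
    import tempfile

    SERVER_COPY = "server_file.txt"  # stands in for the copy on the AFS server

    class AfsLikeFile:
        """Toy model: open pulls a local copy, close pushes it back whole."""
        def __init__(self, path):
            self.remote = path
            fd, self.local = tempfile.mkstemp()  # local cached copy
            os.close(fd)
            shutil.copy(path, self.local)

        def write(self, text):
            with open(self.local, "a") as f:     # edits stay local until close
                f.write(text)

        def close(self):
            shutil.copy(self.local, self.remote) # whole file synced on close()
            os.unlink(self.local)

    with open(SERVER_COPY, "w") as f:
        f.write("base\n")
    a = AfsLikeFile(SERVER_COPY)                 # two people open the same file
    b = AfsLikeFile(SERVER_COPY)
    a.write("Alice's changes\n")
    b.write("Bob's changes\n")
    a.close()
    b.close()                                    # Bob closes last...
    print(open(SERVER_COPY).read())              # ...so Alice's changes are gone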


Regards,

Corey

-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Bill Rugolsky Jr.
Sent: Thursday, December 29, 2005 11:56 AM
To: Jean-Eric
Cc: linux clustering
Subject: Re: [Linux-cluster] Synchronized filesystem

On Thu, Dec 29, 2005 at 03:05:17PM +0100, Jean-Eric wrote:
> Yes, I want local caching but also transparent writes. If I'm in the
> UK, I want to write to FileServerInTheUk (and that goes automatically
> to FileServerInTheUS, which holds the storage), and if I'm in the US,
> I want to write to the US server.
> Is this case handled by AFS?

Disclaimer: I'm no expert on AFS; it just seemed more plausible than GFS for
your application.  Ask on their mailing lists if you have questions.

What you do depends on whether the files that you are caching are different
(and well separated) from the files that you are writing, and whether there
is a single writer or multiple writers.  As a practical matter, the size of
the files can't be ignored either; 2 Mbps isn't much bandwidth if many users
are writing bloated MS Office documents ...
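
Back-of-the-envelope, in Python (the 5 MB document size is just an
assumed example):

    link_mbps = 2                     # the 2 Mbps WAN link
    doc_mb = 5                        # one bloated Office document (assumed)
    seconds = doc_mb * 8 / link_mbps  # megabits divided by megabits/second
    print(seconds)                    # -> 20.0 seconds per save, per user

Twenty seconds per save for a single user; a handful of concurrent
writers will saturate the link.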

AFS volume replication is designed for mostly static data, e.g., /usr
partitions.  Replicated volumes can be updated, but administratively, not in
real time.

There is also the small matter of (Windows) programs that open files for
update even though they may not write to them.  Depending on the distributed
filesystem in use, opening for write may immediately invalidate or bypass
caches.  AFS, IIRC, has separate traversal paths for read-only access and
read/write access, due to the replication support.
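
The effect is roughly this, as a toy Python cache wrapper (illustration
only, not how any of these filesystems is actually implemented):

    import io

    cache = {}  # path -> cached contents

    def cached_open(path, mode="r"):
        if any(c in mode for c in "wa+"):
            cache.pop(path, None)        # open-for-update: drop the cache
            return open(path, mode)      # and go straight to the backing store
        if path not in cache:
            with open(path) as f:        # first read populates the cache
                cache[path] = f.read()
        return io.StringIO(cache[path])  # later reads are served locally

An application that opens read/write "just in case" pays the slow path
every time, even if it never writes a byte.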

> The last case is that I need to mount the partition with NFS *and* 
> Samba (we are a mixed Windows/UNIX shop) on Linux and Windows hosts 
> (access through AFS only is out of the question...) so I'm not sure 
> that AFS will do it... Or am I wrong?

It sounds like you want the file server on the UK side configured as an
AFS/NFSv4/whatever client and re-exporting the mount via NFS/Samba.
In that case, AFS server volume replication is not the right thing; you
instead want persistent client-side write-through caching for AFS or NFSv4
(or CIFS).  I believe that the infrastructure for that is in David Howells's
and Steve Dickson's (as yet unmerged) fscache patches, but I have no idea
whether it is production ready (and available in an Enterprise Linux distro)
or how well it works with caches in the range of tens or hundreds of
gigabytes.  [Perhaps not well, but it couldn't be much worse than trying to
pull the file over the WAN.]  In order to populate the cache on the UK side
you'd probably have to set up an fscache client and use tar or similar to
populate the fscache partition, then overnight it to the UK office.  That
works for the initial cache load, but ongoing maintenance would be a hassle
and might require specialized tools.
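
For ongoing warming, even something as dumb as walking the mount and
reading every file from cron would pull data through the cache.  A
sketch, assuming the client mount lives at /mnt/wan (a made-up path):

    import os

    MOUNT = "/mnt/wan"  # hypothetical fscache-backed AFS/NFS client mount

    for dirpath, dirnames, filenames in os.walk(MOUNT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while f.read(1 << 20):  # read 1 MB chunks; cache keeps them
                        pass
            except OSError:
                pass                        # skip anything unreadable

Crude, and only a partial answer to the maintenance problem; whether the
cache actually retains everything you read depends on its size and
eviction policy.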

There are commercial products in this niche, but I know nothing about them.

Regards,

	Bill Rugolsky
