
Re: [Linux-cluster] GFS create file performance

> -----Original Message-----
> From: linux-cluster-bounces redhat com
> [mailto:linux-cluster-bounces redhat com]
> On Behalf Of C. Handel
> Sent: Friday, March 19, 2010 5:43 PM
> To: linux-cluster redhat com
> Subject: Re: [Linux-cluster] GFS create file performance
>
> Is your session data valuable? What happens if you lose it? For a web
> application this normally means that users need to log in again.

It varies.  Our "session" mechanism is used for a variety of purposes,
some very short-lived, others that may persist for weeks.

In some cases the loss of this data will force the user to log in
again, as you say.  In other cases, a link we sent in an email may
become invalid.

We may eventually decide to adopt different storage backends for
transient vs. persistent session data.

> How big is your data? What is the read/write ratio?

We have a 50GB GFS filesystem right now.  Reads/writes are close to 1:1.

> You could go for a memcache.  Try two dedicated machines with lots of
> memory.  Write your session storage to always write to both and read
> from one.  Handle failure in software.  Unbeatable performance; will
> saturate gigabit links with ease.

Yup, we're aware of this and other storage alternatives.  I wanted to
ask about it on the linux-cluster list to make sure we didn't overlook
anything regarding GFS.  I'm also curious to know what the present
limitations of GFS are.
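For what it's worth, the write-to-both/read-from-one scheme is simple to
express in application code.  Below is a rough Python sketch of the idea;
the MemClient class is just an in-memory stand-in for a real memcached
client (e.g. pymemcache), and all the names are made up:

```python
import random

class MemClient:
    """Hypothetical stand-in for a real memcached client."""
    def __init__(self):
        self.data = {}
        self.alive = True

    def set(self, key, value):
        if not self.alive:
            raise ConnectionError("node down")
        self.data[key] = value

    def get(self, key):
        if not self.alive:
            raise ConnectionError("node down")
        return self.data.get(key)

class DualWriteSessionStore:
    """Write every session to both nodes; read from either one,
    failing over in software when a node is unreachable."""
    def __init__(self, node_a, node_b):
        self.nodes = [node_a, node_b]

    def set(self, key, value):
        ok = 0
        for node in self.nodes:
            try:
                node.set(key, value)
                ok += 1
            except ConnectionError:
                pass  # a real store would log and later re-sync this node
        if ok == 0:
            raise RuntimeError("no memcache node accepted the write")

    def get(self, key):
        # Read from a random node to spread load; fall back to the other.
        for node in random.sample(self.nodes, len(self.nodes)):
            try:
                return node.get(key)
            except ConnectionError:
                continue
        raise RuntimeError("no memcache node answered")
```

A quick sanity check of the failover path: after store.set("sess:42",
"payload") you can mark one node dead and store.get("sess:42") still
returns the payload from the survivor.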

We actually use GFS for several purposes.  One of those is synchronizing
web content: we used to run an elaborate system of rsync processes to
keep all content distributed over all nodes.  We've replaced that rsync
machinery with a GFS filesystem (two master nodes, many spectator
nodes), and it is working well.

We also use GFS to distribute certain user-contributed content, such as
images or video.  This is a read-write filesystem mounted on all cluster
nodes.  GFS works well for this too.

Our only controversial use of GFS at the moment is the session data, due
to the frequency of the create/write/read/unlink cycle we need to
support.  Following Steven Whitehouse's great explanation last week of
inode creation, resource groups and extended attributes, we tried
disabling SELinux on certain cluster nodes.  Surprisingly, that alone
reduced block I/O by as much as 30-40%.

