
Re: [Linux-cluster] GFS limits?



On Tue, 13 Jul 2004 15:22:51 -0700, Don MacAskill <don smugmug com> wrote:
> 
> Hi there,
> 
> I've been peripherally following GFS's progress for the last two years
> or so, and I'm very interested in using it.  We were already on Red Hat
> when Sistina was acquired, so I've been waiting to see what Red Hat will
> do with it.   But before I get ahold of the sales people, I thought I'd
> find out a little more about it.
> 
> We have two use cases where I can see it being useful:
> 
> - For our web server clusters to share a single "snapshot" of our
> application code amongst themselves.  GFS obviously functions great in
> this environment and would be useful.
> 
> - For our backend image data storage.  We currently have 35TB of
> storage, and it's growing at a rapid rate.  I'd like to be able to scale
> into hundreds of petabytes some day, and would like to select a solution
> early that will scale large.  Migrating a few hundred TBs from one
> solution to another already keeps me up at night...   PBs would make me
> go insane.  This is the use case I'm not sure of with regards to GFS.
> 
> Does GFS somehow get around the 1TB block device issue?  Just how large
> can a single exported filesystem be with GFS?

The code that most people on this list are currently interested in is the
code in CVS, which is for 2.6 only. 2.6 has a config option to enable
block devices larger than 2TB. I'm still reading through all the GFS
code, but it's still architecturally the same as when it was closed
source, so I'm pretty sure most of my knowledge from OpenGFS will still
apply. GFS uses 64-bit values internally, so you can have very large
filesystems (well beyond the petabyte range).
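
To put rough numbers on that, here's a quick back-of-the-envelope sketch
(plain Python, nothing GFS-specific; the 4 KiB block size is just an
assumed example) of what 64-bit addressing buys you:

```python
# Rough arithmetic only; real GFS limits also depend on block size
# and on-disk layout, not just the width of the on-disk values.

PIB = 2 ** 50                     # bytes in a pebibyte
EIB = 2 ** 60                     # bytes in an exbibyte

# Plain 64-bit byte offsets already cover 16 EiB:
max_byte_offset = 2 ** 64
print(max_byte_offset // EIB, "EiB")   # 16 EiB
print(max_byte_offset // PIB, "PiB")   # 16384 PiB

# With 64-bit *block* numbers and an assumed 4 KiB block size,
# the addressable range is 4096x larger still:
BLOCK_SIZE = 4096
print((2 ** 64) * BLOCK_SIZE // EIB, "EiB")   # 65536 EiB
```

So the 64-bit on-disk values are nowhere near the bottleneck; the
practical ceiling is the kernel's block layer and your storage hardware.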

> 
> Our current (homegrown) solution will scale very well for quite some
> time, but eventually we're going to get saturated with write requests to
> individual head units.  Does GFS intelligently "spread the load" among
> multiple storage entities for writing under high load?

No. Each node that mounts the filesystem has direct access to the
storage, and it writes just like any other fs would.

> Does it always
> write to any available storage units, or are there thresholds where it
> expands the pool of units it writes to?  (I'm not sure I'm making much
> sense, but we'll see if any of you grok it :)

I think you may have a slight misconception about what GFS is. You
should check the WHATIS_OpenGFS doc at
http://opengfs.sourceforge.net/docs.php It says OpenGFS, but for the
most part the same stuff applies to GFS.

> 
> In the event of some multiple-catastrophe failure (where some data isn't
> online at all, let alone redundant), how graceful is GFS?  Does it "rope
> off" the data that's not available and still allow full access to the
> data that is?  Or does the whole cluster go down?

That's a good question that I don't know the answer to. But I'd
imagine that it wouldn't be terribly happy. Sorry I don't know more.
Maybe one of the GFS devs will know better.

> 
> I notice the pricing for GFS is $2200.  Is that per seat?  And if so,
> what's a "seat"?  Each client?  Each server with storage participating
> in the cluster?  Both?  Some other distinction?

Now I know for sure there's a misconception. GFS doesn't have any
concept of server and client. All nodes mount the filesystem directly,
since they are all directly connected to the storage.
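
A toy sketch of the model (plain Python, purely illustrative; the
`lock_manager` and `shared_device` names are stand-ins I made up for the
cluster lock manager and the shared block device, not any real GFS API):

```python
# Sketch (not GFS code): in a shared-disk filesystem, every node talks
# to the same block device directly; a lock manager only coordinates
# access, it never relays the data itself.
import threading

shared_device = {}                # stands in for the shared block device
lock_manager = threading.Lock()   # stands in for the cluster lock manager

def node_write(node, block, data):
    # Each node takes a cluster lock, then writes straight to storage;
    # there is no "server" node that the data funnels through.
    with lock_manager:
        shared_device[block] = (node, data)

threads = [threading.Thread(target=node_write, args=(f"node{i}", i, b"x"))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared_device))   # [0, 1, 2] -- every node wrote directly
```

Contrast that with NFS or a homegrown head-unit setup, where the data
itself has to pass through whichever machine owns the storage.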

> 
> Is AS a prereq for clients?  Servers?  Both?  Or will ES and WS boxes be
> able to participate as well?

I'll punt to Red Hat people here.

> 
> Whew, that should be enough to get us started.
> 
> Thanks in advance!
> 
> Don
> 

--Brian Jackson

