
Re: [Linux-cluster] GFS limits?

On Tue, Jul 13, 2004 at 03:22:51PM -0700, Don MacAskill wrote:
> Does GFS somehow get around the 1TB block device issue?  Just how large 
> can a single exported filesystem be with GFS?

On Linux 2.4-based kernels, the limit is 1TB.  On 2.6-based kernels, the
limit is 8TB on 32-bit systems and far larger (on the order of exabytes)
on 64-bit systems.
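For what it's worth, here's a back-of-the-envelope check of where those
numbers plausibly come from, under the assumption that the 2.4 limit stems
from a signed 32-bit count of 512-byte sectors, and the 32-bit 2.6 limit
from a signed 32-bit page-cache index with 4 KiB pages (my reading, not a
statement from the kernel docs):

```python
# Assumption: 2.4 kernels address devices with a signed 32-bit
# count of 512-byte sectors; 32-bit 2.6 kernels are bounded by a
# signed 32-bit page-cache index with 4 KiB pages.

SECTOR = 512    # bytes per sector
PAGE = 4096     # bytes per page on 32-bit x86
TIB = 2**40     # one tebibyte

limit_2_4 = 2**31 * SECTOR       # 2^31 sectors * 512 B = 1 TiB
limit_2_6_32bit = 2**31 * PAGE   # 2^31 pages * 4 KiB = 8 TiB

print(limit_2_4 // TIB, "TiB")        # -> 1 TiB
print(limit_2_6_32bit // TIB, "TiB")  # -> 8 TiB
```

Both results line up with the limits quoted above, which is why I find the
signed-index explanation plausible.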

> Our current (homegrown) solution will scale very well for quite some 
> time, but eventually we're going to get saturated with write requests to 
> individual head units.  Does GFS intelligently "spread the load" among 
> multiple storage entities for writing under high load?  Does it always 
> write to any available storage units, or are there thresholds where it 
> expands the pool of units it writes to?  (I'm not sure I'm making much 
> sense, but we'll see if any of you grok it :)

Our current allocation methods try to allocate from areas of the disk
where there isn't much contention for the allocation bitmap locks.  They
don't know anything about spreading load on the basis of disk load.
(That would be an interesting thing to add, but we have no short-term
plans to do so.)
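As a toy illustration of that idea (this is not GFS code; the region and
contention-count names here are invented), an allocator that prefers the
region whose bitmap lock has seen the least contention might look like:

```python
# Hypothetical sketch: pick the allocation region whose bitmap lock
# has seen the fewest recent contention events.  Names and structure
# are made up for illustration; GFS's real allocator is more involved.

from dataclasses import dataclass

@dataclass
class Region:
    start_block: int
    free_blocks: int
    lock_contention: int  # recent blocked/failed lock attempts

def pick_region(regions):
    # Only consider regions with free blocks, then prefer the one
    # with the least observed lock contention.
    candidates = [r for r in regions if r.free_blocks > 0]
    return min(candidates, key=lambda r: r.lock_contention, default=None)

regions = [
    Region(0, 100, lock_contention=7),
    Region(1024, 50, lock_contention=1),
    Region(2048, 0, lock_contention=0),   # full, skipped
]
print(pick_region(regions).start_block)   # -> 1024
```

The point is just that the allocator steers around hot locks, not hot
disks, which is why it doesn't balance I/O load per se.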

> In the event of some multiple-catastrophe failure (where some data isn't 
> online at all, let alone redundant), how graceful is GFS?  Does it "rope 
> off" the data that's not available and still allow full access to the 
> data that is?  Or does the whole cluster go down?

Right now, a malfunctioning or missing disk can cause the whole cluster
to go down.  That's assuming the error isn't masked by hardware RAID or
CLVM mirroring (when we get there).

One of the next projects on my plate is fixing the filesystem so that a
node will gracefully withdraw itself from the cluster when it sees a
malfunctioning storage device.  The node will stay up and could
potentially continue accessing other GFS filesystems on other storage
devices.

We haven't thought much about making GFS continue to function when only
part of a filesystem is present.

> I notice the pricing for GFS is $2200.  Is that per seat?  And if so, 
> what's a "seat"?  Each client?  Each server with storage participating 
> in the cluster?  Both?  Some other distinction?

I'm not a marketing/sales person, just a code monkey, so take this with
a grain of salt:  It's per node running the filesystem.  I don't think
machines running GULM lock servers or GNBD block servers count as
machines that need to be paid for.

> Is AS a prereq for clients?  Servers?  Both?  Or will ES and WS boxes be 
> able to participate as well?

According to the web page, you should be able to add a GFS entitlement to
all RHEL product lines (WS, ES, and AS).


Ken Preslan <kpreslan redhat com>
