Re: [Linux-cluster] GFS 2Tb limit

William Lee Irwin III wrote:
On Wed, Sep 01, 2004 at 12:31:41PM +0100, Stephen Willey wrote:
There was a post a while back asking about 2Tb limits and the consensus 
was that with 2.6 you should be able to exceed the 2Tb limit with GFS.  
I've been trying several ways to get GFS working including using 
software raidtabs and LVM (separately :) ), and every time I try to use 
mkfs.gfs on a block device larger than 2Tb I get the following:
Command: mkfs.gfs -p lock_dlm -t cluster1:gfs1 -j 8 /dev/md0
Result: mkfs.gfs: can't determine size of /dev/md0: File too large
(/dev/md0 is obviously something different when using LVM or direct 
block device access)
Does anyone have a working GFS filesystem larger than 2Tb (or know how 
to make one)?
Without being able to scale past 2Tb, GFS becomes pretty useless for us...
Thanks for any help,

Either your utility is not opening the file with O_LARGEFILE or an
O_LARGEFILE check has been incorrectly processed by the kernel. Please
strace the utility and include the compressed results as a MIME
attachment. Remember to compress the results, as most MTAs will reject
messages of excessive size, in particular, mine.
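
[Editor's sketch, not part of the original thread: the large-file failure mode can be reproduced without any cluster hardware. A sparse file past the 31-bit offset limit exercises the same code path — a utility built without O_LARGEFILE / _FILE_OFFSET_BITS=64 cannot open or size such a file correctly, and "File too large" is the strerror text for EFBIG. The path below is purely illustrative.]

```shell
# Mimic a >2Tb device with a sparse file (allocates no real disk space).
# A tool lacking large-file support fails to probe its size, which is
# consistent with the "can't determine size" error mkfs.gfs reports.
truncate -s 3T /tmp/bigdev.img
stat -c %s /tmp/bigdev.img    # 3T = 3298534883328 bytes
rm /tmp/bigdev.img
```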

-- wli
I'll do that.  It probably won't be today, as I'm going to simplify the setup a lot before sending you the results.  At the moment there are just too many elements in the path to the mkfs; I want to make the setup as basic as possible to narrow down the list of possible problems.
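
[Editor's sketch of the capture wli asked for; the device path and output filenames are placeholders, and this obviously only runs on a host with mkfs.gfs installed:]

```shell
# Trace only file- and descriptor-related syscalls to keep the log
# small, then compress before attaching to the mail.
strace -f -e trace=file,desc -o mkfs-gfs.strace \
    mkfs.gfs -p lock_dlm -t cluster1:gfs1 -j 8 /dev/md0
gzip -9 mkfs-gfs.strace    # attach mkfs-gfs.strace.gz
```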

Current Setup:

4 servers each serving 1Tb via GNBD
1 client importing all 4 GNBD devices
That client then creates one virtual device using either CLVM or MD
mkfs gives the File too large error
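
[Editor's sketch of the aggregation step in the setup above; the GNBD device names are hypothetical — the real names under /dev/gnbd depend on the export names:]

```shell
# Option A: concatenate the four 1Tb GNBD imports with MD (linear):
mdadm --create /dev/md0 --level=linear --raid-devices=4 \
    /dev/gnbd/disk0 /dev/gnbd/disk1 /dev/gnbd/disk2 /dev/gnbd/disk3

# Option B: the CLVM equivalent — one ~4Tb logical volume:
pvcreate /dev/gnbd/disk0 /dev/gnbd/disk1 /dev/gnbd/disk2 /dev/gnbd/disk3
vgcreate gfsvg /dev/gnbd/disk0 /dev/gnbd/disk1 /dev/gnbd/disk2 /dev/gnbd/disk3
lvcreate -l 100%FREE -n gfslv gfsvg    # yields /dev/gfsvg/gfslv
```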

What I'll Set Up:

1 machine with 2 x 2Tb FC RAIDs direct attached (sda and sdb)
Then I'll try both LVM and MD again to check that the problem is nothing to do with GNBD

