
Re: [Linux-cluster] GFS 2Tb limit

William Lee Irwin III wrote:
On Wed, Sep 01, 2004 at 12:31:41PM +0100, Stephen Willey wrote:

There was a post a while back asking about 2Tb limits, and the consensus was that with 2.6 you should be able to exceed the 2Tb limit with GFS. I've been trying several ways to get GFS working, including software RAID (raidtab) and LVM (separately :) ), and every time I try to use mkfs.gfs on a block device larger than 2Tb I get the following:
Command: mkfs.gfs -p lock_dlm -t cluster1:gfs1 -j 8 /dev/md0
Result: mkfs.gfs: can't determine size of /dev/md0: File too large
(/dev/md0 is obviously something different when using LVM or direct block device access)
Does anyone have a working GFS filesystem larger than 2Tb (or know how to make one)?
Without being able to scale past 2Tb, GFS becomes pretty useless for us...
Thanks for any help,

Either your utility is not opening the file with O_LARGEFILE, or an
O_LARGEFILE check has been incorrectly processed by the kernel. Please
strace the utility and include the compressed results as a MIME
attachment. Remember to compress the results, as most MTAs will reject
messages of excessive size, mine in particular.

-- wli

MD'd two 1.8Tb RAIDs together to form a 3.8Tb /dev/md0
mkfs.jfs /dev/md0 was successful
mkfs.gfs -p lock_dlm -t cluster1:gfs1 -j 8 /dev/md0 failed as expected with a File too large error

The strace output should be attached...



Attachment: straceresults.gz
Description: GNU Zip compressed data
