[Linux-cluster] GFS2 performance on large files

Steven Whitehouse swhiteho at redhat.com
Thu Apr 23 08:18:34 UTC 2009


Hi,

On Thu, 2009-04-23 at 00:11 +0100, Andy Wallace wrote:
> Hi,
> 
> I've just set up a GFS2 filesystem for a client, but have some serious
> issues with the performance on large files (i.e. > 2GB). This is a real
> problem, as the files we'll be using range from approx. 20GB up to
> 170GB.
> 

If you are talking about the write side of things, then yes, we know
there is an issue which is related to the "page at a time" architecture
of the Linux VFS helpers. It is not easy to work around this because
it's an area that's rather prone to deadlocks. We do know there is an
issue with large streaming writes, though, and we will look at
solutions as soon as we can.
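
In the meantime, one way to check whether the buffered write path is
what is hurting you is to compare a normal dd write against one using
direct I/O, which bypasses the page cache (and hence the "page at a
time" path). Something along these lines, as a rough sketch (the path
and sizes are just examples, adjust for your setup):

  # Buffered write through the page cache (the path described above)
  dd if=/dev/zero of=/mnt/gfs2/testfile bs=1M count=10240

  # Direct I/O write, bypassing the buffered write path
  dd if=/dev/zero of=/mnt/gfs2/testfile bs=1M count=10240 oflag=direct

If the oflag=direct run is much faster, that points at the buffered
write path rather than at the storage itself.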

On the read side, though, I would expect performance to be pretty
good, so if you are having trouble there, that is something we should
look into.
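
When you measure reads, make sure you are not simply reading back from
the page cache, or the numbers will be misleading. A rough sketch
(again, the path is just an example):

  # Drop cached pages so the read really goes to the storage
  echo 3 > /proc/sys/vm/drop_caches

  # Time a large streaming read from the GFS2 mount
  dd if=/mnt/gfs2/testfile of=/dev/null bs=1M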

You don't mention which kernel version you are using. That's always
helpful to know in diagnosing issues like this.
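
If you can, include the output of the following when you reply; the
exact commands are just a suggestion, anything that identifies the
kernel and gfs2 module versions is fine:

  # Running kernel version
  uname -r

  # gfs2 kernel module details
  modinfo gfs2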

Steve.


> Hardware setup is:
> 2 x IBM X3650 servers with 2 x Dual Xeon, 4GB RAM, 2 x 2GB/s HBAs per
> server;
> Storage on IBM DS4700 - 48 x 1TB SATA disks
> 
> Files will be written to the storage via FTP, read via NFS mounts, both
> on an LVS virtual IP address.
> 
> Although it's not as quick as I'd like, I'm getting about 150MB/s on
> average when reading/writing files in the 100MB - 1GB range. However, if
> I try to write a 10GB file, this goes down to about 50MB/s. That's just
> doing dd to the mounted gfs2 on an individual node. If I do a get from
> an ftp client, I'm seeing half that; cp from an NFS mount is more like
> 1/5.
> 
> I've spent a lot of time reading up on GFS2 performance, but I haven't
> found anything useful for improving throughput with large files. Has
> anyone got any suggestions or managed to solve a similar problem?
> 



