[Linux-cluster] Backup of a GFS2 volume

Steven Whitehouse swhiteho at redhat.com
Tue Aug 30 11:09:23 UTC 2011


Hi,

On Tue, 2011-08-30 at 12:51 +0200, Davide Brunato wrote:
> Hello,
> 
> I have a Red Hat 5.7 two-node cluster for electronic mail services where the mailboxes (maildir
> format) are stored on a GFS2 volume. The volume contains about 7,500,000 files occupying ~740 GB
> of disk space. Previously the mailboxes were on a GFS1 volume, and I migrated to GFS2 when we
> changed the SAN storage system.
> 
> Since incremental backups have become extremely slow (about 41-42 hours) after the migration from
> GFS to GFS2, I checked the configuration/tuning of the cluster and the volume mount options, with
> the help of Red Hat support, but the optimizations (<gfs_controld plock_rate_limit="0"/>, mounting
> with noatime and nodiratime) have not significantly accelerated the incremental backups.
> 
You don't mention how fast the backups were before...

The issue is most likely just that GFS2 caches more data (on average)
than GFS does. If you access data from the node where it is cached,
then it's faster; if you access that same data from another node, it
will be slower.

The key therefore is to divide your backup among the nodes in such a
way that each node's part of the backup works mostly with the working
set of files already cached on that node.
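
As a rough sketch only (the split by first letter of the mailbox name
below is purely illustrative, and rsync stands in here for whatever
backup tool you actually use; the split would need to match how mail
delivery is actually distributed between your two nodes):

# on node 1
rsync -a /var/mailboxes/[a-m]* /backup/node1/

# on node 2
rsync -a /var/mailboxes/[n-z]* /backup/node2/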

Either that, or as you've mentioned below, use your array's snapshot
capability to avoid this issue.


> So I tried another backup strategy, using the snapshot feature of our SAN storage system and doing
> backups outside the cluster environment. I mount the snapshots of the GFS2 volume on another server
> (also running RHEL 5.7) as a local (not clustered) filesystem:
> 
> /var/mailboxes type gfs2 (rw,noatime,nodiratime,lockproto=lock_nolock,localflocks,localcaching)
> 
> Full backups are slightly faster (from 24-25 hours down to 21-22 hours) and incremental backups
> are "acceptable" (about 9 hours). But the speed is still low in comparison to backups of ext3
> filesystems, particularly for incremental backups.
> 
It is bound to be a bit slower: ext3 can make some optimisations which
are just not possible in a clustered environment. On the other hand, if
it is taking that length of time to snapshot the GFS2 volume on the
array, then that seems to me to be an issue with the array rather than
the filesystem.

> I've noticed that the glocks are still used, even when I mount a snapshot of the mailbox GFS2
> partition as a local filesystem:
> 
The glocks are pretty low overhead when clustering is not involved.
> 
> # mount -t gfs2 /dev/mapper/posta_mbox_disk_vg-posta_mbox_disk_lvol1 /var/mailboxes -o lockproto=lock_nolock,noatime,nodiratime
> # time cp -Rp /var/mailboxes/prova* /var/tmp/test/
> 
> real	2m5.648s
> user	0m0.311s
> sys	0m13.243s
> # rm -Rf /var/tmp/test/*
> # time cp -Rp /var/mailboxes/prova* /var/tmp/test/
> 
> real	0m10.946s
> user	0m0.254s
> sys	0m10.634s
> 
This is a nice demonstration of the effects of accessing cached vs.
uncached data.
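
If you want to reproduce the cold-cache case without remounting, you
can drop the page, dentry and inode caches between runs (this is a
generic VM knob, not GFS2 specific):

# sync
# echo 3 > /proc/sys/vm/drop_caches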

> # cat /proc/slabinfo | grep gloc
> gfs2_glock         35056  35064    424    9    1 : tunables   54   27    8 : slabdata   3896   3896      0
> 
That is a pretty small number of glocks.
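
For a sense of scale, the slabinfo line above works out to roughly

  35064 glocks x 424 bytes each = ~14 MB

of memory held in the glock slab, which is tiny for a volume of this
size.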

> Is there a way to avoid the use of the glocks, or are they necessary to access the partition, even
> when it is mounted as a local filesystem?
> 
> Thanks
> 
> Davide Brunato
> 
Yes, they are required, but the overhead is pretty small, so I doubt
that is the real issue here.

Steve.