
[Linux-cluster] du vs df vs gfs2_tool df /mountpoint



Hi all.

I have a GFS2 cluster of three machines using an iSCSI disk.
The initial tests went fine and the cluster seems to work just great.
A couple of days ago I put this cluster through a batch of file-creation
operations, and while this generated enough load to gather some performance
data, the test was also inadvertently deleting some of the files being
created while they were still being written to.
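As I understand it, on any filesystem the blocks of an unlinked file stay allocated as long as some process still holds it open, so du stops counting them while df still does. A quick sketch on a local filesystem (nothing GFS2-specific, just to illustrate the effect):

```shell
# Create a file, hold it open, then unlink it.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=10 2>/dev/null
exec 3<"$tmp"                   # keep an open descriptor on fd 3
rm "$tmp"                       # unlink: du and ls no longer see it
ls "$tmp" 2>/dev/null || echo "unlinked"
wc -c <&3                       # the 10 MiB of data is still readable
exec 3<&-                       # close fd 3: only now are the blocks freed
```

Normally the space comes back as soon as the writer exits, though; the problem here is that it apparently never did.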

I thought this wouldn't be a problem and that a simple fsck would be able to
recover the space lost to those inodes.
But that doesn't seem to be the case, and I'll show why.

df -h output:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/gfsvg-gfslv
                      200G  200G  839M 100% /gfs

du -sh output:
[root vmcluster1 gfs]# du -sh
101G    .
 
gfs2_tool output:
[root vmcluster1 gfs]# gfs2_tool df /gfs
/gfs:
  SB lock proto = "lock_dlm"
  SB lock table = "iscsicluster:hd"
  SB ondisk format = 1801
  SB multihost format = 1900
  Block size = 4096
  Journals = 3
  Resource Groups = 800
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "iscsicluster:hd"
  Mounted host data = ""
  Journal number = 0
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE

  Type           Total          Used           Free           use%
  ------------------------------------------------------------------------
  data           52423184       52208404       214780         100%
  inodes         215085         305            214780         0%
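For the record, the gfs2_tool block counts are consistent with what df shows once you multiply by the 4096-byte block size (quick sanity check, values copied from the output above):

```shell
BLOCK_SIZE=4096
DATA_TOTAL=52423184   # "data" Total blocks from gfs2_tool df
DATA_FREE=214780      # "data" Free blocks
echo "total: $(( DATA_TOTAL * BLOCK_SIZE / 1024 / 1024 / 1024 )) GiB"  # ~200G, matches df Size
echo "free:  $(( DATA_FREE  * BLOCK_SIZE / 1024 / 1024 )) MiB"         # ~839M, matches df Avail
```

So df and gfs2_tool agree with each other; it's only du that disagrees.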

 

As you can see, du reports only 101G in use while df reports the
filesystem as 100% full.
I've run fsck.gfs2 several times, from one box at a time on all the boxes,
but it just doesn't seem to fix this.
It reminds me of a bug report I saw:
https://bugzilla.redhat.com/show_bug.cgi?id=325151
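In case it matters, what I ran was roughly the following (the filesystem has to be unmounted on every node before fsck.gfs2 touches the device; device path as in the df output above):

```shell
# On every node:
umount /gfs
# Then, from one node only:
fsck.gfs2 -y /dev/mapper/gfsvg-gfslv
# Remount on all nodes afterwards:
mount /gfs
```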

The first time I ran fsck I saw a lot of issues being fixed, and I could
almost swear that the first time I mounted the cluster FS after the repair
it was reporting the right size again.
The next time I mounted it, it all reverted back to what I show above.

I'm using

[root vmcluster1 gfs]# gfs2_tool version
gfs2_tool 0.1.44 (built Jul  6 2008 10:57:30)
Copyright (C) Red Hat, Inc.  2004-2006  All rights reserved.

Has anyone seen a similar issue?

Cheers,


   PECastro

