[Linux-cluster] Unformatting a GFS cluster disk

christopher barry Christopher.Barry at qlogic.com
Fri Mar 28 19:51:54 UTC 2008


On Fri, 2008-03-28 at 07:42 -0700, Lombard, David N wrote:
> On Thu, Mar 27, 2008 at 03:26:55PM -0400, christopher barry wrote:
> > On Wed, 2008-03-26 at 13:58 -0700, Lombard, David N wrote:
> > ...
> > > > Can you point me at any docs that describe how best to implement snaps
> > > > against a gfs lun?
> > > 
> > > FYI, the NetApp "snapshot" capability is a result of their "WAFL" filesystem
> > > <http://www.google.com/search?q=netapp+wafl>.  Basically, they use a
> > > copy-on-write mechanism that naturally maintains older versions of disk blocks.
> > > 
> > > A fun feature is that the multiple snapshots of a file have the identical
> > > inode value
> > > 
> > 
> > fun as in 'May you live to see interesting times' kinda fun? Or really
> > fun?
> 
> The former.  POSIX says that two files with the identical st_dev and
> st_ino must be the *identical* file, e.g., hard links.  On a snapshot,
> they could be two *versions* of a file with completely different
> contents.  Google suggests that this contradiction also exists
> elsewhere, such as with the virtual FS provided by ClearCase's VOB.
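The POSIX identity rule mentioned above is easy to demonstrate: two names whose (st_dev, st_ino) pairs match are, by definition, the same file, as with hard links. A minimal sketch (file names are just placeholders) showing the check that snapshot versions can violate:

```python
import os
import tempfile

# Create a file and a hard link to it, then compare identity.
d = tempfile.mkdtemp()
orig = os.path.join(d, "data.txt")
link = os.path.join(d, "data-link.txt")

with open(orig, "w") as f:
    f.write("hello\n")
os.link(orig, link)  # hard link: same inode on the same device

a, b = os.stat(orig), os.stat(link)
same_file = (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)
print(same_file)  # True -- POSIX says these MUST be the identical file
```

On a snapshot volume, two *versions* of a file can pass this same test while holding completely different contents, which is exactly the contradiction described above; tools that rely on (st_dev, st_ino) to detect duplicates (tar, rsync, backup software) can be confused by it.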
> 

So, I'm trying to understand what to take away from this thread:
* I should not use them?
* I can use them, but keeping multiple snapshots introduces the risk that a
snap-restore could clobber files, e.g. by putting a deleted version of a
file back on top of a newer one?
* I should use them, but not keep multiples?
* something completely different ;)

Our primary goal here is to use snapshots so we can back up to tape from
the snapshot over FC, rather than pulling a massive amount of data over
GbE NFS through our NAT director from one of our cluster nodes to put it
on tape. We have thought about a dedicated GbE backup network, but would
rather use the 4Gb FC fabric we've already got.

If anyone can recommend a better way to accomplish that, I would love to
hear how other people are backing up large-ish (1TB) GFS filesystems to
tape.

Regards,
-C
