
Re: Rsync --link-dest and ext3: can I increase the number of inodes?



On Mon, Sep 22, 2008 at 02:12:57PM +1000, Cameron Simpson wrote:
> On 21Sep2008 22:27, Theodore Tso <tytso mit edu> wrote:
> | On Sun, Sep 21, 2008 at 08:44:57PM -0400, Richard Michael wrote:
> | > (I run rsync --link-dest backups onto ext3 and am anticipating running
> | > out of inodes.) [...]
> 
> Hmm. While I take the point that each link tree consumes inodes for the
> directories, in a tree that changes little the use of new inodes for
> new/changed files should be quite slow.

There are two problems.  The first is that the number of inodes
consumed by directories will increase with each incremental backup.
If you don't eventually delete some of your older backups, then you
will run out of inodes.  There's no getting around that.
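To put rough numbers on that (a sketch using made-up figures, not
measurements from any real backup tree):

```shell
#!/bin/sh
# Hypothetical figures: a source tree of 50,000 directories and
# 500,000 files, 90 daily snapshots, ~1% of files changing per day.
DIRS=50000
FILES=500000
DAYS=90
CHURN=5000
# Each --link-dest snapshot recreates the full directory tree (all
# new inodes) plus one new inode per changed file; unchanged files
# are hard links and cost no new inode.
INODES=$(( FILES + DAYS * (DIRS + CHURN) ))
echo "$INODES inodes consumed"
```

The directory term grows linearly with the number of snapshots kept
and never stops, however little the data changes.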

The second problem is that each inode which has multiple links takes
up a small amount of memory while the backup or fsck runs.  If you
are backing up a very large number of files, this bookkeeping may
consume more address space than you have on a 32-bit system.  I have
a workaround that uses tdb, but it is quite slow.  (I have another
idea that might be faster, but I'll have to try it to see how well or
poorly it works.)
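As a rough sketch of why 32 bits can become the limit (the per-inode
cost below is an assumed round number, not a measured one):

```shell
#!/bin/sh
# Assume ~100 bytes of bookkeeping per multiply-linked inode.  With
# dozens of --link-dest snapshots, nearly every backed-up file is
# multiply linked, so tracking 30 million of them needs roughly:
PER_INODE=100       # bytes per tracked inode (assumption)
LINKED=30000000     # multiply-linked inodes across all snapshots
MB=$(( PER_INODE * LINKED / 1024 / 1024 ))
echo "${MB} MB of address space"
```

That is already close to the ~3 GB a 32-bit process can address,
before counting anything else the program needs in memory.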

> But a database is... more complicated and then requires special db-aware
> tools for a real recover. The hard link thing is very simple and very
> direct. It has its drawbacks (chmod/chown history being the main one
> that comes to my mind) but for many scenarios it works quite well.

Sure, but the solution may not scale so well for folks who are
backing up 50+ machines, including all of /usr and all of the
distribution-maintained files in it, or for folks who never delete
any of their past incremental backups.

> For Richard's benefit, I can report that I've used the hard link backup
> tree approach extensively on ext3 filesystems made with default mke2fs
> options (i.e. no special inode count size) and have never run out of
> inodes. Have you actually done some figuring to decide that running out
> of inodes is probable?

Sure, but how many machines are you backing up this way, and how many
days of backups are you keeping?  And have you ever tried running
"e2fsck -nftt /dev/hdXX" (you can do this on a live system if you
want; the -n means you won't write anything to disk, and the goal is
to see how much memory e2fsck needs) to make sure you can fix the
filesystem if you need to?
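And since the subject line asks about increasing the inode count: on
ext2/3 it is fixed when the filesystem is created, so the only way to
get more is to re-make the filesystem with a smaller bytes-per-inode
ratio.  A sketch (/dev/sdXX is a placeholder; mke2fs destroys the
existing data, so only after the data is safely elsewhere):

```shell
#!/bin/sh
# Halving the bytes-per-inode ratio from the 8192 default to 4096
# roughly doubles the inode count:
#   mke2fs -j -i 4096 /dev/sdXX
# Resulting inode count for, say, a 500 GB filesystem:
BYTES_PER_INODE=4096
FS_BYTES=$(( 500 * 1024 * 1024 * 1024 ))
echo $(( FS_BYTES / BYTES_PER_INODE ))
```

Current totals and usage can be checked with "df -i" on the mount
point or "tune2fs -l /dev/sdXX".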

					- Ted

