e2defrag - Unable to allocate buffer for inode priorities

Andreas Dilger adilger at clusterfs.com
Tue Oct 31 17:10:50 UTC 2006


On Oct 13, 2006  14:13 +0200, Magnus Månsson wrote:
> Today I tried to defrag one of my filesystems. It's a 3.5T filesystem
> built on 6 software RAIDs merged together using LVM. I was running ext3
> but removed the journal flag with

> Why do I want to defrag? Well, fsck gives this nice info to me:
> /dev/vgraid/data: 227652/475987968 files (41.2% non-contiguous), 847539147/951975936 blocks
> 
> 41% sounds like a lot to me, and with the constant reading of files from
> these drives it's too slow already.

The 41% isn't necessarily bad if the files are very large.  For large
files it is inevitable that there will be fragmentation after 125MB or so.
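
The 125MB figure is roughly the size of one block group, which is about as
far as a single contiguous allocation can go.  A quick back-of-the-envelope
check (assuming the default 4KB block size):

  # one block bitmap covers 8 * 4096 = 32768 blocks per group, so a
  # group holds 32768 * 4096 bytes of file data
  thor:~# echo $((32768 * 4096 / 1048576))
  128

i.e. about 128MB per group, so a file larger than that necessarily spans
multiple groups and gets counted as non-contiguous.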

A bigger problem is if the filesystem is constantly very nearly full, or if
your applications are appending a lot (e.g. a mail spool).

> So now it was time to defrag; I used this command:
> thor:~# e2defrag -r /dev/vgraid/data

This program is dangerous to use and any attempts to use it should be
stopped.  It hasn't been updated in such a long time that it doesn't
even KNOW that it is dangerous (i.e. it doesn't check the filesystem
version number or feature flags).
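
For reference, the feature flags it ignores can be listed with dumpe2fs
from e2fsprogs, e.g.:

  thor:~# dumpe2fs -h /dev/vgraid/data | grep -i features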

In the meantime, I would suggest making as much free space in the filesystem
as you can, finding the files that are very fragmented (via the filefrag
program), copying each of them to a new temp file, and renaming it over the
old file.  That should help the worst cases.
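
Something along these lines (an untested sketch -- /data stands in for
wherever /dev/vgraid/data is mounted, and the 100M size and 50-extent
cutoffs are arbitrary examples; don't run it on files that are open for
writing):

  find /data -xdev -type f -size +100M | while read -r f; do
      # filefrag prints "<file>: N extents found"; field 2 is the count
      n=$(filefrag "$f" | awk '{print $2}')
      if [ "$n" -gt 50 ]; then
          # copy to a temp file (gets a fresh, hopefully contiguous
          # allocation), then rename it over the original
          cp -p "$f" "$f.defrag.tmp" && mv "$f.defrag.tmp" "$f"
      fi
  done

Note that the copy gets a new inode, so hard links to the original file
will not follow it.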

There is also a discussion about implementing online defragmentation, but
that is still a ways away.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
