
Optimizing dd images of ext3 partitions: Only copy blocks in use by fs



Hello,

For bare-metal recovery I need to create complete disk images of the ext3 partitions of about 30 servers. I do this by creating LVM2 snapshots and then dd'ing each snapshot device to my backup media. (I am aware that backups created by this procedure are the equivalent of hitting the power switch at the moment the snapshot was taken.)

This works great and avoids a lot of seeks on highly utilized file systems. However, it wastes a lot of space on disks with nearly empty filesystems.

It would be a lot better if I could read only the blocks from the raw device that are actually in use by ext3 (the rest could be left sparse in the resulting image file). Is there a way to do this?
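A minimal sketch of what I have in mind, assuming the set of used block numbers is already known from somewhere and a fixed 4 KiB block size: copy only the used blocks, sorted into disk order, seeking in the output file so everything else stays a hole.

```python
import os

BLOCK_SIZE = 4096  # assumption: the ext3 filesystem's block size

def copy_used_blocks(src_path, dst_path, used_blocks):
    """Copy only the listed blocks from src to dst.

    Untouched ranges in dst are never written, so they stay sparse
    holes on filesystems that support them.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        # Sort so the source is read in disk order (fewer seeks).
        for blk in sorted(used_blocks):
            src.seek(blk * BLOCK_SIZE)
            data = src.read(BLOCK_SIZE)
            dst.seek(blk * BLOCK_SIZE)
            dst.write(data)
        # Preserve the full device size so the image restores 1:1.
        src.seek(0, os.SEEK_END)
        dst.truncate(src.tell())
```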

I am aware that e2image -r dumps all metadata. Is there a tool that dumps not only the metadata but also the data blocks? (Maybe even in a way that avoids seeks, by compiling a list of blocks first and then reading them in disk order.) If not: is there a tool I could extend to do so, or can you point me in the right direction?

(I tried dumpfs; however, it dumps inodes on a per-directory basis. Skimming through the source I did not see any optimization regarding seeks, so on highly populated filesystems dumpfs is still slower for me than full images with dd.)

Thanks a lot,
Martin



