Use tar to append?

Les Mikesell lesmikesell at gmail.com
Fri Mar 9 03:30:45 UTC 2007


Mike McCarty wrote:
> I have a backup script which I run on some sort of regular
> basis. I use tar to create an archive, which I then split
> into pieces of CDROM size (703MB) and write to CDROMs.
> 
> Originally, I wrote it such that it added directories one
> by one to the archive, but found that it took inordinately
> long periods of time. A little investigation showed that
> each time it started up tar, the entire archive got copied,
> another directory got added, and then the archive was renamed.
> 
> Since there are about 15 directories that I back up, that
> meant *lots* of copying.
> 
> So, now I just put them all in there at once. But I'm missing
> the progress indicators I used to get. For example, during
> the "verification phase" I used to have...
> 
> for path in $backupdirs
> do
>     echo "Verifying $path...." | wall
>     tar tzf /tmp/backup.tar.gz 1>/dev/null
>     if [ $? -eq 0 ]
>         then echo "$path: verified" | wall
>         else echo "$path: error(s) in verify" | wall 1>&2
>     fi
> done
> 
> Is there a way to get tar to use the archive it is adding to
> "in place"? I've read man and info, and I see --append
> (which is what I was using) and --catenate (which looks
> marginally faster, perhaps, since I compress), but see
> no way to make it "just do it" without doing an implicit
> copy.
> 

You can't append to an already gzipped file, so tar must be copying the 
previous contents: it uncompresses from the start and recompresses a new 
copy so that the compressor ends up in the right state to continue.  Have 
you tried creating the archive without -z, then piping it through gzip 
and split at the end?
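
Something along these lines might do it -- an untested sketch, with the 
/tmp file names just as examples and $backupdirs being the same list of 
~15 directories your script already uses:

  # Start with an empty uncompressed archive.
  tar cf /tmp/backup.tar -T /dev/null

  # tar -r (append) on an uncompressed archive just writes the new
  # members onto the end, no copying, so you keep the per-directory
  # progress messages:
  for path in $backupdirs
  do
      echo "Adding $path...." | wall
      tar rf /tmp/backup.tar $path
  done

  # Compress and split into CD-sized (703MB) pieces in one pass:
  gzip -c /tmp/backup.tar | split -b 703m - /tmp/backup.tar.gz.

To verify afterwards, cat the pieces back together and let tar read 
from stdin:

  cat /tmp/backup.tar.gz.* | tar tzf - 1>/dev/null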

-- 
   Les Mikesell
     lesmikesell at gmail.com



