How to prevent and recover from accidental file deletion in Linux
You've gotten the lecture before, or else you've given it to someone else: The best way to recover lost files is to back them up in the first place. Unfortunately, accidents happen. People delete files they didn't mean to delete, and they plead with their friendly system administrator to recover everything. Consider this overview of practices, programs, and techniques to cut down on file recovery drama, and improve your disaster response.
You knew this would come first. Data recovery is a time-intensive process and rarely produces 100% correct results. If you don't have a backup plan in place, start one now.
Better yet, implement two. First, provide users with local backups with a tool like rsnapshot. This utility creates snapshots of each user's data in a ~/.snapshots directory, making it trivial for them to recover their own data quickly.
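For instance, if rsnapshot keeps its most recent snapshot in daily.0 (interval names and paths depend on your rsnapshot.conf, so treat the layout below as hypothetical), recovery is nothing more than a cp. This sketch simulates the snapshot layout in a scratch directory so it is safe to run anywhere:

```shell
# Work in a scratch directory standing in for the user's home.
HOME_DIR=$(mktemp -d)

# Hypothetical rsnapshot layout: daily.0 holds the newest snapshot.
mkdir -p "$HOME_DIR/.snapshots/daily.0/documents"
echo "quarterly figures" > "$HOME_DIR/.snapshots/daily.0/documents/report.txt"

# The live copy was accidentally deleted; the user restores it
# themselves with an ordinary copy, no administrator required.
mkdir -p "$HOME_DIR/documents"
cp "$HOME_DIR/.snapshots/daily.0/documents/report.txt" \
   "$HOME_DIR/documents/report.txt"

cat "$HOME_DIR/documents/report.txt"   # prints: quarterly figures
```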
There are a great many other open source backup applications that permit your users to manage their own backup schedules.
Second, while these local backups are convenient, also set up a remote backup plan for your organization. Tools like AMANDA or BackupPC are solid choices for this task. You can run them as a daemon so that backups happen automatically.
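Making those backups automatic only takes a scheduler, and cron is the classic choice. A hypothetical /etc/cron.d entry (the paths and intervals here are assumptions; match them to your rsnapshot or AMANDA configuration):

```shell
# /etc/cron.d/backups -- hypothetical schedule.
# Rotate rsnapshot intervals: an "hourly" run every four hours
# and a "daily" run each night at 03:30.
0 */4 * * *  root  /usr/bin/rsnapshot hourly
30 3 * * *   root  /usr/bin/rsnapshot daily
```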
Backup planning and preparation pay for themselves in both time and peace of mind. There's nothing like not needing emergency response procedures in the first place.
On modern operating systems, there is a Trash or Bin folder where users drag the files they don't want out of sight without deleting them just yet. Traditionally, the Linux terminal has no such holding area, so many terminal power users have the bad habit of permanently deleting data they believe they no longer need. Since there is no "undelete" command, this habit can be quite problematic should a power user (or administrator) accidentally delete a directory full of important data.
Many users say they favor the absolute deletion of files, claiming that they prefer their computers to do exactly what they tell them to do. Few of those users, though, forego the rm command for the more thorough shred, which really does remove their data. In other words, most terminal users invoke rm because it removes data, but take comfort in knowing that file recovery tools exist as a hacker's un-rm. Still, using those tools takes up their administrator's precious time. Don't let your users (or yourself) fall prey to this breach of logic.
If you really want to remove data, then rm is not sufficient. Use the shred -u command instead, which overwrites the specified data before deleting it.
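The difference is easy to demonstrate on a throwaway file: shred overwrites the file's blocks so recovery tools have nothing to carve out, and -u removes the file afterward. A safe sketch using a scratch directory:

```shell
# Create a throwaway file in a scratch directory.
WORK=$(mktemp -d)
echo "sensitive notes" > "$WORK/secret.txt"

# shred overwrites the file's contents (three passes by default);
# -u then truncates and removes the file itself.
shred -u "$WORK/secret.txt"

ls -A "$WORK"   # prints nothing: the file is gone
```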
However, if you don't want to actually remove data, don't use rm. The command offers no undo, yet the data it deletes can often still be recovered, which is the worst of both worlds. Instead, use trashy or trash-cli to "delete" files into a trash bin while using your terminal, like so:
$ trash ~/example.txt
$ trash --list
One advantage of these commands is that they use the same trash bin as your desktop. With them, you can recover your trashed files either by opening your desktop Trash folder or from the terminal.
If you've already developed a bad rm habit and find the trash command difficult to remember, create an alias for yourself:
$ echo "alias rm='trash'" >> ~/.bashrc
Even better, create this alias for everyone. Your time as a system administrator is too valuable to spend hours struggling with file recovery tools just because someone mistyped an rm command.
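One way to do that on distributions whose login shells source /etc/profile.d/*.sh (the filename below is hypothetical, and it assumes trashy or trash-cli is installed system-wide):

```shell
# /etc/profile.d/trash-alias.sh -- hypothetical filename.
# Assumes a "trash" command (trashy or trash-cli) is on everyone's PATH.
alias rm='trash'
```

Note that aliases take effect only in interactive shells; scripts that call rm keep their usual behavior, which is generally what you want.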
Unfortunately, it can't be helped. At some point, you'll have to recover lost files, or worse. Let's take a look at emergency response best practices to make the job easier. Before you even start, understanding what caused the data to be lost in the first place can save you a lot of time:
- If someone was careless with their trash bin habits or botched a dangerous rm or shred command, then you need to recover a deleted file.
- If someone accidentally overwrote a partition table, then the files aren't really lost. The drive layout is.
- In the case of a dying hard drive, recovering data is secondary to the race against decay to recover the bits themselves (you can worry about carving those bits into intelligible files later).
No matter how the problem began, start your rescue mission with a few best practices:
- Stop using the drive that contains the lost data, no matter what the reason. The more you do on this drive, the more you risk overwriting the data you're trying to rescue. Halt and power down the victim computer, and then either reboot using a thumb drive, or extract the damaged hard drive and attach it to your rescue machine.
- Do not use the victim hard drive as the recovery location. Place rescued data on a spare volume that you're sure is working. Don't copy it back to the victim drive until it's been confirmed that the data has been sufficiently recovered.
- If you think the drive is dying, your first priority after powering it down is to obtain a duplicate image, using a tool like ddrescue.
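GNU ddrescue is a common choice for this imaging job. The device and mount point below are hypothetical, and the map file is what lets you stop and resume without re-reading sectors that already copied cleanly:

```shell
# First pass (-n): grab everything that reads cleanly and skip the
# slow scraping of bad areas. /dev/sdb is the failing drive; the
# image goes to a healthy volume mounted at /mnt/rescue.
ddrescue -n /dev/sdb /mnt/rescue/victim.img /mnt/rescue/victim.map

# Second pass: retry the bad areas (-r3 = three retries) using
# direct disc access (-d), reusing the same map file.
ddrescue -d -r3 /dev/sdb /mnt/rescue/victim.img /mnt/rescue/victim.map
```

Recovery tools can then work on victim.img instead of punishing the dying hardware further.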
Once you have a sense of what went wrong, it's time to choose the right tool to fix the problem. Two such tools are Scalpel and TestDisk, both of which operate just as well on a disk image as on a physical drive.
Practice (or, go break stuff)
At some point in your career, you'll have to recover data. The smart practices discussed above can minimize how often this happens, but there's no avoiding this problem. Don't wait until disaster strikes to get familiar with data recovery tools. After you set up your local and remote backups, implement command-line trash bins, and limit the rm command, it's time to practice your data recovery techniques.
Download and practice using Scalpel, TestDisk, or whatever other tools you feel might be useful. Be sure to practice data recovery safely, though. Find an old computer, install Linux onto it, and then generate, destroy, and recover. If nothing else, doing so teaches you to respect data structures, filesystems, and a good backup plan. And when the time comes and you have to put those skills to real use, you'll appreciate knowing what to do.