[linux-lvm] restore lvm root filesystem

Martijn Brouwer e.a.m.brouwer at alumnus.utwente.nl
Tue Jan 17 22:39:09 UTC 2006


Hi,
The situation: somebody configured a computer with Fedora Core. The root
filesystem is on an LVM logical volume that used a software RAID 1 array
as its physical volume. The system crashed and afterwards the LVM did not
come up anymore. The administrator used one disk to install a new system
(the other should be identical) but wants to get some data from the other
disk. I am by no means an LVM expert; I tried the following to recover the
volume group:

vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
    Finding volume group "VolGroup00"
  Couldn't find device with uuid 'nIS7N6-h9Oo-6WNC-SSCY-LxA0-pJRG-oWDYIt'.
  Couldn't find all physical volumes for volume group VolGroup00.
  Volume group "VolGroup00" not found

This basically says: "Hey, one device is missing."
Then I tried to restore the original configuration. First I copied sda2
(the more or less unaffected disk) to sdb (the bad disk).
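From memory, the copy was a plain raw copy along these lines (whether it
went to the whole disk or only the matching partition I am not sure, and
the block size is a guess):

dd if=/dev/sda2 of=/dev/sdb2 bs=1M    # devices and block size from memory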

Then I tried to adapt the LVM configuration. First, save the config to a
file (by default it is written under /etc/lvm/backup/):
lvm vgcfgbackup -P

create a new pv with the missing uuid:

pvcreate -u nIS7N6-h9Oo-6WNC-SSCY-LxA0-pJRG-oWDYIt /dev/sdb2

Next I replaced the "unknown device" line in /etc/lvm/backup/VolGroup00
with /dev/sdb2.
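The edit was only the device hint in the pv0 section of that backup file
(see the attachment below), i.e. changing

    device = "unknown device"	# Hint only

into

    device = "/dev/sdb2"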
Then I tried to restore the LVM using the changed config file, which failed:

 lvm vgcfgrestore -v -t -f /etc/lvm/backup/VolGroup00 VolGroup00
  Test mode: Metadata will NOT be updated.
  Cannot change metadata for partial volume group VolGroup00
  Restore failed.
    Test mode: Wiping internal cache

================================================================

Then I made a second attempt:

I made a zero device:

echo '0 16000000 zero' | dmsetup create zeros
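(If I read the dmsetup table format correctly, that line is <start>
<length in 512-byte sectors> <target>, so this creates an all-zero device
of roughly 8 GB.)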

Changed /etc/lvm/lvm.conf to make the zero device a fallback option:

    # missing_stripe_filler = "/dev/ioerror"
    missing_stripe_filler = "/dev/mapper/zeros"

Then I removed the pv that I had created above on sdb2 and tried to bring
the vg up again.
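Removing that stray pv label was, as far as I remember, something like
this (the force flags are a guess; it may have complained because the VG
metadata still referenced the device):

pvremove -ff /dev/sdb2    # exact flags from memory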

lvm pvscan
  Couldn't find device with uuid
'nIS7N6-h9Oo-6WNC-SSCY-LxA0-pJRG-oWDYIt'.
  PV unknown device   VG VolGroup00   lvm2 [74.53 GB / 0    free]
  PV /dev/sda2        VG VolGroup00   lvm2 [74.41 GB / 64.00 MB free]
  Total: 2 [148.94 GB] / in use: 2 [148.94 GB] / in no VG: 0 [0   ]

Now only the unaffected partition shows up as a real pv.

lvchange -a y -P VolGroup00
  Partial mode. Incomplete volume groups will be activated read-only.
  Couldn't find device with uuid
'nIS7N6-h9Oo-6WNC-SSCY-LxA0-pJRG-oWDYIt'.
  Couldn't find device with uuid
'nIS7N6-h9Oo-6WNC-SSCY-LxA0-pJRG-oWDYIt'.
  device-mapper ioctl cmd 9 failed: Invalid argument
  Couldn't load device 'VolGroup00-LogVol00'.
  Couldn't find device with uuid
'nIS7N6-h9Oo-6WNC-SSCY-LxA0-pJRG-oWDYIt'.

This failed again. As I said, I am no expert and cannot solve this myself;
both attempts above were suggested by a kind soul on the IRC channel. I
hope somebody can give me some more suggestions.
I performed these operations from an Ubuntu live CD with a 2.6.12 kernel
and LVM2. The partial backup of the configuration is attached.

Bye,

Martijn


-------------- next part --------------
# Generated by LVM2: Tue Jan 17 12:34:18 2006

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgcfgbackup -P'"

creation_host = "ubuntu"	# Linux ubuntu 2.6.12-9-386 #1 Mon Oct 10 13:14:36 BST 2005 i686
creation_time = 1137501258	# Tue Jan 17 12:34:18 2006

VolGroup00 {
	id = "1AX2QH-tM3T-lPsp-pt59-o0Rd-6cAl-MhICnT"
	seqno = 3
	status = ["RESIZEABLE", "PARTIAL", "READ"]
	extent_size = 65536		# 32 Megabytes
	max_lv = 0
	max_pv = 0

	physical_volumes {

		pv0 {
			id = "nIS7N6-h9Oo-6WNC-SSCY-LxA0-pJRG-oWDYIt"
			device = "unknown device"	# Hint only

			status = ["ALLOCATABLE"]
			pe_start = 384
			pe_count = 2385	# 74.5312 Gigabytes
		}

		pv1 {
			id = "aUxDE1-AstI-goWu-q8mC-GlN9-RzBl-XH2pgh"
			device = "/dev/sda2"	# Hint only

			status = ["ALLOCATABLE"]
			pe_start = 384
			pe_count = 2381	# 74.4062 Gigabytes
		}
	}

	logical_volumes {

		LogVol00 {
			id = "gAOFuC-QMYe-Lyi8-RlCH-4uBL-T1TX-OEnayn"
			status = ["READ", "WRITE", "VISIBLE"]
			segment_count = 2

			segment1 {
				start_extent = 0
				extent_count = 2385	# 74.5312 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 0
				]
			}
			segment2 {
				start_extent = 2385
				extent_count = 2347	# 73.3438 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv1", 0
				]
			}
		}

		LogVol01 {
			id = "oj9dm4-8DTK-GQgf-Omdy-kcUS-yp4V-pfyxmT"
			status = ["READ", "WRITE", "VISIBLE"]
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 32	# 1024 Megabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv1", 2347
				]
			}
		}
	}
}

