
[linux-lvm] rebuilding raided root volume



Hi Folks,

I've been busily rebuilding from a crash, and running into a sticky problem - my root volume is an LVM2 LV, built on top of an LVM2 PV, which in turn sits on an md RAID1 array. The machine has 4 SATA drives (2 channels, master/slave on each).

It was a funny crash - it first looked like a hardware failure corrupting two drives; turns out it was a single drive that failed in a way that it kept responding, but taking a VERY long time to do so (10s of seconds) - the system kept running, but everything slowed to a crawl. Not sure why the failure wasn't detected but that's a story for another day.

With the drive removed, everything came back up, but all 4 RAID1 devices had become degraded and did not automatically rebuild themselves - each was effectively running on a single drive.

After inserting a spare drive and partitioning it, I started doing hot adds (mdadm --add), and 3 of the 4 arrays are now working properly.
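For reference, the hot-add sequence on the three working arrays looked roughly like this (device names here are illustrative, not my actual ones):

```shell
# Check which arrays are degraded - a missing mirror shows up as [U_]
cat /proc/mdstat

# Hot-add the replacement partition to a degraded array
# (/dev/md2 and /dev/sdc2 are example names)
mdadm /dev/md2 --add /dev/sdc2

# Watch the resync progress
watch cat /proc/mdstat
```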

Which brings us to the fourth array - the one supporting my root volume. The configuration is something like this:

before crash:
/
Logical Volume
Physical Volume
RAID1 array - 2 active, one spare

after crash and partial recovery:
/
Logical Volume
Physical Volume
RAID1 array - showing inactive, running on spare drive alone
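To see where things stand at each layer of that stack, I've been poking at it with roughly the following (/dev/md0 is an illustrative name for the array under the root PV):

```shell
# LVM's view: the LV, its VG, and the PV sitting on the md device
lvs
pvs

# md's view of the array underneath the PV
mdadm --detail /dev/md0
cat /proc/mdstat
```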

On the other volumes, when I did a hot add (mdadm --add ...) the added drives started resyncing and now all is fine. On this array, the new drive shows as "spare rebuilding" but it doesn't really seem to be doing anything.
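What I've been checking to see whether the rebuild is actually running, plus one thing I may try next (md0 and sdc1 are illustrative names; the sysfs paths assume a reasonably modern kernel):

```shell
# What does md think it's doing?
cat /proc/mdstat                     # a progress bar appears here during resync
cat /sys/block/md0/md/sync_action    # should read "recover" while a spare rebuilds
mdadm --detail /dev/md0              # per-device state lines at the bottom

# One possible kick: pull the stuck spare out and re-add it
mdadm /dev/md0 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdc1
```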

So... my question becomes: if I can't figure out how to get md to rebuild the array "underneath" LVM, how do I unwind all of this and rebuild things - without leaving the machine unbootable for lack of a root volume?
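One unwind sequence I've been contemplating, since pvmove can migrate extents while the LV stays online - completely untested, and all the names ("rootvg", /dev/sdd5) are illustrative:

```shell
# Turn a spare partition into a temporary PV and add it to the VG
pvcreate /dev/sdd5
vgextend rootvg /dev/sdd5

# Migrate the root LV's extents off the degraded md PV (online)
pvmove /dev/md0 /dev/sdd5

# Drop the old PV from the VG, so the array can be torn down
# and rebuilt from scratch, then move the extents back later
vgreduce rootvg /dev/md0
```

Does that sound sane, or is there a less drastic way?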

Thanks for any suggestions anyone can offer.

Miles Fidelman



