
[linux-lvm] how to detect and take action on failed extents?



I was thinking about (loosely) emulating the dispersed allocation of extents across PVs when creating an LV, the way 3Par does. But then I wondered: how do I detect when a PV (and thus sections of an LV) goes offline, either partially or entirely? Obviously the block layer underneath will throw I/O errors, but does that necessarily mean the entire LV is marked bad, even though only a small portion may be unavailable?
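For what it's worth, the extent-to-PV mapping itself is visible from the reporting tools, so at least the "which part of the LV lives where" half is answerable. A rough sketch (volume group and device names are just placeholders):

```
# Show which PV(s) back each segment of every LV in the VG
lvs -o +devices,seg_start_pe,seg_size vg0

# A VG with a missing PV shows up as "partial"; the LV attr
# string gains a 'p' and pvs reports the device as missing
pvs -o +pv_attr vg0
vgs -o +vg_attr vg0
```

That tells you after the fact which LVs are affected by a dead PV, but as far as I know it doesn't push a notification anywhere, so you'd still be polling or watching the kernel log for the underlying I/O errors.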

Assuming I've layered MD RAID on top of several of these "dispersed" LVs, is there a way I can inform MD that a section of a RAID component has failed, and have it rebuild just the affected stripes? My understanding of typical MD behavior is that it expects the entire RAID component (e.g. a single LV) to be marked bad, and then rebuilds the whole block device after it's been replaced or fixed. That means an enormous amount of "unnecessary" copying and parity calculation which I'd rather avoid.
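The closest thing I'm aware of is MD's write-intent bitmap: it doesn't give you sub-device failure granularity, but if a member drops out and later comes back, only the stripes dirtied while it was gone get resynced rather than the whole device. A sketch, assuming the member LV names are hypothetical:

```
# Add an internal write-intent bitmap to an existing array
mdadm --grow --bitmap=internal /dev/md0

# If a member was kicked out and the underlying LV/PV is healthy
# again, --re-add uses the bitmap for a partial resync
mdadm /dev/md0 --re-add /dev/vg0/lv_member0
```

That only helps for transient outages of a whole member, though; I don't know of a way to tell MD "these extents within this member are bad, rebuild just those."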

The only naive solution I've hit upon is to construct lots of small MD devices where there is a direct 1:1 mapping between LV and PV (e.g. RAID5 devices of, say, 4-8 GB each) and then assemble them further into a concatenated device.
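In case it isn't clear what I mean, something along these lines (all names and sizes are made up; lvcreate lets you pin an LV to specific PVs by listing them after the size):

```
# One member LV per PV, so a PV failure maps to exactly one
# member of each small array
lvcreate -n md10_a -L 4G vg0 /dev/sda2
lvcreate -n md10_b -L 4G vg0 /dev/sdb2
lvcreate -n md10_c -L 4G vg0 /dev/sdc2
lvcreate -n md10_d -L 4G vg0 /dev/sdd2

# A small RAID5 set over those members
mdadm --create /dev/md10 --level=5 --raid-devices=4 \
      /dev/vg0/md10_a /dev/vg0/md10_b /dev/vg0/md10_c /dev/vg0/md10_d

# Repeat for md11, md12, ..., then concatenate the sets
mdadm --create /dev/md100 --level=linear --raid-devices=3 \
      /dev/md10 /dev/md11 /dev/md12
```

A failed PV then only forces a resync of the handful of small arrays that had a member on it, rather than one monolithic array, at the cost of a lot of moving parts to manage.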
