
Re: [linux-lvm] lvreduce nightmare



Long ago, Nostradamus foresaw that on May 18, tariq wali would write:

> Appreciate the response, although I realized that a messed-up large LVM
> volume requires time to repair, which I didn't have in this case, so I
> resorted to backup and recreated the volumes anew.

You get lots of bonus points for actually having that backup!  Too many
people posting here in panic have no such option...

> I realize I went wrong with resize2fs, thinking that it would reduce my LV
> by 100G, but it actually reduced the total volume size to 100G, i.e. on an
> LV of 1.7T:
>
> resize2fs /dev/vg0/data 100G   (I thought it would reduce the fs by 100G,
> but it dropped the filesystem size to a total of 100G)

> So I guess, to do this right, I should have run
>
> resize2fs /dev/vg0/data 1.6T (or 1600G)
>
> and then lvreduce -n data -L 100G /dev/vg0/data (to reduce the LV by 100G)

I'm pretty sure resize2fs would have complained about the impossible
task you set it, and exited with an error code before doing anything.
The *real* problem was then reducing the LV with the (unresized) fs.
The fsadm script used to check for and prevent this kind of error.
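For the record, the safe shrink sequence looks roughly like the sketch below. Sizes, the VG/LV names, and the ext4 filesystem type are taken from the thread and are illustrative; note also that the quoted "lvreduce -L 100G" would set the LV size *to* 100G, not reduce it *by* 100G:

```shell
# Assumed setup: an ext4 filesystem on LV /dev/vg0/data (~1.7T),
# to be shrunk by 100G. All names and sizes are illustrative.

umount /dev/vg0/data        # ext4 cannot be shrunk while mounted
e2fsck -f /dev/vg0/data     # resize2fs requires a freshly checked fs

# Step 1: shrink the FILESYSTEM first, to the new, smaller target size.
resize2fs /dev/vg0/data 1600G

# Step 2: only then shrink the LV. The leading '-' matters:
# "-L -100G" reduces the LV *by* 100G; "-L 100G" sets it *to* 100G.
lvreduce -L -100G /dev/vg0/data

# Safer one-step alternative: let lvreduce drive fsadm, which checks
# and resizes the filesystem before shrinking the LV.
lvreduce -r -L -100G /dev/vg0/data
```

The one-step form (-r / --resizefs) is the fsadm path mentioned above, and is the least error-prone way to avoid exactly this accident.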

> I even tried vgcfgrestore on the archived LVM metadata file, but that just
> restored the metadata (back to the original volume size; however, I still
> had a bad filesystem).

Even after reducing the LV, perfect recovery was still possible (before
allocating/extending any other LVs) by vgcfgrestore.  The point of
no return was nearly reached when you then ran e2fsck on the truncated fs.
You could still have escaped unscathed if you hadn't answered 'yes'
to its offer to "fix" the superblocks....
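For the archives, that recovery path would look roughly like this. It is a sketch only: the archive file name is hypothetical (pick the real one from the listing), and it only works while the freed extents have not been reallocated to another LV:

```shell
# List the metadata archives LVM wrote before each change to vg0.
vgcfgrestore --list vg0

# Restore the metadata from just before the bad lvreduce; this file
# name is hypothetical -- use the matching entry from the listing.
vgcfgrestore -f /etc/lvm/archive/vg0_00042-1234567890.vg vg0

# Reactivate the LV, then check READ-ONLY first (-n). With the old
# extents back under the LV, the fs should check out; do not let
# e2fsck "fix" superblocks on a still-truncated device.
lvchange -ay vg0/data
e2fsck -n /dev/vg0/data
```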
