
Re: [linux-lvm] HDD Failure



On Mon, 2006-09-18 at 20:34 +0100, Nick wrote:
> Hi Fabien,
> 
> Yes, just one LV - "Vol1-share". 
> 
> Does this mean I've lost *everything*? I would have thought I should
> still be able to access everything on the two working disks?
> 
> I don't have a backup of this data.
I'm sure you will now have at least one backup of all your data.
> 
> Thanks, Nick
> 
> root nibiru:~# pvdisplay
>   --- Physical volume ---
>   PV Name               /dev/hda4
>   VG Name               Vol1
>   PV Size               106.79 GB / not usable 0
>   Allocatable           yes
>   PE Size (KByte)       4096
>   Total PE              27339
>   Free PE               27339
>   Allocated PE          0
>   PV UUID               Cq9xKF-W33m-BCLt-YyIc-EEfm-Btqc-eZLHNh
> 
>   --- Physical volume ---
>   PV Name               /dev/hdb1
>   VG Name               Vol1
>   PV Size               111.75 GB / not usable 0
>   Allocatable           yes
>   PE Size (KByte)       4096
>   Total PE              28609
>   Free PE               28609
>   Allocated PE          0
>   PV UUID               hwQrhH-iXHO-Bots-6zUQ-w8JG-Nmb3-shqZiX
> 
> root nibiru:~# vgdisplay
>   --- Volume group ---
>   VG Name               Vol1
>   System ID
>   Format                lvm2
>   Metadata Areas        2
>   Metadata Sequence No  3
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                0
>   Open LV               0
>   Max PV                0
>   Cur PV                2
>   Act PV                2
>   VG Size               218.55 GB
>   PE Size               4.00 MB
>   Total PE              55948
>   Alloc PE / Size       0 / 0
>   Free  PE / Size       55948 / 218.55 GB
>   VG UUID               RORj4f-LAOJ-83YS-34lD-4YRM-FKP8-8hgXLg
As you can see, you have no remaining LVs in this VG, and both of your
PVs are empty (Total PE == Free PE).

Now if you read the vgreduce manual page:

       --removemissing
              Removes all missing physical volumes from the volume group
              and makes the volume group consistent again.

              It's a good idea to run this option with --test first to
              find out what it would remove before running it for real.

              Any logical volumes and dependent snapshots that were
              partly on the missing disks get removed completely. This
              includes those parts that lie on disks that are still
              present.

              If your logical volumes spanned several disks including
              the ones that are lost, you might want to try to salvage
              data first by activating your logical volumes with
              --partial as described in lvm(8).

you can see that you've just lost all of your data.
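For anyone hitting this thread later: the salvage-then-cleanup sequence the
man page describes would look roughly like this. This is only a sketch for a
VG still containing a missing PV (the VG name "Vol1" is taken from Nick's
output above); it must be run as root, and --removemissing is destructive,
so always preview with --test first:

```shell
# Try to activate any surviving LVs despite the missing PV
# (read-only salvage window; copy data off now if anything activates):
vgchange -ay --partial Vol1

# Preview what vgreduce would discard, without changing anything:
vgreduce --test --removemissing Vol1

# Only after salvaging, make the VG consistent for real:
vgreduce --removemissing Vol1
```

In Nick's case vgdisplay already shows "Cur LV 0" and all PEs free, so there
is nothing left for --partial to activate.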
-- 
Fabien Jakimowicz <fabien jakimowicz com>

