
[linux-lvm] LVM2: problem accessing volume after raidreconf to add hd to raid5


Hopefully someone can shed some light on what feels like a hopeless
situation.  I've googled and tried a few things and got to a certain point,
but I'm afraid of increasing the risk of losing data.

OK, first the original logical volume setup on raid5 which worked fine.
Here's the setup I followed:

mdadm --create /dev/md0 --level=5 --force --raid-devices=3 \
     /dev/hdc1 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate lvm-raid /dev/md0
vgdisplay lvm-raid  (and copy Free PE / Size for use in next command)
lvcreate -l 23845 lvm-raid -n lvm0
mkfs -t ext3 /dev/lvm-raid/lvm0
...and mount.
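(As an aside: I've since read that newer LVM2 versions can allocate all
the free extents directly, which would avoid copying the PE count out of
vgdisplay by hand.  Untested here, so treat as a sketch:)

```shell
# Alternative to reading "Free PE" from vgdisplay manually;
# assumes an LVM2 version that accepts percentage allocation.
lvcreate -l 100%FREE lvm-raid -n lvm0
```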

Everything worked perfectly.  Weeks later (this present miserable week),
we needed to add a disk to this RAID5 array.

So, I unmounted the volume, and:
mdadm -S /dev/md0
vgchange -an lvm-raid

then I created a new raidtab file with the extra device, then:
raidreconf -o /etc/raidtab -n /etc/raidtabnew -m /dev/md0
...raidreconf takes several days, but completes successfully...
...the RAID auto-rebuild is currently running...
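(For anyone following along, I'm watching the rebuild through the standard
md proc interface:)

```shell
# watch the md resync/rebuild progress
cat /proc/mdstat
# or refresh it every 2 seconds:
watch -n 2 cat /proc/mdstat
# mdadm can also report the array state and member disks:
mdadm --detail /dev/md0
```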

[this is where the problems start]

vgcfgrestore lvm-raid - errors with:
  Couldn't find device with uuid 's7XzWQ-3J9O-yaYj-anJG-hhMn-mloL-mPkl1F'.
  Couldn't find all physical volumes for volume group lvm-raid.
  Restore failed.

To get the UUID (this is possibly where I'm losing the plot due to ignorance):
pvdisplay /dev/md0
  --- NEW Physical volume ---
  PV Name               /dev/md0
  VG Name
  PV Size               1.09 TB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               6LGvgt-fLSV-xyfL-ItNc-YIiN-9Dhl-mYefsA

pvcreate -u s7XzWQ-3J9O-yaYj-anJG-hhMn-mloL-mPkl1F /dev/md0
  Physical volume "/dev/md0" successfully created
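(In hindsight, I gather pvcreate is normally given the metadata backup as
well, so the PE layout is recreated exactly rather than from scratch.  A
sketch of what I believe the canonical sequence is; the backup path below
is the LVM2 default and is assumed, not verified on this box:)

```shell
# recreate the PV label from the archived metadata
# (default backup location /etc/lvm/backup/<vgname> assumed)
pvcreate --uuid s7XzWQ-3J9O-yaYj-anJG-hhMn-mloL-mPkl1F \
         --restorefile /etc/lvm/backup/lvm-raid /dev/md0
# then restore the VG metadata from the same file
vgcfgrestore -f /etc/lvm/backup/lvm-raid lvm-raid
```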

vgcfgrestore lvm-raid
  Restored volume group lvm-raid

  --- Logical volume ---
  LV Name                /dev/lvm-raid/lvm0
  VG Name                lvm-raid
  LV UUID                000000-0000-0000-0000-0000-0000-000000
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                745.16 GB  (this is supposed to be 1TB+)
  Current LE             23845
  Segments               1
  Allocation             normal
  Read ahead sectors     1024

vgchange -ay
  1 logical volume(s) in volume group "lvm-raid" now active

vgdisplay lvm-raid
  --- Volume group ---
  VG Name               lvm-raid
  System ID             baksrv01138783727
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                256
  Cur LV                1
  Open LV               0
  Max PV                256
  Cur PV                1
  Act PV                1
  VG Size               745.16 GB
  PE Size               32.00 MB
  Total PE              23845
  Alloc PE / Size       23845 / 745.16 GB
  Free  PE / Size       0 / 0
  VG UUID               6JaxpJ-MmTj-Ve9X-WiT6-THcg-SOjV-9uO8Mv
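(Related guess: since the restored metadata predates the grow, I suspect
the PV itself also has to be told that the underlying device got bigger
before the VG will show the new capacity.  Something like, once the
rebuild completes:)

```shell
# tell LVM that /dev/md0 grew after the array was reshaped
pvresize /dev/md0
# VG Size should then reflect the full array
vgdisplay lvm-raid
```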


Before proceeding to try lvextend to use the extra ~400GB (from the new
device), I decided to check whether things were still happy, and tried to mount:

mount -a

VFS: Can't find ext3 filesystem on dev dm-0.
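(In case it helps with diagnosis, these are the read-only checks I know
of for probing whether an ext3 superblock is still where it should be;
none of them should write to the device:)

```shell
# does anything recognizable sit at the start of the LV?
file -s /dev/lvm-raid/lvm0
# dump the ext2/3 superblock header, read-only
dumpe2fs -h /dev/lvm-raid/lvm0
# fsck in check-only mode: -n answers 'no' to every prompt
e2fsck -n /dev/lvm-raid/lvm0
```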

It's clear from the above that my knowledge of LVM2 is dangerously thin.
I have a gut feeling that either:
a) I've screwed the data, and/or,
b) I'm missing something glaringly obvious due to acute ignorance.

Any help would be appreciated.


