
[linux-lvm] FW: Problem while migrating from environment SD-LVM to SD-MD-LVM



Hi,

Please reply to the mail below. We need this information urgently; it is
very critical for us.

Thanks & Regards,
Sandhya

-----Original Message-----
From: Santosh Rokade [mailto:santosh rokade patni com]
Sent: Tuesday, January 04, 2005 7:39 PM
To: linux-lvm redhat com
Cc: sandhya suman patni com; martin george patni com
Subject: Problem while migrating from environment SD-LVM to SD-MD-LVM


Hi,

Problem Description:
====================
I am trying to migrate from an environment having LVM on top of SD devices
to an environment having LVM on top of MD, with MD on top of the SD devices.
The migration is successful; however, the output of pvscan and vgscan
shows that the VG is still in an inactive and exported state.

Please refer attached diagram for migration details.

The migration steps that I am following are as follows (for details,
please refer to the console logs given below):
1. Unmount logical volume
2. Deactivate volume group
3. Export volume group by command vgexport
4. Edit /etc/raidtab to have an entry for an MD device on top of each corresponding SD partition
5. Create md device using mkraid command
6. Run vgscan command
7. Vgimport on md device
8. Mount logical volume
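The eight steps above can be sketched as a shell sequence. This is a dry-run sketch, not a verified procedure: the names (vgtest, /dev/md0-/dev/md2, /dev/vgtest/lvtest, /mnt) are taken from the console logs below, and the `run` helper only prints each command so the sequence can be reviewed before anything destructive (mkraid in particular) is executed:

```shell
#!/bin/sh
# Dry-run sketch of migration steps 1-8. The run() helper only prints
# each command; replace it with direct execution once reviewed.
run() { echo "$@"; }

run umount /mnt                                  # 1. unmount logical volume
run vgchange -a n vgtest                         # 2. deactivate volume group
run vgexport vgtest                              # 3. export volume group
# 4. edit /etc/raidtab: one linear raiddev per underlying SD partition
run mkraid -R /dev/md0 /dev/md1 /dev/md2         # 5. create MD devices (destructive!)
run vgscan                                       # 6. rebuild /etc/lvmtab
run vgimport vgtest /dev/md0 /dev/md1 /dev/md2   # 7. import VG on MD devices
run mount /dev/vgtest/lvtest /mnt                # 8. remount logical volume
```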

Here the pvscan and vgscan output shows the VG in an inactive and exported
state.

However, if I run the following commands after step #8, the pvscan and
vgscan output is correct:

9. Unmount logical volume
10. Deactivate vg
11. raidstop all md devices beneath vg
12. raidstart all md devices beneath vg
13. Activate vg
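The workaround (steps 9-13) can be sketched the same way, again as a dry-run with names taken from the logs. That raidstop/raidstart accept several MD devices in one invocation is an assumption here; they can equally be run once per device:

```shell
#!/bin/sh
# Dry-run sketch of workaround steps 9-13; run() only prints commands.
run() { echo "$@"; }

run umount /mnt                           # 9.  unmount logical volume
run vgchange -a n vgtest                  # 10. deactivate the VG
run raidstop /dev/md0 /dev/md1 /dev/md2   # 11. stop all MD devices beneath the VG
run raidstart /dev/md0 /dev/md1 /dev/md2  # 12. restart them
run vgchange -a y vgtest                  # 13. reactivate the VG
```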

Inputs Expected:
================
1. Please let us know why pvscan and vgscan show the VG in an exported and
inactive state even though it has been imported and is accessible through
its logical volume.
(The vgdisplay -v output is correct.)

2. After migration (steps #1 to #8), if I try to run the lvscan or vgchange
commands, they give output as below:
linux:~ # lvscan
lvscan -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please run
vgscan

3. Please let us know whether the steps mentioned above are correct.

4. Why does executing steps #9 to #13 after step #8 solve the problem?


***** Please refer to the console logs below for details of the steps carried out: *****

===========================================================
linux:~ # fdisk -l /dev/sdd

Disk /dev/sdd: 255 heads, 63 sectors, 652 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdd1             1       200   1606468+  8e  Linux LVM
/dev/sdd2           201       400   1606500   8e  Linux LVM
/dev/sdd3           401       652   2024190   8e  Linux LVM
linux:~ # pvcreate /dev/sdd1 /dev/sdd2 /dev/sdd3
pvcreate -- physical volume "/dev/sdd1" successfully created
pvcreate -- physical volume "/dev/sdd2" successfully created
pvcreate -- physical volume "/dev/sdd3" successfully created

linux:~ # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdd1" is in no VG  [1.53 GB]
pvscan -- inactive PV "/dev/sdd2" is in no VG  [1.53 GB]
pvscan -- inactive PV "/dev/sdd3" is in no VG  [1.93 GB]
pvscan -- total: 3 [4.99 GB] / in use:  [0] / in no VG: 3 [4.99 GB]

linux:~ # vgcreate vgtest /dev/sdd1 /dev/sdd2 /dev/sdd3
vgcreate -- INFO: using default physical extent size 4 MB
vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
vgcreate -- doing automatic backup of volume group "vgtest"
vgcreate -- volume group "vgtest" successfully created and activated

linux:~ # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "vgtest"
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume
group

linux:~ # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE   PV "/dev/sdd1" of VG "vgtest" [1.53 GB / 1.53 GB free]
pvscan -- ACTIVE   PV "/dev/sdd2" of VG "vgtest" [1.53 GB / 1.53 GB free]
pvscan -- ACTIVE   PV "/dev/sdd3" of VG "vgtest" [1.93 GB / 1.93 GB free]
pvscan -- total: 3 [4.99 GB] / in use: 3 [4.99 GB] / in no VG:  [0]

linux:~ # lvcreate -L 100M -n lvtest vgtest
lvcreate -- doing automatic backup of "vgtest"
lvcreate -- logical volume "/dev/vgtest/lvtest" successfully created

linux:~ # lvscan
lvscan -- ACTIVE            "/dev/vgtest/lvtest" [100 MB]
lvscan -- 1 logical volumes with 100 MB total in 1 volume group
lvscan -- 1 active logical volumes

linux:~ # mke2fs /dev/vgtest/lvtest
mke2fs 1.28 (31-Aug-2002)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
25688 inodes, 102400 blocks
5120 blocks (5.00%) reserved for the super user
First data block=1
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

linux:~ # mount /dev/vgtest/lvtest /mnt/
linux:~ # mount
/dev/hda7 on / type ext3 (rw)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hda8 on /boot type ext3 (rw)
shmfs on /dev/shm type shm (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/vgtest/lvtest on /mnt type ext2 (rw)

linux:~ # touch /mnt/file
linux:~ # ls -al /mnt/
total 17
drwxr-xr-x    3 root     root         1024 Oct 30 13:40 .
drwxr-xr-x   24 root     root         4096 Oct 30 09:08 ..
-rw-r--r--    1 root     root             Oct 30 13:40 file
drwx------    2 root     root        12288 Oct 30 13:39 lost+found

*************************  MIGRATION STEPS START **************************

linux:~ # umount /mnt/
linux:~ # vgchange -a n vgtest
vgchange -- volume group "vgtest" successfully deactivated

linux:~ # vgexport vgtest
vgexport -- volume group "vgtest" successfully exported

linux:~ # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found exported volume group "vgtestPV_EXP"
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume
group

linux:~ # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdd1"  is in EXPORTED VG "vgtest" [1.53 GB /
1.43 GB free]
pvscan -- inactive PV "/dev/sdd2"  is in EXPORTED VG "vgtest" [1.53 GB /
1.53 GB free]
pvscan -- inactive PV "/dev/sdd3"  is in EXPORTED VG "vgtest" [1.93 GB /
1.93 GB free]
pvscan -- total: 3 [4.99 GB] / in use: 3 [4.99 GB] / in no VG:  [0]

linux:~ # cat /etc/raidtab
raiddev /dev/md0
           raid-level              linear
           nr-raid-disks           1
           persistent-superblock   1
           chunk-size              32

           device                  /dev/sdd1
           raid-disk               0

raiddev /dev/md1
           raid-level              linear
           nr-raid-disks           1
           persistent-superblock   1
           chunk-size              32

           device                  /dev/sdd2
           raid-disk               0

raiddev /dev/md2
           raid-level              linear
           nr-raid-disks           1
           persistent-superblock   1
           chunk-size              32

           device                  /dev/sdd3
           raid-disk               0


linux:~ # mkraid -R /dev/md0 /dev/md1 /dev/md2
DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdd1, 1606468kB, raid superblock at 1606400kB
DESTROYING the contents of /dev/md1 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/sdd2, 1606500kB, raid superblock at 1606400kB
DESTROYING the contents of /dev/md2 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md2
analyzing super-block
disk 0: /dev/sdd3, 2024190kB, raid superblock at 2024064kB

linux:~ # cat /proc/mdstat
Personalities : [linear]
read_ahead 1024 sectors
md2 : active linear sdd3[0]
      2024064 blocks 32k rounding

md1 : active linear sdd2[0]
      1606400 blocks 32k rounding

md0 : active linear sdd1[0]
      1606400 blocks 32k rounding

unused devices: <none>

linux:~ # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/md0"  is in EXPORTED VG "vgtest" [1.53 GB /
1.43 GB free]
pvscan -- inactive PV "/dev/md1"  is in EXPORTED VG "vgtest" [1.53 GB /
1.53 GB free]
pvscan -- inactive PV "/dev/md2"  is in EXPORTED VG "vgtest" [1.93 GB /
1.93 GB free]
pvscan -- total: 3 [4.99 GB] / in use: 3 [4.99 GB] / in no VG:  [0]

linux:~ # vgimport vgtest /dev/md0 /dev/md1 /dev/md2
vgimport -- doing automatic backup of volume group "vgtest"
vgimport -- volume group "vgtest" successfully imported and activated

linux:~ # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found inactive volume group "vgtestPV_EXP"
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume
group

linux:~ # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/md0"  is in EXPORTED VG "vgtest" [1.53 GB /
1.43 GB free]
pvscan -- inactive PV "/dev/md1"  is in EXPORTED VG "vgtest" [1.53 GB /
1.53 GB free]
pvscan -- inactive PV "/dev/md2"  is in EXPORTED VG "vgtest" [1.93 GB /
1.93 GB free]
pvscan -- total: 3 [4.99 GB] / in use: 3 [4.99 GB] / in no VG:  [0]

linux:~ # vgdisplay -v
--- Volume group ---
VG Name               vgtest
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                1
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                3
Act PV                3
VG Size               4.98 GB
PE Size               4 MB
Total PE              1275
Alloc PE / Size       25 / 100 MB
Free  PE / Size       1250 / 4.88 GB
VG UUID               p63Nem-1fai-k30T-B9wQ-bD7f-3f5T-bfmyS1

--- Logical volume ---
LV Name                /dev/vgtest/lvtest
VG Name                vgtest
LV Write Access        read/write
LV Status              available
LV #                   1
# open                 0
LV Size                100 MB
Current LE             25
Allocated LE           25
Allocation             next free
Read ahead sectors     1024
Block device           58:0


--- Physical volumes ---
PV Name (#)           /dev/md0 (1)
PV Status             available / allocatable
Total PE / Free PE    391 / 366

PV Name (#)           /dev/md1 (2)
PV Status             available / allocatable
Total PE / Free PE    391 / 391

PV Name (#)           /dev/md2 (3)
PV Status             available / allocatable
Total PE / Free PE    493 / 493

linux:~ # vgchange -a y vgtest
vgchange -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
run vgscan

linux:~ # vgchange -a n vgtest
vgchange -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please
run vgscan

linux:~ # lvscan
lvscan -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please run
vgscan

linux:~ # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found inactive volume group "vgtestPV_EXP"
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- WARNING: This program does not do a VGDA backup of your volume
group

linux:~ # lvscan
lvscan -- ERROR: VGDA in kernel and lvmtab are NOT consistent; please run
vgscan
==================================================================

Thanks in advance.

Thanks & Regards,
Santosh Rokade

http://www.patni.com
World-Wide Partnerships. World-Class Solutions.
_____________________________________________________________________

This e-mail message may contain proprietary, confidential or legally
privileged information for the sole use of the person or entity to
whom this message was originally addressed. Any review, e-transmission
dissemination or other use of or taking of any action in reliance upon
this information by persons or entities other than the intended
recipient is prohibited. If you have received this e-mail in error
kindly delete  this e-mail from your records. If it appears that this
mail has been forwarded to you without proper authority, please notify
us immediately at netadmin patni com and delete this mail. 
_____________________________________________________________________

Attachment: clip_image002.jpg
Description: image/pjpeg

