[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [linux-lvm] lvm upgrade problems.



You probably need to start the cluster infrastructure: ccsd, cman, fenced, and clvmd.
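On RHEL4 the usual startup order is roughly the following (init-script names assumed from the Red Hat Cluster Suite packages; this is a sketch, not something you can run outside a configured cluster node):

```
# Order matters: configuration daemon first, then membership, then fencing, then clvmd.
service ccsd start     # cluster configuration system daemon
service cman start     # cluster membership manager
service fenced start   # fence daemon
service clvmd start    # clustered LVM daemon -- the local socket LVM2 tries to connect to
```

Once clvmd is up, the connect() errors from the LVM2 tools should stop.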

This is probably not a good idea, but you can also turn off LVM2 cluster locking with:

lvmconf --disable-cluster

You can turn it back on with:

lvmconf --enable-cluster --lockinglibdir /usr/lib --lockinglib liblvm2clusterlock.so

The lvmconf command edits /etc/lvm/lvm.conf.
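For reference, these are the settings lvmconf toggles in the "global" section. The sketch below writes a sample copy to a scratch file so the real /etc/lvm/lvm.conf is left alone; the values shown are what --enable-cluster sets (locking_type 1 means local file-based locking, 3 means clustered locking through the locking library):

```shell
# Sketch only: a sample of the lvm.conf "global" section that lvmconf edits,
# written to a scratch file so the real /etc/lvm/lvm.conf is untouched.
cat <<'EOF' > sample-lvm.conf
global {
    locking_type = 3
    locking_library = "liblvm2clusterlock.so"
}
EOF
# Extract the effective locking mode (on a real box, read /etc/lvm/lvm.conf instead):
locking_type=$(awk '/locking_type/ {print $NF}' sample-lvm.conf)
echo "$locking_type"
rm -f sample-lvm.conf
```

With locking_type = 3 and no clvmd running, every LVM2 command will fail to reach the cluster lock manager, which is exactly the symptom below.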

The 'connect() failed on local socket: Connection refused' errors come from LVM2 commands when lvm.conf is configured for cluster locking but clvmd is not running.

I am not sure why the device name does not show up. After turning off locking, you could try a 'vgchange -ay' and hopefully it will reappear.
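With clvmd running (or cluster locking disabled), reactivation would look something like this; the VG and LV names are taken from the post below, and this obviously can't be run anywhere but on that box:

```
# Rescan and activate all logical volumes in the volume group:
vgscan
vgchange -ay diskarray
# If activation succeeds, the device node should come back:
ls -l /dev/diskarray/lv1
```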

Matt

On Tue, 2007-05-22 at 20:51 -0400, jason monsterjam org wrote:
Hey list, I'm running:
[root tf2 ~]# uname -a
Linux tf2.localdomain 2.6.9-55.ELsmp #1 SMP Fri Apr 20 17:03:35 EDT 2007 i686 i686 i386 GNU/Linux
[root tf2 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux AS release 4 (Nahant Update 5)
[root tf2 ~]# 

and I have an LVM volume created on a GFS-formatted drive that I can't see anymore.

[root tf2 ~]# vgscan
  connect() failed on local socket: Connection refused
  connect() failed on local socket: Connection refused
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  Reading all physical volumes.  This may take a while...
  Skipping clustered volume group diskarray
[root tf2 ~]# pvscan 
  connect() failed on local socket: Connection refused
  connect() failed on local socket: Connection refused
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  PV /dev/sdb1   VG diskarray   lvm2 [136.48 GB / 6.48 GB free]
  Total: 1 [136.48 GB] / in use: 1 [136.48 GB] / in no VG: 0 [0   ]
[root tf2 ~]#

What's more, the device name used to be
/dev/diskarray/lv1
but now all I see is:

[root tf2 ~]# ls -al /dev/disk/by-path/*
lrwxrwxrwx  1 root root  9 May 22 15:34 /dev/disk/by-path/pci-0000:00:1f.1-ide-0:0 -> ../../hda
lrwxrwxrwx  1 root root  9 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part1 -> ../../sda1
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part2 -> ../../sda2
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part3 -> ../../sda3
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part4 -> ../../sda4
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part5 -> ../../sda5
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part6 -> ../../sda6
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part7 -> ../../sda7
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part8 -> ../../sda8
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:02:0e.0-scsi-0:1:0:0-part9 -> ../../sda9
lrwxrwxrwx  1 root root  9 May 22 15:34 /dev/disk/by-path/pci-0000:03:0b.0-scsi-0:2:0:0 -> ../../sdb
lrwxrwxrwx  1 root root 10 May 22 15:34 /dev/disk/by-path/pci-0000:03:0b.0-scsi-0:2:0:0-part1 -> ../../sdb1
[root tf2 ~]#


/dev/sdb1 is my disk array.

Any ideas?

Jason


_______________________________________________
linux-lvm mailing list
linux-lvm redhat com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
