
[Linux-cluster] Cmnd failed-retry the same path



We have a two-node cluster with QLogic SAN-attached storage and a GFS filesystem.
When the system boots we get the following messages in dmesg:
qla2300 0000:03:0b.0:
 QLogic Fibre Channel HBA Driver: 8.01.02-d4
  QLogic QLA2340 -
  ISP2312: PCI-X (133 MHz) @ 0000:03:0b.0 hdma+, host#=1, fw=3.03.18 IPX
  Vendor: IBM       Model: 1815      FAStT   Rev: 0914
  Type:   Direct-Access                      ANSI SCSI revision: 03
qla2300 0000:03:0b.0: scsi(1:0:0:1): Enabled tagged queuing, queue depth 32.
  Vendor: IBM       Model: 1815      FAStT   Rev: 0914
  Type:   Direct-Access                      ANSI SCSI revision: 03
qla2300 0000:03:0b.0: scsi(1:0:0:2): Enabled tagged queuing, queue depth 32.
  Vendor: IBM       Model: 1815      FAStT   Rev: 0914
  Type:   Direct-Access                      ANSI SCSI revision: 03
qla2300 0000:03:0b.0: scsi(1:0:0:3): Enabled tagged queuing, queue depth 32.
  Vendor: IBM       Model: 1815      FAStT   Rev: 0914
  Type:   Direct-Access                      ANSI SCSI revision: 03
qla2300 0000:03:0b.0: scsi(1:0:0:4): Enabled tagged queuing, queue depth 32.
scsi2 : mpp virtual bus adaptor :version:09.01.B5.30,timestamp:Tue Apr 18 08:34:11 CDT 2006
  Vendor: IBM       Model: VirtualDisk       Rev: 0914
  Type:   Direct-Access                      ANSI SCSI revision: 03
scsi(2:0:0:0): Enabled tagged queuing, queue depth 30.
  Vendor: IBM       Model: VirtualDisk       Rev: 0914
  Type:   Direct-Access                      ANSI SCSI revision: 03
scsi(2:0:0:1): Enabled tagged queuing, queue depth 30.
SCSI device sdb: 104857600 512-byte hdwr sectors (53687 MB)
SCSI device sdb: drive cache: write back
SCSI device sdb: 104857600 512-byte hdwr sectors (53687 MB)
SCSI device sdb: drive cache: write back
 sdb:<4>493 [ RAIDarray.mpp]DS4800_AM:1:0:1 Cmnd failed-retry the same path. vcmnd SN 680 pdev H1:C0:T0:L1 0x06/0x8b/0x02 0x08000002 mpp_status:1
 
We see that last line in the messages log whenever both nodes read from the GFS filesystem at the same time, and read performance is very low at that moment.
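 
For reference, this is roughly how we look at the path state when the message appears (a minimal sketch; the /proc/mpp entries assume the stock linuxrdac/mpp build exposes them, so adjust for your setup):

  # List the physical LUNs and the mpp VirtualDisk devices
  cat /proc/scsi/scsi

  # If the RDAC driver exposes it, the mpp proc tree shows per-array path state
  ls /proc/mpp/

  # Watch for further retry messages while both nodes read from GFS
  dmesg | grep -i "Cmnd failed"
  tail -f /var/log/messages | grep -i mpp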
 
We are using linuxrdac-09.01.B5.30
 
Does anyone have a solution?
 
Regards
Mels
 
