GFS - SAN issue

gcasanova at perenety.com gcasanova at perenety.com
Fri May 19 17:47:38 UTC 2006


Hi,

I am using Red Hat ES 4.0 with the Cluster Suite package and GFS. It appears 
that I am constantly connecting to and disconnecting from the SAN storage. I am 
using the SFNet iSCSI driver, version 4:0.1.11-3 (02-May-2006).
To be clearer: each time I start the driver service (sfnet), the connection to 
the SAN is repeatedly opened and closed.
Here is my /var/log/messages:

May 19 10:29:09 database-cluster-node1 iscsid[2515]: Connection to 
Discovery Address 205.219.64.1 closed
May 19 10:29:09 database-cluster-node1 iscsid[2515]: Connected to 
Discovery Address 205.219.64.1
May 19 10:29:39 database-cluster-node1 iscsid[2515]: Connection to 
Discovery Address 205.219.64.1 closed
May 19 10:29:39 database-cluster-node1 iscsid[2515]: Connected to 
Discovery Address 205.219.64.1
May 19 10:30:09 database-cluster-node1 iscsid[2515]: Connection to 
Discovery Address 205.219.64.1 closed
May 19 10:30:09 database-cluster-node1 iscsid[2515]: Connected to 
Discovery Address 205.219.64.1
May 19 10:30:39 database-cluster-node1 iscsid[2515]: Connection to 
Discovery Address 205.219.64.1 closed
May 19 10:30:39 database-cluster-node1 iscsid[2515]: Connected to 
Discovery Address 205.219.64.1


So every 30 seconds the connection is closed and then immediately re-established. 
I configured the connection to the SAN to be continuous because, if I am ever 
disconnected from the SAN, I would like to be reconnected automatically.
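
For what it is worth, this is roughly how I check whether only the discovery 
session is cycling or the target sessions as well (iscsi-ls ships with the 
sfnet initiator package, so this assumes that tool is installed):

# list the iSCSI sessions the driver has actually established
iscsi-ls
# same, with LUN/device details
iscsi-ls -l
# recent connect/close messages from the daemon
grep iscsid /var/log/messages | tail -20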

I thought the behaviour was coming from a configuration file, basically this 
one:

/etc/iscsi.conf:
# Timeout options
# Configuration Options

LoginTimeout=25
ActiveTimeout=25
IdleTimeout=180
PingTimeout=15
HeaderDigest=prefer-off
DataDigest=prefer-off

# We want to automate the connection
# If a Connection to the SAN is broken we want to reestablish
# automatically a new connection 
Continuous=yes

DiscoveryAddress=205.219.64.1
        
OutgoingUsername=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxperenety.com
        
OutgoingPassword=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
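
After each change to /etc/iscsi.conf I restart the initiator and watch the log 
again. Roughly (assuming the init script is called iscsi, as with the sfnet 
package on RHEL 4):

# re-read /etc/iscsi.conf and log in to the targets again
service iscsi restart
# watch whether the 30-second connect/close cycle comes back
tail -f /var/log/messages | grep iscsid
# confirm which targets/sessions are actually up
iscsi-ls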


I do not think the problem comes from the cluster setup, but let me describe it 
anyway. I have two servers, both running the cluster software. One of them owns 
the service (Server A) and the other one is a cluster member (Server B). When 
Server B is dead, Server A acknowledges its dead status. The problem is that 
Server B does not take over the service that is supposed to be shared (GFS); 
only one of Server A and B is ever able to see the service running in the 
cluster configuration.
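
In case it helps, this is roughly what I look at on the cluster side (clustat 
and clusvcadm come with the Cluster Suite rgmanager package; "gfs_service" and 
"database-cluster-node2" below are only placeholders for my real service and 
node names):

# member states and which node currently owns the service
clustat
# low-level view of cluster membership
cman_tool nodes
# try to relocate the (placeholder) service to the other node by hand
clusvcadm -r gfs_service -m database-cluster-node2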

Thanks.




