Other things I forgot in the last message:

1 - You don't need a SAN to build an ACTIVE/PASSIVE cluster with data replication between the servers. Check out DRBD 0.7 + Heartbeat, or the RH Cluster Suite; either can do the job without the cost of an external storage device. Good starting points are the links below.

2 - Red Hat is developing a "cluster RAID" approach that should be better than DRBD, because it will make it possible to create a distributed RAID that splits the storage between the servers. I don't know how the development of this "draid" is going, or when it will be declared stable. Can anyone from Red Hat say something about this topic?

Some links for you:

a - Official documentation
http://www.drbd.org/documentation.html

b - DRBD installation
Portuguese: http://guialivre.governoeletronico.gov.br/mediawiki/index.php/DocumentacaoTecnologiasDRBD
English: http://www.linux-ha.org/DRBD/HowTo/Install
http://linux-ha.org/DRBD/QuickStart07

c - DRBD + Heartbeat integration
http://www.slackworks.com/~dkrovich/DRBD/heartbeat.html
http://www.linux-ha.org/GettingStarted/DRBD

Best regards,
Leonardo Rodrigues de Mello

-----Original Message-----
From: Leonardo Rodrigues de Mello on behalf of Leonardo Rodrigues de Mello
Sent: Wed 26/7/2006 09:36
To: linux clustering
Cc:
Subject: RE: [Linux-cluster] GFS Performance advise

GFS is only necessary if you have two or more machines that access (read + write) the filesystem at the same time. GFS creates and manages a global lock on the filesystem, among other things, to make sure the filesystem can be shared among the cluster nodes without corruption.

That said, if you run active/passive you can use ext3 without any problem. You can also use GFS with lock_nolock; if you are having problems with GFS lock_nolock, something is probably misconfigured in your setup. You CANNOT use GFS with lock_nolock the same way you use GFS with DLM or GULM: if you do, you can end up with filesystem corruption.
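To make the distinction concrete, here is a sketch of the mount invocations involved (the device path and mount point are hypothetical examples):

```shell
# Clustered (shared read/write) access: a cluster lock manager is required.
mount -t gfs -o lockproto=lock_dlm /dev/vg0/gfs01 /mnt/shared

# Strictly single-node access: all cluster locking is bypassed.
# Mounting this way from two nodes at once WILL corrupt the filesystem.
mount -t gfs -o lockproto=lock_nolock /dev/vg0/gfs01 /mnt/shared
```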
I don't know whether GFS permits a configuration like that.

Best regards,
Leonardo Rodrigues de Mello

-----Original Message-----
From: linux-cluster-bounces redhat com on behalf of Tomer Okavi
Sent: Wed 26/7/2006 03:17
To: linux-cluster redhat com
Cc:
Subject: [Linux-cluster] GFS Performance advise

I have a Samba file server cluster (active/passive) with 2 cluster nodes on CentOS 4.3. Both nodes are connected to shared storage through a Fibre Channel switch + HBAs; the shared storage holds the filesystem that Samba shares to the Windows machines. Only one cluster node (the active one) mounts the filesystem.

Currently I'm using ext3 as the filesystem on the shared storage. Because I experienced slow response times and locking problems from the Samba service, I tried formatting the shared filesystem with GFS with locking disabled (lock_nolock), and tried mounting the filesystem with lockproto=lock_nolock,localcaching,localflocks, with no success: Samba still complains about oplock breaks, and the Windows systems connecting to the shares see slow performance.

The Samba server exports the filesystem to 3 IIS servers through UNC paths; it deals with lots (1,000,000) of small (under 250 KB) files. When using ext3 as the filesystem for the Samba shares I have no problem.

1. Should I use GFS for the filesystem, to avoid filesystem corruption in case one cluster node crashes, or is ext3 a good enough solution?

2. Why, when using GFS with lockproto=lock_nolock,localcaching,localflocks, do I still see "glock nq calls" and "lm_lock calls" in gfs_tool counters?

My main goal is to achieve maximum Samba performance with the lowest chance of filesystem corruption in case of a failover or a crashed cluster node.

Thanks,
Tom Ok.
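As a footnote to the oplock complaints described above: Samba's oplock behaviour is tunable per share in smb.conf. A minimal sketch follows (the share name and path are hypothetical); kernel oplocks in particular assume a local filesystem, so disabling them is a common recommendation when the backing store is shared or cluster storage:

```ini
[data]
    path = /mnt/shared/data
    read only = no
    ; Kernel oplocks rely on local-filesystem semantics; disable them
    ; when the share sits on shared/cluster storage.
    kernel oplocks = no
    ; If clients still report oplock breaks, disabling oplocks entirely
    ; trades some client-side caching for stability.
    oplocks = no
    level2 oplocks = no
```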