[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Using ext3 on SAN

Manish Kathuria wrote:
I am working on a two node Active-Active Cluster using RHEL 5.2 and
the Red Hat Cluster Suite with each node running different services. A
SAN would be used as a shared storage device. We plan to partition the
SAN in such a manner that only one node will mount a filesystem at any
point of time. If there is a fail over, the partitions will be
unmounted and then mounted on the other node. We want to avoid using
GFS because of the associated performance issues. All the partitions
will be formatted with ext3 and the cluster configuration will ensure
that they are not mounted on more than one node at any given point.

Could there be any chances of data loss or corruption in such a
scenario? Is it a must to use a clustered file system if a partition
is going to be mounted at a single node only at any point of time? I
would be glad if you could share your experiences.

With your setup, from a failover point of view, there is no difference between using ext3 and GFS1/GFS2. Ext3 should work fine as long as it is mounted on only one node at any given time; there will be no corruption (barring unexpected bugs). However, there is a possibility of data loss, regardless of whether you use ext3, GFS1, or GFS2.
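The "mounted on only one node at a time" guarantee is normally enforced by the cluster software itself, not by hand. As a sketch only (resource and service names, the device, and the mountpoint are all hypothetical), an rgmanager filesystem resource in /etc/cluster/cluster.conf might look like:

```xml
<!-- Hypothetical fragment of /etc/cluster/cluster.conf -->
<rm>
  <resources>
    <!-- force_unmount kills processes holding the mount during failover;
         self_fence reboots the node if the unmount still fails -->
    <fs name="appdata" device="/dev/sdb1" mountpoint="/mnt/appdata"
        fstype="ext3" force_unmount="1" self_fence="1"/>
  </resources>
  <service name="appservice" autostart="1">
    <fs ref="appdata"/>
  </service>
</rm>
```

Combined with proper fencing, this keeps the ext3 partition mounted on at most one node, which is the precondition for the "no corruption" statement above.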

All of the filesystems mentioned here are journaling filesystems, which guarantee no metadata corruption after an unclean shutdown (with the help of journal replay). However, none of them can guarantee no data loss: data still sitting in the filesystem cache at the moment of failover can be lost.
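Applications that cannot afford to lose buffered data have to flush it themselves. A minimal sketch (the function name and path are illustrative) using fsync(), which forces cached data out to the storage device before returning:

```python
import os

def durable_write(path, data):
    """Write data and force it onto stable storage before returning,
    so a node crash or failover immediately afterwards cannot lose it."""
    with open(path, "w") as f:
        f.write(data)
        f.flush()              # flush Python's userspace buffer to the kernel
        os.fsync(f.fileno())   # flush the kernel page cache to the device

durable_write("/tmp/example_record.txt", "important record\n")
```

Data written this way survives a failover even on a plain ext3 mount; only data left in the cache without an fsync is at risk.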

You have to explicitly mount the filesystem with the "sync" option (at a significant performance cost) to ensure no data loss. If you mount in data-journaling mode (check "man mount" and look for the explanation of "data=journal"), the likelihood of data loss is lower, but there is still no guarantee.
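For reference, the two options look like this on the command line or in /etc/fstab (the device and mountpoint are hypothetical; substitute your SAN LUN):

```
# Full data journaling: data blocks pass through the journal too
mount -t ext3 -o data=journal /dev/sdb1 /mnt/shared

# Synchronous writes: strongest guarantee, largest performance hit
mount -t ext3 -o sync /dev/sdb1 /mnt/shared

# Equivalent /etc/fstab entry for the data=journal case
/dev/sdb1  /mnt/shared  ext3  data=journal  0 2
```

Note that data=journal roughly doubles the write traffic (every data block is written twice), so benchmark with your actual workload before committing to it.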

Most of the proprietary NAS offerings on the market (e.g. a NetApp filer accessed via NFS) embed NVRAM hardware to avoid this issue.

-- Wendy
