
[Linux-cluster] Using GFS on a hybrid system



We have a hybrid RHEL system consisting of 8 servers.

First of all, let me draw a picture of the system:

Due to a binary driver problem (IBM!), we had to install RHEL ES 4 U1 on 4 of the servers (let's call them S1, S2, S3 and S4). The other 4 run RHEL AS 4 U2 (S5, S6, S7, S8). The ES U1 servers have GFS 6.0 and the AS U2 servers have GFS 6.1. All of them are connected to a SAN.

2 of the AS U2 servers are using a separate partition on the SAN, and I had no problems clustering and mounting those.

S3 and S4 will work as a cluster. S1, S2, S5 and S6 are standalone servers.

S1, S2, S5 and S6 need shared access to LVM#1.
S1, S2, S3 and S4 need shared access to another partition on the SAN.
S1, S2, S5 and S6 need shared access to LVM#2.

The problem arose when we wanted to share LVM#1. We mkfs'ed LVM#1 with GFS 6.1 from S6. Mounting it from S5 and S6 works fine. But as soon as we try to access the data from S1 and S2, S5 and S6 oops and we have to reboot those servers, even if we mount with -o oopses_ok.

Now the questions:

* What should the cluster.conf files look like for S1...S6? Should they all share the same cluster name?
* Is mixing GFS 6.0 and 6.1 dangerous? I have to use 6.0 on the ES U1 servers. Should I roll back the U2 systems to RHEL AS 4 U1?
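For reference, here is a minimal sketch of the kind of cluster.conf I have in mind for the GFS 6.1 (cman-based) side. The cluster name, node names and the use of fence_manual are placeholders, not our real configuration; the 6.0 servers use the older CCS-style configuration, which is part of what I'm unsure about:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: names and fencing are placeholders -->
<cluster name="sancluster" config_version="1">
  <clusternodes>
    <clusternode name="s5" votes="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="s5"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="s6" votes="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="s6"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- fence_manual used only as an example; real fencing hardware differs -->
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
```

Would S1 and S2 (GFS 6.0) need to appear in this file at all, or do they stay entirely in their own 6.0-style configuration?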

I wanted to ask the list before getting help from Red Hat, so that Google can index the answer and it can help other people who may need it.

Any help/comment is appreciated.

--
Kivi Bilişim Teknolojileri         -          http://www.kivi.com.tr
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr

