[Linux-cluster] partitioning of filesystems in cluster nodes

RR ranjtech at gmail.com
Thu Jun 22 14:13:59 UTC 2006


OK, thanks, got it! So I'm assuming the order of configuration would be the
following:

- Configure the iSCSI SAN with whatever volumes I need to make available to
my cluster nodes
- Connect up my GigE NICs to the SAN via isolated network switches
- Install RHEL on all my cluster nodes
- Install the iSCSI initiator on each of my nodes, configure iscsi.conf, and
start the iscsi service so that I can see these volumes on the SAN (rough
commands sketched below)
- Install/configure CSGFS on each of my cluster nodes with whatever fencing
and lock management scheme I settle on (I have the WTI power switches, and
each server has an RSA II card in it, so I'll probably go with the RSAs)
- Start the cluster daemons and GFS service.
- Use gfs_mkfs to create my filesystems on the shared volumes on the iSCSI
SAN
- Mount these filesystems on each of the nodes

 And I'm done?
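
For what it's worth, here is roughly what I'm picturing those iSCSI and GFS
steps looking like on the command line. The portal address, cluster name,
filesystem name, device, and mount point below are all just placeholders, and
I'm going from what I've read about the RHEL4-era initiator and Cluster Suite
service names, so please correct me if any of this is off:

  # /etc/iscsi.conf on each node -- point the initiator at the SAN portal
  DiscoveryAddress=192.168.100.10

  # bring up the initiator and confirm the LUNs show up as SCSI devices
  service iscsi start
  iscsi-ls
  fdisk -l

  # start the cluster stack on each node
  service ccsd start
  service cman start
  service fenced start
  service clvmd start    # only if I layer CLVM on top of the LUNs

  # create the GFS filesystem once, from a single node; "mycluster" has to
  # match the cluster name in cluster.conf, "gfs01" is a made-up filesystem
  # name, and -j 3 gives one journal per node for a three-node cluster
  gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 3 /dev/sdb1

  # then mount it on every node
  mount -t gfs /dev/sdb1 /mnt/shared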

Please could someone verify this order of things? I'm trying to read the
cluster suite manuals, but they're humongous and cover so much ground that it
all sounds WAY more complicated than I would have expected from just common
sense, so I'm trying to boil it down to the basic core steps.
Anything I'm missing? Or any gotchas anyone can think of?

Thank you all esp. Bowie, Greg and Bob for responding so far.

Kind Regards,
RR

-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Bowie Bailey
Sent: Thursday, June 22, 2006 11:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] partitioning of filesystems in cluster nodes

RR wrote:
> Right, it is indeed what I want to do. But now let me understand the
> basics of GFS. GFS actually runs on the SAN but the GFS
> drivers/software that I install on each of my cluster node just
> allows each of these nodes to see these volumes? Something analogous
> to, say, iSCSI initiators on a node to view the LUNs on an iSCSI SAN?
> If that's true, then is it possible for me to, say, have my /opt/local
> installed on the GFS managed filesystem on the SAN such that whatever
> application is installed once in this directory can be accessed by
> all nodes mounting that filesystem? So kind of an install-once,
> use-everywhere deal?

GFS is a filesystem similar to ext3 or xfs.  The main difference is
that it is cluster-aware and can be mounted and accessed by multiple
nodes at the same time without data corruption.

The software/hardware that you install on each node to allow it to see
the volumes could be an iscsi initiator or something else depending on
your physical storage.

You can put /opt/local on a GFS filesystem and use it like you suggest,
as long as all of the programs installed there are self-contained and do
not rely on libraries installed elsewhere that may not be available on
all of the nodes.
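
One quick sanity check, assuming the programs in question are ordinary
dynamically linked binaries, is to run ldd against them and see whether
anything resolves to a path outside the shared filesystem (the binary path
below is just an example):

  # list the shared libraries this binary pulls in; anything that resolves
  # to /lib, /usr/lib, etc. has to be installed locally on every node
  ldd /opt/local/bin/someapp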

Also, let me clarify the LVM/RAID thing.  The only limitation is that
you cannot export two partitions from your shared storage and then put
them into a software RAID array.  This is because the software RAID
subsystem is not cluster-aware, so each node would try to do its own
RAID setup, which would cause data corruption.  If the partitions are
not part of the shared storage, you can do whatever you want with them.
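
To make that concrete, this is the kind of thing you must not do with LUNs
exported from the shared storage (the device names are only illustrative):

  # md has no cluster locking, so every node assembling this array on its
  # own would write to the same blocks and corrupt the data
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

If you need mirroring or striping of the shared storage, it generally has to
be done on the array itself rather than with md on the nodes.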

-- 
Bowie

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster



