[Linux-cluster] help on configuring a shared gfs volume in a load balanced http cluster

Alex linux at vfemail.net
Fri Jul 25 09:13:47 UTC 2008


On Thursday 24 July 2008 15:59, gordan at bobich.net wrote:
> So, shd machines are actually SANs. You will need to use something like
> DRBD if you want shd machines mirrored

Hello Gordan,

I am confused because I haven't done this kind of job before and have no 
experience with this service. I would like to break this task into small 
steps, so that I can understand what to do; my questions are below:

Actually, I just want to have hdb1 from shd1 and hdc1 from shd2 joined in one 
volume. No mirror for this volume at the moment. Is this possible? If yes, 
how? Using ATAoE?
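
From my reading so far, I imagine the export/import part would look roughly 
like this (completely untested; it assumes the vblade AoE target is installed 
on shd1 and shd2, clvmd is running on the importing nodes, and the volume 
group / logical volume names are just made up by me):

  # on shd1: export hdb1 as AoE shelf 0, slot 1 over eth0
  vbladed 0 1 eth0 /dev/hdb1

  # on shd2: export hdc1 as AoE shelf 1, slot 1
  vbladed 1 1 eth0 /dev/hdc1

  # on every machine that should import the disks
  modprobe aoe
  aoe-discover
  aoe-stat                 # should now list e0.1 and e1.1

  # on one cluster node: concatenate both AoE disks into a single
  # clustered volume group and carve one logical volume out of it
  pvcreate /dev/etherd/e0.1 /dev/etherd/e1.1
  vgcreate -c y wwwdata_vg /dev/etherd/e0.1 /dev/etherd/e1.1
  lvcreate -l 100%FREE -n wwwdata_lv wwwdata_vg

Is that the right direction, or am I off track?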

After that, I would like to know how to install GFS on this volume and use it 
as the document root on our real web servers (rs1, rs2, rs3). Is this 
possible? If yes, how?
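
Again only guessing from the docs, but I suppose it would be something like 
this (the cluster name wwwdata matches our cluster.conf; the device names 
continue my made-up example above):

  # on one node: one journal (-j) per node that will mount the filesystem
  gfs_mkfs -p lock_dlm -t wwwdata:gfs01 -j 3 /dev/wwwdata_vg/wwwdata_lv

  # on each of rs1, rs2, rs3
  mount -t gfs /dev/wwwdata_vg/wwwdata_lv /var/www/html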

I don't understand from your explanation how to group the machines. Should 
shd1 and shd2 be in one cluster and rs1, rs2 and rs3 in another cluster? Or 
should shd1 and shd2 be regular servers which just export their hard disks 
using ATAoE, with rs1, rs2 and rs3 grouped in one cluster which imports a GFS 
volume from somewhere? If so, from where? How can I configure a GFS volume on 
ATAoE disks, and from where will it be accessible? Do I need one more machine 
to act as an aggregator for the ATAoE disks, or will our real web servers 
(rs1, rs2, rs3) be responsible for importing these disks directly?
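
My current understanding (please correct me if I am wrong) is that every 
machine that mounts the GFS volume must be a member of one and the same 
cluster, so the node list would look something like the fragment below. I am 
not sure whether shd1 and shd2 belong in it at all if they only export disks:

        <clusternodes>
                <clusternode name="rs1" nodeid="1" votes="1"/>
                <clusternode name="rs2" nodeid="2" votes="1"/>
                <clusternode name="rs3" nodeid="3" votes="1"/>
                <clusternode name="shd1" nodeid="4" votes="1"/>
                <clusternode name="shd2" nodeid="5" votes="1"/>
        </clusternodes>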

> and ATAoE or iSCSI to export the 
> volumes for the rs machines to mount.

In our lab we are using regular hard disks, so iSCSI is excluded.

I read an article (http://www.linuxjournal.com/article/8149) about ATAoE and 
I have some questions:

- on our CentOS 5.2 boxes we already have the aoe kernel module, but we don't 
have the aoe-stat command. Is there a package I should install via yum to get 
this command (or another command to handle AoE disks), or is it necessary to 
download aoetools-26.tar.gz and compile it from source 
(http://sourceforge.net/projects/aoetools/)?
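
If compiling from source is the way to go, I assume it is just the usual 
(aoetools seems to ship a plain Makefile):

  tar xzf aoetools-26.tar.gz
  cd aoetools-26
  make
  make install   # should install aoe-stat, aoe-discover, aoe-interfaces, ...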

- the article above talks about RAID 10, LVM and JFS; it does not teach me 
about GFS and clustering. They chose JFS rather than GFS, saying that "JFS is 
a filesystem that can grow dynamically to large sizes", and put the JFS 
filesystem on a logical volume. I want the same thing, but using GFS; is that 
possible or not?
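
If GFS can grow the same way, I suppose extending it would go roughly like 
this (untested, reusing my made-up names from above; from the man page, 
gfs_grow is run on one node against the mounted filesystem):

  # add a new AoE disk to the clustered VG, extend the LV,
  # then grow the GFS filesystem online
  pvcreate /dev/etherd/e2.1
  vgextend wwwdata_vg /dev/etherd/e2.1
  lvextend -l +100%FREE /dev/wwwdata_vg/wwwdata_lv
  gfs_grow /var/www/html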

They say that:

"using a cluster filesystem such as GFS, it is possible for multiple hosts on 
the Ethernet network to access the same block storage using ATA over 
Ethernet. There's no need for anything like an NFS server"

"But there's a snag. Any time you're using a lot of disks, you're increasing 
the chances that one of the disks will fail. Usually you use RAID to take 
care of this issue by introducing some redundancy. Unfortunately, Linux 
software RAID is not cluster-aware. That means each host on the network 
cannot do RAID 10 using mdadm and have things simply work out."

So, finally, what should I do? Can you or anybody else suggest some howtos, 
and the correct order in which to group the machines and implement clustering?

Regards,
Alx

>
> Then create a shared GFS on the ATAoE/iSCSI device.
> You may, however, find that for web servers (lots of small files, frequent
> access to same files from all nodes) NFS/NAS gives you better performance,
> with shds configured mirrored for fail-over but not load balanced (warm
> standby).
>
> If you need very high performance / low latencies from storage, you may
> want to look into something like seznamfs for replicating content from a
> single master server to multiple slaves (DAS).
>
> Gordan
>
> On Thu, 24 Jul 2008, Alex wrote:
> > Hello cluster experts,
> >
> > I'm new here and new to cluster world too... I need some help, in order
> > to setup a cluster in our organization.
> >
> > Shortly, our schema is:
> >
> > 2 routers for HA and load balancing
> > - ar (active router)
> > - br (backup router)
> >
> > 3 HTTP servers located internally, acting as real web servers (rs1, rs2,
> > rs3), behind the ar and br routers.
> > rs1=192.168.113.3/24
> > rs2=192.168.113.4/24
> > rs3=192.168.113.5/24
> >
> > 2 shared data servers (shd1, shd2)
> > shd1=192.168.113.6/24
> > shd2=192.168.113.7/24
> >
> > 1 server for cluster management (rhclm)
> > rhclm=192.168.113.8/24
> >
> > I've configured the ar and br routers for high availability and load
> > balancing and everything is OK. The active router (ar) forwards HTTP
> > requests from the external VIP (floating IP address) to the internal IP
> > addresses of the rs1, rs2, rs3 webservers.
> >
> > Now, I don't know how to:
> > - configure and group some hard disks on our shd1 and shd2 servers to
> > form a shared volume for our rs1, rs2, rs3 real servers (I suppose the
> > correct topic is a shared volume using GFS...)
> > - make this volume usable as the DOCUMENT ROOT on our rs1, rs2 and
> > rs3 webservers.
> >
> > All our servers are running CentOS 5.2 and have all updates installed.
> >
> > On rhclm (192.168.113.8) I installed Conga and created a cluster with 2
> > nodes: shd1 and shd2.
> >
> > Conga generated the following cluster.conf on the shd1 and shd2 servers:
> >
> > [root at shd1 ~]# cat /etc/cluster/cluster.conf
> > <?xml version="1.0"?>
> > <cluster alias="wwwdata" config_version="2" name="wwwdata">
> >        <fence_daemon clean_start="0" post_fail_delay="0"
> > post_join_delay="3"/>
> >        <clusternodes>
> >                <clusternode name="192.168.113.7" nodeid="1" votes="1">
> >                        <fence/>
> >                </clusternode>
> >                <clusternode name="192.168.113.6" nodeid="2" votes="1">
> >                        <fence/>
> >                </clusternode>
> >        </clusternodes>
> >        <cman expected_votes="1" two_node="1"/>
> >        <fencedevices/>
> >        <rm>
> >                <failoverdomains/>
> >                <resources/>
> >        </rm>
> >        <totem consensus="4800" join="60" token="10000"
> > token_retransmits_before_loss_const="20"/>
> > </cluster>
> >
> > Now, on shd1 I am using hda for the CentOS OS, and I want to make hdb
> > (partitions 1 and 2) available for the shared volume:
> >
> > [root at shd1 ~]# cat /proc/partitions
> > major minor  #blocks  name
> >   3    64   39082680 hdb
> >   3    65   19541056 hdb1
> >   3    66   19541592 hdb2
> > [root at shd1 ~]#
> >
> > On shd2 I have hda for CentOS, and I want hdc (partitions 1 and 2) to be
> > available for the shared volume:
> > [root at shd2 ~]# cat /proc/partitions
> > major minor  #blocks  name
> >  22     0   78150744 hdc
> >  22     1   39075088 hdc1
> >  22     2   39075624 hdc2
> > [root at shd2 ~]#
> >
> > Using Conga, I couldn't find a way to create a volume grouping hdb1 (from
> > shd1) together with hdc1 (from shd2). I want to do this for 2 reasons:
> > - I want that volume to be mounted as the document root on the rs1, rs2,
> > rs3 real webservers
> > - I want that volume to be easy to extend on the fly, by adding new hard
> > disk slices from other new computers.
> >
> > Can anybody tell me how I can do it?
> >
> > I don't know which is correct for this design:
> > - all 5 servers (rs1, rs2, rs3, shd1, shd2) configured as nodes in the
> > same cluster
> > or
> > - rs1, rs2, rs3 as one cluster, with shd1 and shd2 forming another
> > cluster
> >
> > I read section A.2, "Configuring Shared Storage", in this document:
> > http://www.centos.org/docs/5/html/Cluster_Administration/ap-httpd-service-CA.html
> > but it is not what I want.
> >
> > Can anybody help me? A link pointing me in the correct direction or a
> > howto would be appreciated.
> >
> > Regards,
> > Alx



