
Re: [Linux-cluster] Load balancing clustered services

Both the RHCS page and your assessment are correct.  Keep in mind that RHCS / GFS provide the host framework for applications to leverage for high availability and/or scalability -- simply installing and running them is not enough.

What you use for hardware, what your application is capable of doing within this environment, and how it is IMPLEMENTED determine whether you achieve high availability, scalability, or both.  At the very least, you will have a better solution in place than running a single monolithic server.

A simple example: if you run a monolithic database instance and simply want fail-over to another node, the resource group manager's policy for that clustered service can do this for you -- without any manual intervention -- by moving its IP and disk resources and restarting the database.  That IS THE BEST availability you can get out of such a design, and it does nothing to increase scalability.
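To make that concrete, a fail-over-only database service in /etc/cluster/cluster.conf might look roughly like the sketch below.  All names, addresses, and paths here are illustrative, not from a real configuration, and exact attributes vary by RHCS release:

```xml
<!-- Sketch only: service, node, and resource names are made up -->
<rm>
  <failoverdomains>
    <failoverdomain name="db_domain" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="db_svc" domain="db_domain" autostart="1" recovery="relocate">
    <ip address="192.168.0.50" monitor_link="1"/>
    <fs name="db_data" device="/dev/vg0/dbdata" mountpoint="/var/lib/db" fstype="ext3"/>
    <script name="db_init" file="/etc/init.d/mydb"/>
  </service>
</rm>
```

When node1 fails, rgmanager relocates the IP, filesystem, and init script to node2 automatically -- but the service is still down for the detection-plus-restart window, which is the availability ceiling of this design.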

But expand this use case by implementing a database that was built for high availability -- such as Caché ECP or Oracle RAC -- and an outage on one node (planned or unplanned) will be managed by the RHCS / GFS architecture to provide near-100% uptime.  You also get scalability as a positive outcome from this same infrastructure AND implementation.

We are using RHCS / GFS to manage a Caché ECP environment.  The production application / database is not yet split into multiple tiers, but it is shadowed for quick fail-over.  Until it is broken up over several servers, we will never achieve ~100% uptime ... there will always be downtime during service transitioning, planned or unplanned.

We are also using RHCS alone to front-end PeopleSoft (BEA WebLogic) for high availability, implemented as an active-active server pair.  True, it also provides scalability, even though a single server can easily handle our load.  But if something happens on one server (runaway processes from a bad script, a bad application, etc.), we can shut it down without interrupting service.
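An active-active pair like this can be expressed as two independent services, each preferring a different node, so both nodes serve traffic and either can absorb the other's service on failure.  A rough sketch (domain names, IPs, and script paths are hypothetical):

```xml
<!-- Sketch: two one-node-preferred services; each node actively serves one VIP -->
<service name="weblogic_a" domain="prefer_node1" autostart="1" recovery="relocate">
  <ip address="192.168.0.61"/>
  <script name="wl_a" file="/etc/init.d/weblogic"/>
</service>
<service name="weblogic_b" domain="prefer_node2" autostart="1" recovery="relocate">
  <ip address="192.168.0.62"/>
  <script name="wl_b" file="/etc/init.d/weblogic"/>
</service>
```

If node1 dies, weblogic_a relocates to node2, which then carries both VIPs until node1 returns.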

Hope this helps.

Robert Hurst, Sr. Caché Administrator
Beth Israel Deaconess Medical Center
1135 Tremont Street, REN-7
Boston, Massachusetts   02120-2140
617-754-8754 ∙ Fax: 617-754-8730 ∙ Cell: 401-787-3154
Any technology distinguishable from magic is insufficiently advanced.

On Sun, 2008-08-17 at 17:03 -0400, Jeff Sturm wrote:
The Red Hat Cluster Suite page says the following:

 "For applications that require maximum uptime, a Red Hat Enterprise
Linux cluster with Red Hat Cluster Suite is the answer. Specifically
designed for Red Hat Enterprise Linux, Red Hat Cluster Suite provides
two distinct types of clustering:

    * Application/Service Failover - Create n-node server clusters for
failover of key applications and services
    * IP Load Balancing - Load balance incoming IP network requests
across a farm of servers"

The implication seems to be that the first type addresses high
availability, and the second scalability.  What is the optimal way to
get both?

Please understand that I am already a user of GFS and LVS.  I'm asking
the question because the two seemingly have nothing in common.  For
example, cman knows about cluster membership and can immediately react
when a node leaves the cluster or is fenced.  On the other hand, LVS
(together with either piranha or ldirectord) keeps a list of real
servers, periodically checking each and removing any found to be unresponsive.

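That polling loop is what an ldirectord stanza configures.  A minimal sketch -- addresses, intervals, and the health-check URL are illustrative, not from a real deployment:

```
# ldirectord.cf sketch: one virtual service, two real servers.
# Each real server is probed every checkinterval seconds; on a failed
# check it is removed from the IPVS table until it responds again.
checktimeout=3
checkinterval=5
virtual=192.168.0.100:80
        real=192.168.0.11:80 gate
        real=192.168.0.12:80 gate
        service=http
        request="/health.html"
        receive="OK"
        scheduler=rr
```

Note that it is this check loop, not cman membership, that decides when a real server leaves the pool -- hence the windows described below.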
It seems like there are a couple of drawbacks to this bifurcated design:

- once cman realizes a node has left the cluster, there is a delay
before ipvs updates its configuration, during which user requests can be
routed to a dead server
- two distinct sets of cluster configurations have to be maintained

Am I misunderstanding something fundamental, or is that the way it is?


Linux-cluster mailing list
Linux-cluster@redhat.com

