
Re: [Linux-cluster] Colocation of cloned resource instances.



I have a network routing package that needs to be load-balanced across multiple nodes.  This is our proprietary package and I am free to make changes to it to facilitate load-balancing.  I have written a resource agent for the package.  We are currently testing this in a 2-node master/standby configuration.  But, we need load-balancing for the future.

The package has a RADIUS server part and a network routing part.  It is deployed on nodes that have inward and outward facing NICs.

The RADIUS part accepts authentication requests from clients on the outward-facing NIC.  When a request is granted, an IP address is assigned and returned to the client, which uses it for all subsequent messages.  There is a 'route via' setup as well, so the messages hop to the system running the package.  Each message is processed and routed through to the inward-facing NIC.

Now, let's put an IPaddr2 clone load-balancer in place so that the messages from all the clients can be spread across several nodes.  We'll use source address hashing so that one client's messages always go to the same node.
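To make that concrete, the load-balancer piece would be configured roughly like this.  This is only a sketch: the IP address, netmask, and resource names are made up, and a cloned IPaddr2 has to be globally-unique to run in its iptables CLUSTERIP mode.

```shell
# Sketch: IPaddr2 clone with source-address hashing (placeholder names/IP).
crm configure primitive lb_ip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 cidr_netmask=24 clusterip_hash=sourceip \
    op monitor interval=30s
crm configure clone lb_ip_clone lb_ip \
    meta clone-max=2 clone-node-max=1 globally-unique=true
```

With clusterip_hash=sourceip, all packets from a given client address hash to the same node, which is the stickiness described above.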

The RADIUS part is passing out client addresses.  How does it know which node will get the messages from that client?  It needs to know because the routing part of the package needs to initialize with all kinds of client-specific information before it can process and route traffic.  And the package is cloned to several nodes.  One of them must be ready to handle the client traffic.

I have thought of lots of ways to make this work.  Every one of them requires knowing something about the instances of the IPaddr2 clone.  One idea is to have instance 0 of the IPaddr2 clone colocate with instance 0 of the package clone.  (And 1, 2, etc.)  Then the package agent can get the IPaddr2 instance number because it's the same as the package instance number.  The IPaddr2 hash is public and understood.  Now we can know whether a client IP will hash to a given node.

I guess I shouldn't say "every one of them".  There are other ways to do this.  I was just wondering if I could make instances of two clones colocate.  Set globally-unique=true, you say?  I'll check into that if I need to pursue this idea.

But, looking at the IPaddr2 code, it looks like my package agent can get the iptables CLUSTERIP node number, which means I don't need instance colocation.
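For anyone curious, the node number is visible outside the agent too.  Untested sketch; 192.0.2.10 stands in for the cloned service address, and these need root on a node where the clone instance is running:

```shell
# The /proc file lists the CLUSTERIP node number(s) this host answers for:
cat /proc/net/ipt_CLUSTERIP/192.0.2.10

# Or read total_nodes / local_node from the rule IPaddr2 installs:
iptables -S INPUT | grep CLUSTERIP
```

An agent could parse either of these instead of relying on its own clone instance number.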

Long explanation, but you asked.  And, I left out lots of details.

Back to skinning the cat.


Regards.
Mark K Vallevand   Mark Vallevand Unisys com
May you live in interesting times, may you come to the attention of important people and may all your wishes come true.
THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY MATERIAL and is thus for use only by the intended recipient. If you received this in error, please contact the sender and delete the e-mail and its attachments from all computers.


-----Original Message-----
From: linux-cluster-bounces redhat com [mailto:linux-cluster-bounces redhat com] On Behalf Of Andrew Beekhof
Sent: Tuesday, February 18, 2014 05:35 PM
To: linux clustering
Subject: Re: [Linux-cluster] Colocation of cloned resource instances.


On 19 Feb 2014, at 1:49 am, Vallevand, Mark K <Mark Vallevand UNISYS com> wrote:

> So, if I really really want to do it, can I?
> I'm not being snarky.  I'd like to know if it's possible.

Fair enough. You possibly can if you set globally-unique=true for both clones.
But that has other drawbacks.

> 
> You are forcing me to think of a different solution to my cluster implementation.  Maybe that's good.

If you're thinking about giving special meaning to clone numbers, it almost certainly is :)
If you outline the problem you're trying to solve, perhaps someone here will have a suggestion. 

> 
> 
> Regards.
> Mark K Vallevand   Mark Vallevand Unisys com
> 
> 
> -----Original Message-----
> From: linux-cluster-bounces redhat com [mailto:linux-cluster-bounces redhat com] On Behalf Of Andrew Beekhof
> Sent: Monday, February 17, 2014 05:45 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] Colocation of cloned resource instances.
> 
> 
> On 18 Feb 2014, at 8:32 am, Vallevand, Mark K <Mark Vallevand UNISYS com> wrote:
> 
>> I have 2 cloned resources.  I want to make sure that instance 0 of each cloned resource are collocated.  (And instance 1, 2, etc.)
> 
> Instance numbers are an implementation detail.  You're not supposed to care.
> 
>> 
>> I'd like to do something like this:
>>                crm configure colocation name INFINITY: a_clone:0 b_clone:0
>> Where a_clone is a clone of resource a, etc:
>> crm configure clone a_clone a meta clone-max=2
>> Same for b_clone and b.
>> A and b are primitives:
>>                crm configure primitive a .
>> 
>> Not having much luck.  Advice?
>> Tried using a_clone:0 and a:0 on the collocation command.
>> Is this even possible?
>> 
>> Regards.
>> Mark K Vallevand   Mark Vallevand Unisys com
>> 
>> -- 
>> Linux-cluster mailing list
>> Linux-cluster redhat com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
> 
> 


