[Linux-cluster] Colocation of cloned resource instances.

Andrew Beekhof andrew at beekhof.net
Wed Feb 19 23:27:01 UTC 2014


On 20 Feb 2014, at 7:47 am, Vallevand, Mark K <Mark.Vallevand at UNISYS.com> wrote:

> I have a network routing package that needs to be load-balanced across multiple nodes.  This is our proprietary package and I am free to make changes to it to facilitate load-balancing.  I have written a resource agent for the package.  We are currently testing this in a 2-node master/standby configuration.  But, we need load-balancing for the future.
> 
> The package has a RADIUS server part and a network routing part.  It is deployed on nodes that have inward and outward facing NICs.
> 
> The RADIUS part accepts authentication requests from clients on the outward NIC.  When a request is granted, an IP address is assigned and returned to the client, which uses it for all subsequent messages.  There is a 'route via' setup as well, so the message hops to the system running the package.  The message is processed and routed through to the inward-facing NIC.
> 
> Now, let's put an IPaddr2 clone load-balancer in place so that the messages from all the clients can be spread across several nodes.  We'll use source address hashing so that one client's messages always go to the same node.

Any chance you could document this part once you have it working?
We know it's possible, but no one has written up an example of it being used in anger :)
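
For reference, a minimal sketch of the kind of configuration being discussed.  This is not a tested recipe: the resource name, addresses, and interface are invented, and the parameter values are the usual ones for IPaddr2's CLUSTERIP mode with source-address hashing.

```shell
# Hypothetical sketch: a cloned IPaddr2 in CLUSTERIP mode, hashing on
# the client's source address so one client always lands on one node.
crm configure primitive cluster-ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
           clusterip_hash=sourceip \
    op monitor interval=30s

# Clone it across the nodes.  globally-unique=true is required here:
# each instance owns a distinct hash bucket and they are not
# interchangeable.
crm configure clone cluster-ip-clone cluster-ip \
    meta clone-max=2 clone-node-max=2 globally-unique=true
```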

> 
> The RADIUS part is passing out client addresses.  How does it know which node will get the messages from that client?  It needs to know because the routing part of the package needs to initialize with all kinds of client-specific information before it can process and route traffic.  And the package is cloned to several nodes.  One of them must be ready to handle the client traffic.
> 
> I have thought of lots of ways to make this work.  Every one of them requires you to know something about the instance of the IPaddr2 clone.

In the case of an IPaddr2 clone, you need globally-unique=true anyway (every instance corresponds to a particular hash bucket, and the instances are not interchangeable).
Is this the case for the RADIUS clone?  It sounds like it... in which case, what you're proposing is less bad than I perhaps thought :-)

I would still recommend not doing this, but there is support for:

	  <optional>
	    <attribute name="rsc-instance"><data type="integer"/></attribute>
	  </optional>
	  <optional>
	    <attribute name="with-rsc-instance"><data type="integer"/></attribute>
	  </optional>

on the colocation constraint when using the experimental pacemaker-1.1 schema (look for validate-with).
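
For completeness, a sketch of what such a constraint might look like in the raw CIB XML.  The resource names are invented, and this relies on the experimental pacemaker-1.1 schema mentioned above:

```xml
<!-- Hypothetical example: pin instance 0 of ip-clone to instance 0
     of pkg-clone.  Requires validate-with="pacemaker-1.1" and
     globally-unique clones; the *-instance attributes are
     experimental. -->
<rsc_colocation id="col-ip0-with-pkg0"
                score="INFINITY"
                rsc="ip-clone"       rsc-instance="0"
                with-rsc="pkg-clone" with-rsc-instance="0"/>
```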

>  One idea is to have instance 0 of the IPaddr2 clone colocate with instance 0 of the package clone.  (And 1, 2, etc.)  Then the package agent can get the IPaddr2 instance number because it's the same as the package instance number.  The IPaddr2 hash is public and understood.  Now we can tell whether a client IP will hash to a given node.

The problem is the messages arriving on the inward facing NIC, right?
When a request arrives on the outward facing NIC, you can safely process it because it will only arrive if IPaddr2 is serving that bucket on that node.

I wonder if there is some way to test the client IP against the currently configured iptables rules... that should tell you if the message should be accepted without needing to know the instance numbers.
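
One (untested) way to get at that from a resource agent: the CLUSTERIP rule that IPaddr2 installs carries the node's bucket number, which can be scraped from iptables output.  A rough sketch, using a canned rule string in place of a live iptables query:

```shell
# Sketch: discover which CLUSTERIP bucket this node serves by parsing
# the rule IPaddr2 installs.  The rule text below is a canned sample;
# on a real node you would capture it with: rules=$(iptables -S INPUT)
rules='-A INPUT -d 192.168.1.100/32 -i eth0 -j CLUSTERIP --new --hashmode sourceip --clustermac 91:AC:FD:7E:B8:E5 --total-nodes 2 --local-node 1'

# Extract the --local-node value: the bucket this host answers for.
local_node=$(printf '%s\n' "$rules" | sed -n 's/.*--local-node \([0-9][0-9]*\).*/\1/p')
echo "serving bucket $local_node"
```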

> 
> I guess I shouldn't say "every one of them".  There are other ways to do this.  I was just wondering if I could make instances of two clones colocate.  Set globally-unique=true, you say?   I'll check into that if I need to pursue this idea.
> 
> But, looking at the IPaddr2 code, it looks like my package agent can get the iptables CLUSTERIP node number, which means I don't need instance colocation.
> 
> Long explanation, but you asked.  And, I left out lots of details.
> 
> Back to skinning the cat.
> 
> 
> Regards.
> Mark K Vallevand   Mark.Vallevand at Unisys.com
> May you live in interesting times, may you come to the attention of important people and may all your wishes come true.
> THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY MATERIAL and is thus for use only by the intended recipient. If you received this in error, please contact the sender and delete the e-mail and its attachments from all computers.
> 
> 
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Andrew Beekhof
> Sent: Tuesday, February 18, 2014 05:35 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] Colocation of cloned resource instances.
> 
> 
> On 19 Feb 2014, at 1:49 am, Vallevand, Mark K <Mark.Vallevand at UNISYS.com> wrote:
> 
>> So, if I really really want to do it, can I?
>> I'm not being snarky.  I'd like to know if it's possible.
> 
> Fair enough. You possibly can if you set globally-unique=true for both clones.
> But that has other drawbacks.
> 
>> 
>> You are forcing me to think of a different solution to my cluster implementation.  Maybe that's good.
> 
> If you're thinking about giving special meaning to clone numbers, it almost certainly is :)
> If you outline the problem you're trying to solve, perhaps someone here will have a suggestion. 
> 
>> 
>> 
>> Regards.
>> Mark K Vallevand   Mark.Vallevand at Unisys.com
>> 
>> 
>> -----Original Message-----
>> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Andrew Beekhof
>> Sent: Monday, February 17, 2014 05:45 PM
>> To: linux clustering
>> Subject: Re: [Linux-cluster] Colocation of cloned resource instances.
>> 
>> 
>> On 18 Feb 2014, at 8:32 am, Vallevand, Mark K <Mark.Vallevand at UNISYS.com> wrote:
>> 
>>> I have 2 cloned resources.  I want to make sure that instance 0 of each cloned resource is colocated.  (And instance 1, 2, etc.)
>> 
>> Instance numbers are an implementation detail.  You're not supposed to care.
>> 
>>> 
>>> I'd like to do something like this:
>>>     crm configure colocation name INFINITY: a_clone:0 b_clone:0
>>> where a_clone is a clone of resource a, etc.:
>>>     crm configure clone a_clone a meta clone-max=2
>>> Same for b_clone and b.  A and b are primitives:
>>>     crm configure primitive a .
>>> 
>>> Not having much luck.  Advice?
>>> Tried using a_clone:0 and a:0 in the colocation command.
>>> Is this even possible?
>>> 
>>> Regards.
>>> Mark K Vallevand   Mark.Vallevand at Unisys.com
>>> 
>>> -- 
>>> Linux-cluster mailing list
>>> Linux-cluster at redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>> 
>> 
> 
> 
