Re: [Linux-cluster] corosync issue with two interface directives

Ben, I'm afraid you're missing the distinction between internal
cluster communications (the "interface" definitions in corosync.conf)
and the clients' communications with networked cluster services.

On Mon, Feb 6, 2012 at 5:34 PM, Ben Shepherd <bshepherd voxeo com> wrote:
> Basically traffic of both types comes in from BOTH networks.
> We send the traffic to the VIP's on each network.
> These VIPS will be held by the Active server.
> Traffic will go to Server 1 on both Network1 and Network2.

When you say Network1 and Network2, does that mean two network
interfaces connected to two distinct subnets?

> If we lose either the interface to Network1 or the interface to Network2
> we need to fail over the VIP's to the other server.

That's what connectivity monitoring is for, and that is a job for the
cluster resource manager, not for Corosync. Pacemaker will manage it;
the ocf:pacemaker:ping resource agent was designed for exactly that
purpose.

> We cannot keep the VIP on the active server if 1 of the networks is not
> working as an entire service will go down.
> Yes I would prefer a single ring with 2 interfaces...that fails over if
> either interfaces reports a problem.

No, you don't: you always want your cluster to communicate over as
many rings as possible, and let your cluster resource manager fail
services over when there is a problem on the upstream network.
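To sketch what that means on the Corosync side: a totem section with two interface directives, one ring per network, using redundant ring protocol. The subnet and multicast addresses below are placeholders standing in for your Network1 and Network2:

```
totem {
        version: 2
        # Redundant ring protocol: keep talking if one ring fails.
        rrp_mode: passive
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastaddr: 239.255.1.1
                mcastport: 5405
        }
        interface {
                ringnumber: 1
                bindnetaddr: 10.0.0.0
                mcastaddr: 239.255.2.1
                mcastport: 5405
        }
}
```

Note this only makes cluster membership traffic survive a ring failure; it does nothing about where your VIPs run. That part belongs to Pacemaker.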

I hope this helps. Try to think of cluster communications and cluster
resource management as two distinct layers in the stack.

