
RE: [Linux-cluster] Cluster Networks

> -----Original Message-----
> From: linux-cluster-bounces@redhat.com 
> [mailto:linux-cluster-bounces@redhat.com] On Behalf Of Paul Dugas
> Sent: Monday, March 30, 2009 7:06 AM
> To: Linux-Cluster Mailing List
> Subject: [Linux-cluster] Cluster Networks
> I've a few machines sharing a couple of GFS/LVM volumes that are 
> physically on an AoE device.  Each machine has two network 
> interfaces: LAN and the AoE SAN.  I don't have IP addresses 
> on the SAN interfaces, so the cluster is communicating via the LAN.
> Is this ideal, or should I configure them to use the SAN 
> interfaces instead?  

It depends.  Is it your wish to maximize throughput or availability?

One consideration is MTU.  Given Linux's standard 4 KB block size, AoE
initiators benefit from jumbo frames, since a complete block can be
delivered in a single frame.  Packets from openais/lock_dlm, on the
other hand, are generally quite small and fit within a standard MTU
without fragmenting.
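As a minimal sketch, assuming the AoE SAN port is eth1 (adjust the name
for your hardware), enabling jumbo frames looks like this -- and note the
switch ports must be configured for jumbo frames too, or large frames
get dropped:

```shell
# Raise the MTU on the (assumed) SAN interface to a common jumbo size.
# 9000 bytes comfortably fits a 4 KB block plus AoE/Ethernet headers.
ip link set dev eth1 mtu 9000

# Confirm the new MTU took effect.
ip link show dev eth1
```

To make the setting persist across reboots, put MTU=9000 in the
interface's ifcfg file (on Red Hat-style distributions).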

If you are able to run jumbo frames on all your network interfaces, AoE
can use any interface and benefit from the extra throughput.  If,
however, your switch ports are not configured for jumbo frames, you may
be better off keeping separate interfaces for the two, unless the
additional throughput isn't important to you.

For maximum uptime, you can multipath AoE over two interfaces, so that
if a single interface fails, traffic resumes on the other.  Multipath
isn't available for openais (I believe it is implemented but not
supported), but you can run a bonded Ethernet interface to achieve
similar results.  An active/passive bonded pair connected to two
separate switches protects you from the failure of a single switch,
cable, or interface, which is very nice for a cluster, because you can
design the network with no single point of failure (depending also on
your power configuration).
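A sketch of what the active/passive bond might look like on a RHEL-style
system -- device names, the IP address, and file paths here are
assumptions for illustration, not taken from the poster's setup:

```shell
# Load the bonding driver in active-backup mode with link monitoring
# every 100 ms (add to /etc/modprobe.conf or /etc/modprobe.d/):
#   alias bond0 bonding
#   options bond0 mode=active-backup miimon=100

# The bond device carries the cluster IP (address is an example):
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
EOF

# Each slave NIC (cabled to a different switch) enslaves itself to bond0:
for dev in eth0 eth1; do
cat > /etc/sysconfig/network-scripts/ifcfg-$dev <<EOF
DEVICE=$dev
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
EOF
done
```

With mode=active-backup only one slave carries traffic at a time, so it
works even when the two switches aren't aware of each other; failover is
driven by the miimon link check.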

If you can run both the SAN and LAN on jumbo frames, and multipath AoE,
you can get very nice throughput.  With the latest AoE driver, an
updated e1000 driver, and some network tuning, we can sustain 190 MB/s
AoE transfers on our test network.
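The "network tuning" referred to above usually means raising the
kernel's socket buffer and backlog limits; the specific values below are
illustrative examples only, not the exact settings used on our test
network -- benchmark before adopting them:

```shell
# Allow larger receive/send socket buffers (16 MB max here, as an example).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# Let the kernel queue more incoming frames per NIC before dropping,
# which matters at jumbo-frame gigabit rates.
sysctl -w net.core.netdev_max_backlog=30000
```

Persist anything that helps by adding the same keys to /etc/sysctl.conf.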

