RHEL 6.1 High Performance Network with MRG Messaging: Throughput & Latency 1-GigE, 10-GigE and QDR InfiniBand


This study measured the throughput and two-hop, fully reliable latency of MRG Messaging V1.3 (using the AMQP 0-10 protocol) with packet sizes from 8 to 32768 bytes, running on Red Hat Enterprise Linux 6.1.

Three physical interconnects were used. A single protocol, TCP, was used with the 1-GigE interconnect. Two protocols were used with each of the 10-GigE and QDR InfiniBand (IB) interconnects. Internet Protocol over InfiniBand (IPoIB) allows users to take partial advantage of IB's latency and throughput while retaining the pervasive Internet Protocol (IP) interfaces. The RDMA protocol allows users to take full advantage of the latency and throughput of the IB interconnect; however, the cost is the loss of the IP application and programming interfaces.
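The latency side of such a benchmark boils down to timing a message round trip over the chosen transport for each payload size. The sketch below is not the MRG Messaging test harness; it is a minimal, self-contained illustration of the technique over plain TCP (the 1-GigE case), using a loopback echo server so it runs anywhere. Host, port, and payload sizes are assumptions for demonstration.

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept one client and echo back everything it sends until it disconnects."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(size, host="127.0.0.1"):
    """Time one TCP round trip for a payload of `size` bytes; returns seconds."""
    listener = socket.socket()
    listener.bind((host, 0))          # ephemeral port for this demo
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

    payload = b"x" * size
    with socket.create_connection((host, port)) as c:
        c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle batching
        start = time.perf_counter()
        c.sendall(payload)
        received = 0
        while received < size:       # read until the full echo is back
            received += len(c.recv(65536))
        rtt = time.perf_counter() - start
    listener.close()
    return rtt

# Sweep a few of the payload sizes used in the study (8 to 32768 bytes).
for size in (8, 1024, 32768):
    print(f"{size:6d} bytes: round trip {measure_rtt(size) * 1e6:8.1f} us")
```

One-way latency is commonly reported as half the measured round-trip time; an RDMA or IPoIB variant would replace the socket calls with verbs- or IP-over-IB-backed transports while keeping the same timing structure.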