
Looking for advice on a gigabit networking problem

This may not be the right forum to ask this question, but I am hoping some Fedora buffs will know the right place for me to go with this.

I have a network built around some Cisco switches: a 3550-12T and a pair of 2950-48s connected to it via gigabit links. I have 7 server boxes connected to the 3550 via Intel E1000 cards. These servers also have 100 Mbps links to the 2950s via on-board e100 NICs.

I have the three NICs on each server set up under a bonding driver in active-backup mode, with the gigabit NIC as the active master and the 100 Mbps NICs as passive backups. The bond carries a native (untagged) VLAN and a tagged VLAN on two separate networks. Everything functions as expected, with failover on link failure being 100% transparent ( 8-) ), EXCEPT that the bandwidth available down the gigabit links server-to-server is at best 288 Mbps!

The machines are Intel SRMK4 boxes running dual 1 GHz Xeons, vintage 2000/2001, with 4 GB of memory and the latest stock Fedora 9 PAE kernels.
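For reference, here is a minimal sketch of the kind of active-backup bond described above, done by hand rather than via Fedora's ifcfg files. The interface names (eth0 for the e1000, eth1/eth2 for the e100s) and the address are illustrative assumptions, not taken from my actual configs:

```shell
# Hypothetical sketch only -- interface names and the 192.168.1.10
# address are illustrative, not the real configuration.

# Load the bonding driver in active-backup mode, polling link state
# every 100 ms, with the gigabit NIC (assumed eth0) as preferred primary.
modprobe bonding mode=active-backup miimon=100 primary=eth0

# Bring up the bond and enslave the gigabit NIC and the two e100s.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2

# Confirm which slave is currently active.
cat /proc/net/bonding/bond0
```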

I have set up iperf tests and thumped as hard as I can on the E1000 tuning parameters, and this is the best I can achieve. What I want to know is: should I be able to do better with this setup? I am wondering whether I am hitting limits on the PCI bus, limits on the switching capacity of the 3550, or whether the E1000s simply can't deliver full bandwidth without jumbo frames (which are not available, since the bonding driver prevents them).
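On the PCI-bus suspicion: a quick back-of-envelope calculation, assuming the E1000 sits in a plain 32-bit/33 MHz PCI slot (rather than a 64-bit/66 MHz or PCI-X slot, which boards of that vintage may also have), shows why a figure around 300 Mbps would not be surprising:

```shell
# Theoretical peak of a 32-bit/33 MHz PCI bus, in Mbit/s.  This is
# shared by every device on the bus, and by TX and RX together.
echo "32-bit/33 MHz PCI: $(( 32 * 33 )) Mbit/s theoretical peak"

# Real-world PCI efficiency after arbitration and transaction overhead
# is typically well under the theoretical peak, so a gigabit NIC on a
# shared 32-bit bus can plausibly bottleneck in the ~300-400 Mbit/s
# range.  A 64-bit/66 MHz slot raises the ceiling fourfold:
echo "64-bit/66 MHz PCI: $(( 64 * 66 )) Mbit/s theoretical peak"
```

If the card is in a 32-bit slot and a 64-bit slot is free, moving it would be a cheap experiment.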

Does anybody have any ideas, or pointers to likely solutions?

Regards, Howard
