
Re: [Linux-cluster] Low cost storage for clusters



I like the HP ProCurve 2824/2848 as full-bisection-bandwidth managed switches, with processors strong enough to hold up even with jumbo frames.

I have not tested jumbo frames on the 48-port Netgear switches; we typically use those when the customer uses InfiniBand as the real interconnect (that is, in HPC clusters). The general word is that a low-end switch that can push full line rate at normal packet sizes will still drop in throughput when using jumbo frames.

Does GFS recommend using jumbo frames? I know the NFS people would.
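For anyone wanting to try it, jumbo frames on the Linux side are just an MTU setting; the sketch below assumes an interface called eth0, a 9000-byte MTU, and a peer at 192.168.1.10 (all made up for illustration), and the switch must pass jumbo frames end to end:

```shell
# Rough sketch of enabling jumbo frames on a Linux host (needs root).
# "eth0" and MTU 9000 are assumptions; every device in the path,
# including the switch, must be configured for the larger MTU.
ip link set dev eth0 mtu 9000

# Verify that full-size frames actually pass without fragmentation:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 192.168.1.10
```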

Michael.

HAWKER, Dan wrote:
On the subject of Ethernet switches, are they all made equal? Obviously I know that some are managed, but what are you getting when you pay large amounts of money for fairly ordinary-looking switches?


All switches are born equal, it's just that some are more equal than
others...  :)

I have used a large variety of vendors for a similarly large variety of
applications/environments, and yes, in some areas you get what you pay
for; in other areas, however, some are just taking the piss.

As usual it depends upon your application and how much you value your
data/uptime/non-work-time/sanity. The smaller, traditionally more
consumer-orientated vendors are starting to add features that even a year or
so ago were strictly the preserve of high-end Cisco/Extreme hardware. We have
a few managed Netgear gigabit switches with fibre, but they have nothing
like the manageability of our HP and Cisco boxes. It's a basic web-driven
interface that lets you fiddle around with VLANs, turn on jumbo frames,
turn on alerting, and generally configure the switch well enough for
smaller environments to manage their network.
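As an aside, the host side of such a VLAN setup is straightforward on Linux with the 8021q module; the interface name eth0, VLAN ID 10, and address below are made up for illustration:

```shell
# Sketch of a tagged VLAN interface on a Linux host (needs root).
# "eth0", VLAN ID 10, and the address are illustrative; the switch
# port must be configured to carry VLAN 10 tagged.
modprobe 8021q
ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 10.0.10.2/24 dev eth0.10
ip link set dev eth0.10 up
```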

However, it just doesn't have the granularity of configuration of the
higher-end switches. Most of the managed features are global settings,
whereas it would often be useful (or necessary) to apply them at a port
level.

Other than management, you are into the realms of pure performance and
hardware quality. Of course, all vendors have lemons that don't live up to
expectations, but extremely high MTBF and additional hardware-based
failover/error correction do add cost. As does the backplane. A fully
populated 24-port gigabit switch generates an awful lot of data to shuffle
around (theoretically 48 Gbit/sec at full duplex). Most high-end kit (and
some low-end too) has a non-blocking backplane, i.e. the backplane can
handle the theoretical maximum bandwidth without having to drop packets.
Some can't claim that. In an environment like a storage fabric or
mission-critical database access, dropped packets mean poor performance at
best and corrupted data at worst.
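That backplane figure falls out of simple arithmetic (24 gigabit ports, counted in both directions because of full duplex):

```shell
# 24 ports x 1 Gbit/s each, counted both directions (full duplex)
ports=24
gbit_per_port=1
directions=2
echo "$((ports * gbit_per_port * directions)) Gbit/s"   # prints "48 Gbit/s"
```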

Rounding up: no, they're not all created equal; however, in *your*
environment a low-end switch *may* be appropriate, but equally it may not.

Personally, in a storage fabric (we have an iSCSI box here) I'd spend the
cash. I agree with not paying for the Cisco name unless you particularly
need a feature. Personally I really like HP ProCurve kit: a
similar/same/better feature set, but generally cheaper than a similarly
specced Cisco.

HTH

Dan

--

Dan Hawker
Linux Systems Administrator
EADS Astrium



--
Michael Will
Penguin Computing Corp.
Sales Engineer
415-954-2822
415-954-2899 fx
mwill penguincomputing com