Fedora SMP dual core, dual AMD 64 processor system

Bill Broadley bill at cse.ucdavis.edu
Wed Aug 17 00:39:53 UTC 2005


> The nForce4 chipset, like in the new crop of Socket-939
> solutions, is clearly desktop/workstation.  The nForce Pro
> 2200 and the optional 2050 (2200+2050) are more
> workstation/server oriented, and are found even in the single
> Socket-940 Foxconn mainboard I posted.  But even then, all

Hrm, I've heard the nForce4 and the 2200 are the same silicon, just a
few tweaks in the packaging.  Much like the Opteron/Athlon 64 difference.

> versions of the nForce series lack PCI-X, which is a problem
> for servers right now.

Why?

> Because if you want server I/O, you want PCI-X right now.
> There are very few (if any?) mainboards with a single
> Socket-940 that have an AMD8131/8132 IC for dual-channel
> PCI-X 1.0/2.0.  And even some dual-Socket-940 mainboards
> lack one.

Correct, although how many servers really need more than 1GB/sec?
That's quite a bit of bandwidth even with, say, 16 drives hooked up.
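
Some back-of-the-envelope math, with my own assumptions (roughly
60MB/sec sustained per drive, PCI-X 1.0 at 64-bit/133MHz, PCIe 1.0 at
250MB/sec per lane per direction):

# Back-of-the-envelope bandwidth sketch.  Assumptions are mine, not
# a benchmark: ~60 MB/s sustained per drive, PCI-X 1.0 at 64 bits /
# 133 MHz, PCIe 1.0 at 250 MB/s per lane per direction.
drives = 16
per_drive = 60                  # MB/s sustained, per drive
aggregate = drives * per_drive  # 960 MB/s total from the disks

pcix_133 = 64 // 8 * 133        # 1064 MB/s, one shared bus
pcie_x8 = 8 * 250               # 2000 MB/s, per direction, per slot

print("16 drives :", aggregate, "MB/s")
print("PCI-X 133 :", pcix_133, "MB/s")
print("PCIe x8   :", pcie_x8, "MB/s")

In other words even 16 drives running flat out sit right around the
1GB/sec mark, so the bus is rarely the limiting factor.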

> Although PCIe is definitely good for storage and other I/O as
> well as video, the only "intelligent" RAID storage controller
> I know of for PCIe is the LSI Logic MegaRaid 320-2E
> (2-channel U320, PCIe x8 card).  It's actually using the
> IOP332, which is a "hack" of the IOP331 with a PCI-X to PCIe
> bridge (not ideal).

Sure, using a bridge is not ideal: some extra transistors and probably
a few hundred nanoseconds of added delay.  But other than aesthetics,
why should anyone care?

Intel makes reliable RAID I/O processors, and many a RAID card uses
them to provide what the market wants.

It seems rather strange to advocate PCI-X just because a popular PCIe
solution uses a bridge.  The migration is happening: there are both
AGP video cards with a PCIe bridge and PCIe video cards with an AGP
bridge... who cares?  Even with video cards, which are much more
sensitive to bandwidth and latency, there is no practical difference.

Keep in mind that the IOP has an internal bridge, from what I can
tell, while the video cards I'm familiar with use an external bridge.

> Now there are some PCIe cards "in the works."  A new series
> of RAID cards should show up using the Broadcom BCM8603 soon.
> It's an 8-channel SAS (8m, 300MBps Serial Attached SCSI,
> also naturally capable of 1m, 300MBps SATA-IO**) hardware
> RAID controller that can arbitrate _directly_ to either PCI-X
> or PCIe x8 (and can even bridge between the two for more
> embedded solutions), with up to 768MB of DRAM.  It's not like
> Broadcom's current "software" driver RAIDCore PCI-X cards;
> it's a true, intelligent IC for $60 in quantity (meaning
> boards should be ~$300+).  And its universal SAS/SATA and
> PCI-X/PCIe support makes it a "universal solution" for all
> to use.

Personally I'd rather have JBOD; I've yet to see a RAID controller
faster than software RAID.  I also like the standardized interface:
with a few dozen servers I don't have to track the functionality
I want across different command-line tools, serial ports, web
interfaces, front panels, and even custom windowed interfaces.

Not to mention the biggie: what happens if the RAID card dies?  Being
able to migrate the array to a random collection of hardware can be
quite useful in an emergency.
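
For example, here's a quick sketch (mine, assuming Linux md software
RAID and the stock /proc/mdstat format) of the kind of check that
works identically on every box, whatever controller and disks sit
underneath:

#!/usr/bin/env python3
# Sketch: report degraded Linux software-RAID (md) arrays by parsing
# /proc/mdstat -- the same interface on every server, regardless of
# the hardware underneath.  A degraded array shows an underscore in
# its [UU_U]-style status field.
import re
import sys

def degraded_arrays(path="/proc/mdstat"):
    text = open(path).read()
    bad = []
    # Each array stanza looks roughly like:
    #   md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
    #         1465147648 blocks level 5, 64k chunk [3/3] [UUU]
    for name, status in re.findall(r"^(md\d+)\s*:.*?\[([U_]+)\]",
                                   text, re.M | re.S):
        if "_" in status:
            bad.append(name)
    return bad

if __name__ == "__main__":
    broken = degraded_arrays()
    if broken:
        print("degraded:", ", ".join(broken))
        sys.exit(1)
    print("all md arrays healthy")

With a hardware RAID card that same check turns into a different
vendor tool on every box.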

> So the question is: what I/O do you need now?  The Foxconn
> can definitely handle a lot of I/O, but it's only PCIe.  That's
> good for getting new PCIe x4 server NICs, but the PCIe x8
> storage NICs are virtually non-existent right now.  I'm

I bought one; many vendors sell them.  What's the big deal?  MegaRAID,
Tekram, LSI Logic, Promise, even straight from Intel if you want.
Of course many more are coming from the likes of ICP vortex, Adaptec,
and just about anyone who wants to relabel and market an adapter in
this space.

> hoping that changes soon with the BCM8603 IC being adopted,
> but I haven't heard a thing yet.
> 
> Which means that PCI-X is probably your best bet for servers.

Why?  PCI-X is slower: higher latency, lower scaling, and lower
bandwidth.  Not to mention that chipset quality has gone up.  That is,
even the same chip (PCI-X or PCIe, with or without a bridge) tends to
run faster and at lower latency under PCIe than under PCI-X.  From
what I can tell it's mostly the quality of the new nVidia chipset that
makes the difference.  Interconnect vendors (again, more sensitive to
latency and bandwidth) are quite excited to be posting new numbers
with the nVidia chipset.

Not that it matters much for a collection of 8-16 disks doing
30-60MB/sec each on a server.

-- 
Bill Broadley
Computational Science and Engineering
UC Davis



