MSI K8N Neo2 For Dual Core?

Bryan J. Smith b.j.smith at ieee.org
Wed Aug 24 14:56:40 UTC 2005


On Wed, 2005-08-24 at 10:38 -0400, Mark Hahn wrote:
> OK, so what difference do you measure between doing this (affinity vs not)?

It's basically impossible to measure with the current state of Linux.
Linux has grown up around a single point of memory-I/O interconnect,
so the kernel gives you no real way to place I/O relative to memory.
I'm hopeful some of the Opteron developments I've seen will change
that in the near future.
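
About the closest you can get today is pinning the process with
sched_setaffinity(2) and relying on the kernel's first-touch page
placement.  A minimal sketch, assuming a 2.6 kernel; the CPU number
is just an example, check your box's topology:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      /* Pin ourselves to one CPU so that first-touch
       * allocation lands on that CPU's local node. */
      cpu_set_t mask;
      int cpu = (argc > 1) ? atoi(argv[1]) : 0;

      CPU_ZERO(&mask);
      CPU_SET(cpu, &mask);
      if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
          perror("sched_setaffinity");
          return 1;
      }
      /* ... run the benchmark workload here, once pinned
       * and once not, and compare timings ... */
      printf("pinned to CPU %d\n", cpu);
      return 0;
  }

Run the same workload pinned and unpinned; that at least measures the
process-affinity side, even if I/O affinity is still out of reach.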

> and besides, why is the opteron on the hot path?  you're presumably just 
> dma'ing big chunks of ram to/from disk, so why does 20ns inter-opteron
> latency make any difference?

Again, it seems you don't know the first thing about AMD Opteron (let
alone many RISC implementations) versus Intel Xeon or Itanium.

Your memory-mapped I/O is local to a processor, which can be given the
same affinity as the end-user services.  That's more than just process
affinity; it's I/O affinity.

Now you can truly minimize the duplication of data streams over the
interconnect.
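
To make that concrete, here's a rough sketch with Andi Kleen's
libnuma (link with -lnuma) that keeps both the thread and its I/O
buffer on one node.  Node 0 is just an example; whether your HBA
actually hangs off that node's HT tunnel is something you have to
check against the board layout:

  #include <numa.h>
  #include <stdio.h>
  #include <string.h>

  #define BUF_SIZE (64 * 1024 * 1024)

  int main(void)
  {
      void *buf;

      if (numa_available() < 0) {
          fprintf(stderr, "no NUMA support in this kernel\n");
          return 1;
      }

      /* Run only on the CPUs of node 0 ... */
      if (numa_run_on_node(0) != 0) {
          perror("numa_run_on_node");
          return 1;
      }

      /* ... and allocate the I/O buffer from node 0's local
       * memory, next to the same processor's HT links. */
      buf = numa_alloc_onnode(BUF_SIZE, 0);
      if (buf == NULL) {
          fprintf(stderr, "numa_alloc_onnode failed\n");
          return 1;
      }
      memset(buf, 0, BUF_SIZE);   /* fault the pages in */

      /* ... hand buf to the I/O path here ... */

      numa_free(buf, BUF_SIZE);
      return 0;
  }

With buffer and thread on the same node, a card on that node's bridge
can DMA without the data ever crossing the HyperTransport links.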

> besides, how do you control the numa affinity of the pagecache?

On Opteron the pagecache is effectively _per-CPU_ for I/O, since each
processor has its own memory controller; it doesn't sit behind a
shared chipset the way it does with Intel AGTL+.
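
You can at least watch where pages actually land via the per-node
counters the kernel exports.  A quick sketch, assuming the standard
/sys/devices/system/node layout and two nodes (one per Opteron);
adjust NODES for your box:

  #include <stdio.h>

  #define NODES 2

  int main(void)
  {
      char path[64], line[256];
      int n;

      for (n = 0; n < NODES; n++) {
          FILE *f;
          /* numa_hit/numa_miss counters per node */
          snprintf(path, sizeof(path),
                   "/sys/devices/system/node/node%d/numastat", n);
          f = fopen(path, "r");
          if (f == NULL)
              continue;   /* node not present */
          printf("node %d:\n", n);
          while (fgets(line, sizeof(line), f))
              printf("  %s", line);
          fclose(f);
      }
      return 0;
  }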

> besides, how could it matter, given that your IO is so vastly slower
> than your memory?

It's not, once you start dealing with multiple cards on segmented
PCI-X busses (over 1GBps), let alone new HTX solutions including
InfiniBand.

Now there's a good solution for segmented end-storage and end-systems:
use software RAID on the end-storage, then connect them with HTX
InfiniBand.

-- 
Bryan J. Smith     b.j.smith at ieee.org     http://thebs413.blogspot.com
----------------------------------------------------------------------
The best things in life are NOT free - which is why life is easiest if
you save all the bills until you can share them with the perfect woman



