
Re: [K12OSN] Re: Networking a new school for K12LTSP?

Pretty much all of the fiber ones, which is about all I use, are that way.  One good one that I've found works well with Linux is Amer.com's C1000SX.  It is available here:


Another one is the Intel Pro/1000SX card, which is 64-bit/66MHz.  These have been around for a long time, and I have used them in many a GNU/Linux box.  Just do a search on www.pricewatch.com using the terms "intel gigabit fiber"; I find them reasonably priced there.

As for copper cards, those typically are integrated into the motherboard nowadays, even on client box models.  That's a question that the motherboard manufacturer should answer--on which bus, and at what bit width and clock speed, are their integrated 10/100/1000 network interface(s)?  Tyan and MSI both are known to put the copper Gig-E interface either on the PCI-32 bus (lower-end mobos) or, in Tyan's case, also on the PCI-X bus (higher end, co$t$ more).  If it's the latter, you're in good shape.  If it's the former, but you're running your other Gig-E cards on the PCI-X bus, then other than IDE hard disk contention (also on the PCI-32 bus, but not usually constant), you probably will be fine.  Of course, I'd be looking at SATA or SCSI hardware RAID on PCI-X if the budget allows for it.

Do you GNU!?
Microsoft Free since 2003--the ultimate antivirus protection!

Petre Scheie wrote:

Terrell Prudé Jr. wrote:
Robert Arkiletian wrote:
On 1/31/07, Robert Arkiletian <robark gmail com> wrote:
On 1/31/07, Petre Scheie <petre maltzen net> wrote:
Terrell Prudé Jr. wrote:
Robert Arkiletian wrote:
On 1/29/07, Joseph Bishay <joseph bishay gmail com> wrote:

I hope you are doing well.

Thank you all for the comprehensive reply!

Once I started reading your email, I realized that probably the
way to proceed was to work with the idea of NIC bonding or port
trunking.  I have a surplus of gigabit cards, so I could put 3 in a
server (reading online, I found that more than 3 wasn't going to give
enough of an improvement due to the PCI bus limitations -- can anyone
validate this?) and then send all 3 of those to the switch.  I would
then bond 3 ports from that switch to the next one (we'll probably
have 2 x48-port gigabit switches for the whole building -- still
working out the number of ports/computers required) so as to deal with the
bandwidth.  The cost of some of those fiber <-> copper converters is
rather daunting.
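
For what it's worth, on a Red Hat-style system (like K12LTSP), bonding three gigabit NICs generally looks something like the sketch below.  This is only an outline, not a tested recipe: the interface names, IP address, and the 802.3ad (LACP) mode are my assumptions, and your switch has to be configured for the same trunking mode on its end.

```
# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding
# mode=802.3ad is LACP aggregation; miimon=100 polls link state every 100ms
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
DEVICE=bond0
IPADDR=192.168.0.254
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- repeat for eth1 and eth2
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

After a network restart, /proc/net/bonding/bond0 should show all three slaves up.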

I would VERY MUCH prefer to use only 1 server for the entire school
-- I am still very much a novice at this, and the complexities of
setting up multiple servers, or splitting into application &
/home servers with
LDAP, sound rather daunting.

If you're still set on one server, also have a look at this.
Instead of port trunking, I think this would be a better idea,
especially if you are going to have 2 48-port switches that
could be
on different gigabit-linked subnets.
Hmm...I hadn't thought of that particular application for avoiding
bandwidth bottlenecks--but you're right, that sure would do it!  It
never even occurred to me...thanks!

I recall reading somewhere that three gigabit cards are probably the
max that the PCI bus
can handle.  Can anyone confirm or deny this?
No. A gigabit card is 1 Gigabit/s (that's 1 billion bits per second).
Each byte is 8 bits, so it maxes out at 125MB/s. A simple PCI bus can
handle 133MB/s max, so 1 gigabit ethernet card can saturate a PCI bus.
The PCI 2.2 spec allows 32 bits at 66MHz, which equals 266MB/s, so 2 gigabit
NICs should be able to saturate it. The original PCI bus was 32 bits at
33MHz, which is 133MB/s.

True, but if your PCI bus is 64 bits at 66MHz (i.e., PCI-X), then you're
fine, as you then have 533MB/s.  I've always been sure to buy 64-bit,
66MHz NICs for this reason.  Same with RAID cards; PCI-X whenever possible.
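
To put those numbers side by side, here's a quick back-of-the-envelope calculation -- theoretical peak figures only; real-world throughput is always lower:

```python
# Back-of-the-envelope: theoretical PCI bus bandwidth vs. gigabit NIC demand.

GIGE_MBPS = 1_000_000_000 / 8 / 1_000_000  # 1 Gb/s = 125 MB/s per NIC

def bus_bandwidth_mb(width_bits, clock_mhz):
    """Peak PCI bandwidth in MB/s: bus width in bytes times clock rate."""
    return width_bits / 8 * clock_mhz

buses = {
    "PCI 32-bit/33MHz":   bus_bandwidth_mb(32, 33.33),
    "PCI 32-bit/66MHz":   bus_bandwidth_mb(32, 66.66),
    "PCI-X 64-bit/66MHz": bus_bandwidth_mb(64, 66.66),
}

for name, bw in buses.items():
    nics = bw / GIGE_MBPS
    print(f"{name}: {bw:.0f} MB/s shared -- room for roughly {nics:.1f} saturated gigabit NICs")
```

Remember the bus is shared, so disk controllers and anything else on the same bus eat into that headroom too.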

What brand of 64-bit NIC are you buying for this purpose?  Where do you get them?


K12OSN mailing list
K12OSN redhat com
For more info see <http://www.k12os.org>
