Message-ID: <20080110163626.GJ3544@solarflare.com>
Date: Thu, 10 Jan 2008 16:36:27 +0000
From: Ben Hutchings <bhutchings@...arflare.com>
To: Breno Leitao <leitao@...ux.vnet.ibm.com>
Cc: netdev@...r.kernel.org
Subject: Re: e1000 performance issue in 4 simultaneous links
Breno Leitao wrote:
> Hello,
>
> I've perceived that there is a performance issue when running netperf
> against 4 e1000 links connected end-to-end to another machine with 4
> e1000 interfaces.
>
> I have two 4-port interface cards in my machine, but the test only
> uses 2 ports on each card.
>
> When I run netperf in just one interface, I get 940.95 * 10^6 bits/sec
> of transfer rate. If I run 4 netperf against 4 different interfaces, I
> get around 720 * 10^6 bits/sec.
<snip>
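For reference, I'm assuming a test along these lines, i.e. one
TCP_STREAM instance per link run in parallel (the peer addresses and
the 60-second duration are guesses on my part):

```shell
# One netperf TCP_STREAM per link, all running concurrently.
# Replace the addresses with the peer's address on each of the 4 links.
for dst in 192.168.0.2 192.168.1.2 192.168.2.2 192.168.3.2; do
    netperf -H "$dst" -t TCP_STREAM -l 60 &
done
wait    # collect all four results
```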
I take it that's the average for individual interfaces, not the
aggregate? RX processing for multi-gigabits per second can be quite
expensive. This can be mitigated by interrupt moderation and NAPI
polling, jumbo frames (MTU >1500) and/or Large Receive Offload (LRO).
I don't think e1000 hardware does LRO, but the driver could presumably
be changed to use Linux's software LRO.
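Roughly, the first two knobs look like this (the exact coalescing
parameters supported vary by driver, so check "ethtool -c eth0" first;
the values below are just plausible starting points, not
recommendations):

```shell
# Raise interrupt moderation: wait up to 80us before raising an RX IRQ,
# so one interrupt services a batch of packets.
ethtool -C eth0 rx-usecs 80

# Enable jumbo frames; the link partner (and any switch in between)
# must be configured to match.
ip link set eth0 mtu 9000
```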
Even with these optimisations, if all RX processing is done on a
single CPU this can become a bottleneck. Does the test system have
multiple CPUs? Are IRQs for the multiple NICs balanced across
multiple CPUs?
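You can see the current spread with "grep eth /proc/interrupts", and
pin each port's IRQ to its own CPU by writing a hex CPU bitmask (CPU n
-> bit n) to smp_affinity.  A sketch of the mask arithmetic (the IRQ
number 24 is a made-up example; take the real ones from
/proc/interrupts):

```shell
# Compute the affinity mask for a given CPU: CPU n -> (1 << n), printed
# in hex as /proc/irq/*/smp_affinity expects.
cpu=2
mask=$(printf '%x' $((1 << cpu)))
echo "$mask"    # CPU 2 -> mask "4"

# To apply it (as root), e.g. for IRQ 24:
#   echo "$mask" > /proc/irq/24/smp_affinity
# To inspect the current distribution:
#   grep eth /proc/interrupts
```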
Ben.
--
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.