Message-Id: <1251529559.30216.141.camel@odie>
Date:	Sat, 29 Aug 2009 09:05:59 +0200
From:	Simon Holm Thøgersen <odie@...aau.dk>
To:	Johannes Stezenbach <js@...21.net>
Cc:	Jamie Lokier <jamie@...reable.org>, linux-embedded@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: 100Mbit ethernet performance on embedded devices

On Fri, 2009-08-28 at 16:41 +0200, Johannes Stezenbach wrote:
> On Thu, Aug 20, 2009 at 02:56:49PM +0200, Johannes Stezenbach wrote:
> > On Wed, Aug 19, 2009 at 04:35:34PM +0100, Jamie Lokier wrote:
> > > Johannes Stezenbach wrote:
> > > > 
> > > >   TCP RX ~70Mbit/sec  (iperf -s on SoC, iperf -c on desktop PC)
> > > >   TCP TX ~56Mbit/sec  (iperf -s on desktop PC, iperf -c on SoC)
> > > > 
> > > > The CPU load during the iperf test is around
> > > > 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
> > > > 
> > > > The kernel used in these measurements does not have iptables
> > > > support, I think packet filtering will slow it down noticeably,
> > > > but I didn't actually try.  The ethernet driver uses NAPI,
> > > > but it doesn't seem to be a win judging from the irq/sec number.
> > > 
> > > You should see far fewer interrupts if NAPI were working properly.
> > > Rather than NAPI not being a win, it looks like it's not active at
> > > all.
> > > 
> > > 7500/sec is close to the packet rate for sending TCP with
> > > full-size ethernet packets over a 100Mbit ethernet link.
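Sanity check on that number, just back-of-the-envelope:

	100 Mbit/s / (~1538 bytes on the wire per full frame * 8 bits) = ~8100 frames/sec

so ~7500 irqs/sec is indeed roughly one interrupt per received packet,
which is exactly what NAPI is supposed to avoid under load.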
> > 
> > From the debug output I can see that NAPI works in principle; however,
> > the timing seems to be such that ->poll() almost always completes
> > before the next packet is received.  I followed the NAPI_HOWTO.txt
> > that came with the 2.6.20 kernel.  The delay from the irq through
> > netif_rx_schedule() and NET_RX_SOFTIRQ to ->poll() doesn't seem to be
> > long enough for more than one packet to accumulate.  But of course my
> > understanding of NAPI is very limited; probably I missed something...
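A rough sketch of the pre-2.6.24 NAPI pattern that NAPI_HOWTO.txt
describes, just for reference (the my_* names are placeholders, not the
actual driver's symbols):

static irqreturn_t my_rx_irq(int irq, void *dev_id)
{
	struct net_device *dev = dev_id;

	if (netif_rx_schedule_prep(dev)) {
		my_disable_rx_irq(dev);		/* mask further RX interrupts */
		__netif_rx_schedule(dev);	/* raise NET_RX_SOFTIRQ */
	}
	return IRQ_HANDLED;
}

static int my_poll(struct net_device *dev, int *budget)
{
	int limit = min(*budget, dev->quota);
	int done  = my_rx_ring_process(dev, limit);	/* netif_receive_skb() each frame */

	*budget    -= done;
	dev->quota -= done;

	if (done < limit) {
		/* ring drained: leave polling mode and unmask the interrupt */
		netif_rx_complete(dev);
		my_enable_rx_irq(dev);
		return 0;
	}
	return 1;	/* more work, stay on the poll list */
}

If my_poll() nearly always drains the ring on its first pass, it
re-enables the interrupt every time and you end up back at roughly one
interrupt per packet, which matches the 7500 irqs/sec you measured.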
> 
> It would've been nice to get a comment on this.  Yeah I know,
> old kernel, non-mainline driver...

Have you tried porting the driver to mainline? That way you would get
more than two years of improvements to the networking stack, including
NAPI.

There was a rework of NAPI [1] around 2.6.24; you'd probably want to
look at commit bea3348eef27e6044b6161fd04c3152215f96411. You could also
ask the Linux Driver Project to help you make the driver suitable for
mainline inclusion.

[1] http://lwn.net/Articles/244640/
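FWIW, the rework basically moves the polling state out of struct
net_device into a struct napi_struct that the driver embeds and
registers, and the poll() prototype changes.  A minimal sketch with
placeholder my_* names (note the completion helper was still spelled
netif_rx_complete(dev, napi) in 2.6.24 and only later became
napi_complete()):

struct my_priv {
	struct napi_struct napi;
	/* rings, registers, ... */
};

/* at probe time: */
netif_napi_add(dev, &priv->napi, my_poll, 64);
napi_enable(&priv->napi);

static irqreturn_t my_rx_irq(int irq, void *dev_id)
{
	struct my_priv *priv = netdev_priv(dev_id);

	if (napi_schedule_prep(&priv->napi)) {
		my_disable_rx_irq(priv);	/* mask RX interrupts */
		__napi_schedule(&priv->napi);
	}
	return IRQ_HANDLED;
}

static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_priv *priv = container_of(napi, struct my_priv, napi);
	int done = my_rx_ring_process(priv, budget);

	if (done < budget) {
		napi_complete(napi);		/* netif_rx_complete() in 2.6.24 */
		my_enable_rx_irq(priv);
	}
	return done;
}

The budget handling and the poll() return value are what changed the
most compared to the old dev->poll/dev->quota scheme, so that's where a
straight forward-port usually needs the most attention.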


Simon Holm Thøgersen

