Date:	Thu, 20 Aug 2009 14:56:49 +0200
From:	Johannes Stezenbach <js@...21.net>
To:	Jamie Lokier <jamie@...reable.org>
Cc:	linux-embedded@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: 100Mbit ethernet performance on embedded devices

On Wed, Aug 19, 2009 at 04:35:34PM +0100, Jamie Lokier wrote:
> Johannes Stezenbach wrote:
> > 
> >   TCP RX ~70Mbit/sec  (iperf -s on SoC, iperf -c on desktop PC)
> >   TCP TX ~56Mbit/sec  (iperf -s on desktop PC, iperf -c on SoC)
> > 
> > The CPU load during the iperf test is around
> > 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
> > 
> > The kernel used in these measurements does not have iptables
> > support; I think packet filtering would slow it down noticeably,
> > but I didn't actually try.  The ethernet driver uses NAPI,
> > but it doesn't seem to be a win judging from the irqs/sec number.
> 
> You should see far fewer interrupts if NAPI was working properly.
> Rather than NAPI not being a win, it looks like it's not active at
> all.
> 
> 7500/sec is close to the packet rate for sending TCP with
> full-size ethernet packets over a 100Mbit ethernet link.

From debug output I can see that NAPI works in principle; however,
the timing seems to be such that ->poll() almost always completes
before the next packet is received.  I followed the NAPI_HOWTO.txt
which came with the 2.6.20 kernel.  The delay between irq ->
netif_rx_schedule() -> NET_RX_SOFTIRQ -> ->poll() doesn't seem
to be long enough.  But of course my understanding of NAPI is
very limited, so I've probably missed something...
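For what it's worth, the claim that 7500 irqs/sec is close to the wire packet rate checks out with quick arithmetic (a sketch; the 1538-byte on-wire frame size assumes a 1500-byte MTU plus standard Ethernet header, FCS, preamble and inter-frame gap):

```shell
# Back-of-the-envelope packet rate for a saturated 100Mbit link with
# full-size frames.  Assumed on-wire bytes per frame:
# 1500 payload + 14 Ethernet header + 4 FCS + 8 preamble + 12 IFG = 1538
bits_per_sec=100000000
bytes_per_frame=1538
pps=$((bits_per_sec / 8 / bytes_per_frame))
echo "${pps} packets/sec"   # ~8100, same ballpark as the observed 7500 irqs/sec
```

So with NAPI actually mitigating interrupts one would expect far fewer than one irq per packet.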

> > What I'm interested in are some numbers for similar hardware,
> > to find out if my hardware and/or ethernet driver can be improved,
> > or if the CPU will always be the limiting factor.
> 
> I have a SoC with a 166MHz ARMv4 (ARM7TDMI I think, but I'm not sure),
> and an external RTL8139 100Mbit ethernet chip over the SoC's PCI bus.
> 
> It gets a little over 80Mbit/s actual data throughput in both
> directions, running a simple FTP client.

I found one interesting page which defines network driver performance
in terms of "CPU MHz per Mbit".
http://www.stlinux.com/drupal/node/439

I can't really tell from their table how big a win HW csum is, but
what they call "interrupt mitigation optimisations" (IOW: working NAPI)
seems important.  (Compare the values for the STx7105.)
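To express my numbers in the same "CPU MHz per Mbit" terms, the calculation would look like this (a sketch only: my SoC's clock speed isn't stated here, so the 266 MHz figure is purely illustrative; the load and throughput come from the iperf RX test above):

```shell
# Illustrative "CPU MHz per Mbit" metric in the style of the stlinux table.
# 266 MHz is an ASSUMED clock; 97% load = 1% user + 44% system + 4% irq
# + 48% softirq; 70 Mbit/s is the measured TCP RX throughput.
cpu_mhz=266
load=0.97
mbit=70
awk -v mhz="$cpu_mhz" -v l="$load" -v m="$mbit" \
    'BEGIN { printf "%.1f MHz per Mbit\n", mhz * l / m }'
```

With those assumed numbers the metric comes out to a few MHz per Mbit, which is the kind of figure the stlinux table lets you compare across drivers.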

If someone has an embedded platform with 100Mbit ethernet where they can
switch HW checksumming via ethtool and benchmark both settings under equal
conditions, that would be very interesting.
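Roughly, such a comparison could be run like this (a sketch: eth0 and the server address are placeholders, and the NIC/driver must actually support toggling checksum offload via ethtool -K):

```shell
# On the embedded board; 192.168.0.1 is a placeholder for a PC
# running "iperf -s" on the same LAN.
ethtool -k eth0                   # show current offload settings
ethtool -K eth0 rx on tx on      # enable HW checksumming (if supported)
iperf -c 192.168.0.1 -t 60       # TX benchmark with HW csum
ethtool -K eth0 rx off tx off    # fall back to SW checksumming
iperf -c 192.168.0.1 -t 60       # same benchmark, same conditions
```

Watching CPU load (e.g. vmstat) during both runs would show how much of the difference HW csum accounts for.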


Thanks
Johannes
