Date:	Wed, 1 Feb 2012 09:39:47 +0000
From:	Ian Campbell <Ian.Campbell@...rix.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"David S. Miller" <davem@...emloft.net>
Subject:	Re: [PATCH v3 1/6] net: pad skb data and shinfo as a whole rather than individually

On Tue, 2012-01-31 at 14:45 +0000, Eric Dumazet wrote:
> Le mardi 31 janvier 2012 à 14:35 +0000, Ian Campbell a écrit :
> > Hi Eric,
> > 
> > On Wed, 2012-01-25 at 13:12 +0000, Eric Dumazet wrote:
> > > Le mercredi 25 janvier 2012 à 13:09 +0000, Ian Campbell a écrit :
> > > 
> > > > Can you elaborate on the specific benchmark you used there?
> > > 
> > > One machine was sending udp frames on my target (using pktgen)
> > > 
> > > Target was running a mono threaded udp receiver (one socket)
> > 
> > I've been playing with pktgen and I'm seeing more like 81,600-81,800 pps
> > from a UDP transmitter, measuring on the rx side using "bwm-ng -u
> > packets" and sinking the traffic with "nc -l -u -p 9 > /dev/null". The
> > numbers are the same with or without this series.
> > 
> > You mentioned numbers in the 820pps region -- is that really kilo-pps
> > (in which case I'm an order of magnitude down) or actually 820pps (in
> > which case I'm somehow a couple of orders of magnitude up).
> > 
> > I'm using a single NIC transmitter, no delay, 1000000 clones of each skb
> > and I've tried 60 and 1500 byte packets. In the 60 byte case I see more
> > like 50k pps
> > 
> > I'm in the process of setting up a receiver with a bnx2 but in the
> > meantime I feel like I'm making some obvious or fundamental flaw in my
> > method...
> > 
> > Any tips greatly appreciated.
> 
> I confirm I reach 820.000 packets per second, on a Gigabit link.

Heh, this is where $LOCALE steps in and confuses things again (commas
vs. periods as thousands separators) ;-). But I know what you mean.

> Sender can easily reach line rate (more than 1.000.000 packets per
> second)

Right, this is where I seem to have fallen short -- sending 10,000,000
packets took more like 2 minutes rather than the expected ~10s, i.e.
roughly 83kpps, which lines up with what I'm seeing on the receive
side...

> Check how many packet drops you have on receiver ?
> 
> ifconfig eth0
> 
> or "ethtool -S eth0"

After a full 10,000,000 packet run of pktgen I see rx_packets increase
by 10,001,561 (there is some other traffic on the link). None of the
rx_*_error counters increase, and neither do rx_no_buffer_count or
rx_missed_errors (although both of the latter are non-zero to start
with). ifconfig tells the same story.

I guess this isn't surprising given the send rate since it's not really
stressing the receiver all that much.

I'll investigate the sending side. The sender is running a 2.6.32 distro
kernel. Maybe I need to tweak it somewhat.
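
For reference, the pktgen setup on the sender is roughly the following
(device name, destination IP and MAC are placeholders here):

  modprobe pktgen
  echo "rem_device_all"            > /proc/net/pktgen/kpktgend_0
  echo "add_device eth1"           > /proc/net/pktgen/kpktgend_0
  echo "count 10000000"            > /proc/net/pktgen/eth1
  echo "clone_skb 1000000"         > /proc/net/pktgen/eth1
  echo "pkt_size 60"               > /proc/net/pktgen/eth1   # or 1500
  echo "delay 0"                   > /proc/net/pktgen/eth1
  echo "dst 192.168.1.2"           > /proc/net/pktgen/eth1
  echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth1
  echo "start"                     > /proc/net/pktgen/pgctrl
  cat /proc/net/pktgen/eth1        # "Result:" line reports the pps achieved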

Thanks,

Ian.


