Date:	Mon, 11 Jul 2016 15:37:08 +0200
From:	Jesper Dangaard Brouer <brouer@...hat.com>
To:	Jamal Hadi Salim <jhs@...atatu.com>
Cc:	Brenden Blanco <bblanco@...mgrid.com>, davem@...emloft.net,
	netdev@...r.kernel.org, Martin KaFai Lau <kafai@...com>,
	Ari Saha <as754m@....com>,
	Alexei Starovoitov <alexei.starovoitov@...il.com>,
	Or Gerlitz <gerlitz.or@...il.com>, john.fastabend@...il.com,
	hannes@...essinduktion.org, Thomas Graf <tgraf@...g.ch>,
	Tom Herbert <tom@...bertland.com>,
	Daniel Borkmann <daniel@...earbox.net>, brouer@...hat.com
Subject: Re: [PATCH v6 05/12] Add sample for adding simple drop program to
 link

On Mon, 11 Jul 2016 07:09:26 -0400
Jamal Hadi Salim <jhs@...atatu.com> wrote:

> On 16-07-07 10:15 PM, Brenden Blanco wrote:
> > Add a sample program that only drops packets at the BPF_PROG_TYPE_XDP_RX
> > hook of a link. With the drop-only program, observed single core rate is
> > ~20Mpps.
> >
> > Other tests were run; for instance, without the dropcnt increment or
> > without reading from the packet header, the packet rate was mostly
> > unchanged.
> >
> > $ perf record -a samples/bpf/xdp1 $(</sys/class/net/eth0/ifindex)
> > proto 17:   20403027 drops/s
> >  
> 
> 
> So - devil's advocate speaking:
> I can filter and drop with this very specific NIC 10x as fast
> in hardware, correct?

After avoiding the cache-miss, I believe we have actually reached the
NIC HW limit.  I base this on my measurements, which show that the CPU
starts to go idle, even entering sleep C-states.  And we exit NAPI
polling mode without using the full budget, because the RX ring is
emptied.
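
(For context on the NAPI budget point: a driver poll routine that uses
less than its budget tells the kernel the RX ring is empty, and the
device leaves polled mode.  A minimal sketch of that pattern below;
mydrv_clean_rx_ring() and mydrv_enable_rx_irq() are hypothetical
driver helpers, not real kernel API:)

    #include <linux/netdevice.h>

    static int mydrv_napi_poll(struct napi_struct *napi, int budget)
    {
            /* Process up to 'budget' packets from the RX ring
             * (hypothetical helper) */
            int work_done = mydrv_clean_rx_ring(napi, budget);

            /* Using less than the full budget means the RX ring was
             * emptied: leave polled mode and re-enable the device IRQ,
             * which is where the observed idle/C-state time comes from. */
            if (work_done < budget) {
                    napi_complete_done(napi, work_done);
                    mydrv_enable_rx_irq(napi);      /* hypothetical */
            }

            return work_done;
    }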


> Would a different NIC (pick something like e1000) have served as a
> better example?
> BTW: Brenden, now that I looked closer here, you really don't have an
> apples-to-apples comparison with dropping at tc ingress. You have a
> tweaked prefetch and are intentionally running things on a single
> core. Note: We are able to do 20Mpps drops with tc on a single
> core (as shown at netdev11) on a NUC, with the driver overhead
> removed.

AFAIK you were using the pktgen "xmit_mode netif_receive", which
injects packets directly into the stack, thus removing the NIC driver
from the equation.  Brenden is only measuring the driver.
  Thus, you are both doing zoom-in measuring (of a very specific and
limited section of the code), but of two completely different pieces
of code.
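
For reference, the path Brenden measures ends in an XDP program of
roughly this shape.  This is a minimal sketch, not the exact
samples/bpf/xdp1 code (the real sample also parses the IP header and
bumps a per-protocol drop counter in a map):

    #include <linux/bpf.h>
    #include "bpf_helpers.h"   /* SEC() macro used in samples/bpf */

    SEC("xdp1")
    int xdp_prog(struct xdp_md *ctx)
    {
            /* Drop every packet at the earliest driver RX hook */
            return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";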

Notice, Jamal, in your 20Mpps results you are also avoiding
interacting with the memory allocator, as you are recycling the same
SKB (and don't be confused by seeing kfree_skb() in perf-top, as it
only does an atomic_dec() [1]).
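
To illustrate the kfree_skb() point: with the SKB refcount kept
elevated by the recycling, the free path returns after a single
atomic decrement and never reaches the memory allocator.  Simplified
from net/core/skbuff.c of that era (fast-path details omitted):

    void kfree_skb(struct sk_buff *skb)
    {
            if (unlikely(!skb))
                    return;
            /* Not the last reference: just an atomic_dec(), no
             * actual freeing, no memory-allocator interaction. */
            if (likely(!atomic_dec_and_test(&skb->users)))
                    return;
            __kfree_skb(skb);       /* last reference: really free */
    }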

In this code-zoom-in benchmark (given the single CPU is kept 100%
busy) you are actually measuring that the code path takes, on average,
50 nanosec (1/20 * 1000) to execute.  Which is cool, but it is only a
zoom-in on a specific code path (which avoids any I-cache misses).
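
Spelling out the arithmetic: at 20 Mpps the per-packet time budget is

    1 / (20 * 10^6 pps) = 50 * 10^-9 s = 50 ns
    (or: 1000 ns per usec / 20 packets per usec = 50 ns)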

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer

[1] https://www.youtube.com/watch?v=M6l1rxZCqLM
