Date:	Wed, 29 Apr 2015 11:37:41 +0200
From:	Daniel Borkmann <daniel@...earbox.net>
To:	Alexei Starovoitov <ast@...mgrid.com>,
	"David S. Miller" <davem@...emloft.net>
CC:	Eric Dumazet <edumazet@...gle.com>, Thomas Graf <tgraf@...g.ch>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	John Fastabend <john.r.fastabend@...el.com>,
	netdev@...r.kernel.org
Subject: Re: [PATCH RFC net-next] netif_receive_skb performance

On 04/29/2015 04:11 AM, Alexei Starovoitov wrote:
...
> It's typical usage:
> $ sudo ./pktgen.sh eth0
> ...
> Result: OK: 232376(c232372+d3) usec, 10000000 (60byte,0frags)
>    43033682pps 20656Mb/sec (20656167360bps) errors: 10000000
...
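(As an aside, the figures in that pktgen summary line are self-consistent; a quick arithmetic check, sketched in Python:)

```python
# Sanity-check the pktgen summary line quoted above:
# "OK: 232376(c232372+d3) usec, 10000000 (60byte,0frags)
#  43033682pps 20656Mb/sec (20656167360bps)"
usec = 232376          # total transmit time reported by pktgen, in microseconds
pkts = 10_000_000      # packets sent
size = 60              # frame size in bytes, per "(60byte,0frags)"

# packets per second from the elapsed time (pktgen computes this with
# higher internal precision, so it lands within a few dozen pps of 43033682)
pps = pkts * 1_000_000 // usec

# bits per second at the reported packet rate and 60-byte frames
bps = 43_033_682 * size * 8

print(pps)   # ~43.03 Mpps
print(bps)   # 20656167360, matching the reported bps figure
```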
> My main goal was to benchmark ingress qdisc.
> So here are the numbers:
> raw netif_receive_skb->ip_rcv->kfree_skb - 43 Mpps
> adding ingress qdisc to eth0 drops performance to - 26 Mpps
> adding 'u32 match u32 0 0' drops it further to - 22.4 Mpps
> All as expected.
>
> Now let's remove ingress spin_lock (the goal of John's patches) - 24.5 Mpps
> Note this is single core receive. The boost from removal will be much higher
> on a real nic with multiple cores servicing rx irqs.
>
> With my experimental replacement of ingress_queue/sch_ingress with
> ingress_filter_list and 'u32 match u32 0 0' classifier - 26.2 Mpps
>
> Experimental ingress_filter_list and JITed bpf 'return 0' program - 27.2 Mpps
>
> So there is definitely room for further improvements in ingress
> qdisc beyond dropping spin_lock.
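(For anyone wanting to reproduce the ingress numbers: the setup steps would look roughly like the following. A sketch only; the exact invocations aren't shown in the thread, and 'eth0' is simply the pktgen device from the example.)

```shell
# Attach the ingress qdisc -- this alone is the step that adds the
# per-packet spin_lock cost measured above (43 -> 26 Mpps).
tc qdisc add dev eth0 handle ffff: ingress

# Add the trivial classifier from the benchmark: 'match u32 0 0' matches
# every packet, so it measures pure classifier overhead (-> 22.4 Mpps).
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0

# Tear down again to get back to the raw netif_receive_skb numbers.
tc qdisc del dev eth0 ingress
```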

Is the below the case where conntrack always has a miss and thus
each time needs to create new entries, iow a pktgen DoS with random IPs?
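(For context, pktgen can randomize destination addresses with the IPDST_RND flag, which is what would drive that always-miss case. A sketch, assuming the device has already been added to a pktgen thread; the address range here is made up:)

```shell
# Sketch: make pktgen pick a random destination per packet within a range,
# so every packet looks like a new flow and conntrack never gets a hit.
PGDEV=/proc/net/pktgen/eth0

echo "dst_min 10.0.0.1"       > $PGDEV
echo "dst_max 10.255.255.254" > $PGDEV
echo "flag IPDST_RND"         > $PGDEV   # randomize dst IP per packet
```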

> Few other numbers for comparison with dmac == eth0 mac:
> no qdisc, with conntrack and empty iptables - 2.2 Mpps
>     7.65%  kpktgend_0   [nf_conntrack]    [k] nf_conntrack_in
>     7.62%  kpktgend_0   [kernel.vmlinux]  [k] fib_table_lookup
>     5.44%  kpktgend_0   [kernel.vmlinux]  [k] __call_rcu.constprop.63
>     3.71%  kpktgend_0   [kernel.vmlinux]  [k] nf_iterate
>     3.59%  kpktgend_0   [ip_tables]       [k] ipt_do_table
>
> no qdisc, unload conntrack, keep empty iptables - 5.4 Mpps
>    18.17%  kpktgend_0   [kernel.vmlinux]  [k] fib_table_lookup
>     8.31%  kpktgend_0   [kernel.vmlinux]  [k] ip_rcv
>     7.97%  kpktgend_0   [kernel.vmlinux]  [k] __netif_receive_skb_core
>     7.53%  kpktgend_0   [ip_tables]       [k] ipt_do_table
>
> no qdisc, unload conntrack, unload iptables - 6.5 Mpps
>    21.97%  kpktgend_0   [kernel.vmlinux]  [k] fib_table_lookup
>     9.64%  kpktgend_0   [kernel.vmlinux]  [k] __netif_receive_skb_core
>     8.44%  kpktgend_0   [kernel.vmlinux]  [k] ip_rcv
>     7.19%  kpktgend_0   [kernel.vmlinux]  [k] __skb_clone
>     6.89%  kpktgend_0   [kernel.vmlinux]  [k] fib_validate_source
>
> After I'm done with ingress qdisc improvements, I'm planning
> to look at netif_receive_skb itself, since it looks a bit too hot.
...
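(The per-symbol breakdowns above look like perf output pinned to the single receive core; something along these lines would reproduce them. A sketch, assuming kpktgend_0 is bound to CPU 0 as in the listings:)

```shell
# Profile CPU 0 while the test is in flight, then show the
# per-symbol overhead breakdown as in the listings above.
perf record -C 0 -- sleep 10
perf report --sort symbol
```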