Date:	Tue, 28 Apr 2015 22:23:00 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Alexei Starovoitov <ast@...mgrid.com>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Daniel Borkmann <daniel@...earbox.net>,
	Thomas Graf <tgraf@...g.ch>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	John Fastabend <john.r.fastabend@...el.com>,
	netdev@...r.kernel.org
Subject: Re: [PATCH RFC net-next] netif_receive_skb performance

On Tue, 2015-04-28 at 19:11 -0700, Alexei Starovoitov wrote:
> Hi,
> 
> there were many requests for performance numbers in the past, but not
> everyone has access to 10/40G NICs, and we need a common way to talk
> about RX path performance without the overhead of driver RX. That's
> especially important when making changes to netif_receive_skb.

Well, in real life, having to fetch the RX descriptor and packet headers
is the main cost, and skb->users == 1.

So it's nice to try to optimize netif_receive_skb(), but make sure you
have something that really exercises the same code flows/stalls;
otherwise you'll be tempted by the wrong optimizations.

I would, for example, use a ring buffer, so that each skb you provide to
netif_receive_skb() has cold cache lines (at least skb->head, if you want
to mimic build_skb() or napi_get_frags()/napi_reuse_skb() behavior).
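
Something along these lines (untested sketch; build_test_skb() is a
placeholder for whatever allocates and fills a test skb, and RING_SIZE
just has to be large enough to exceed L1/L2 capacity):

#define RING_SIZE	4096		/* power of two, larger than the caches */
#define NR_ITERATIONS	10000000UL

static struct sk_buff *ring[RING_SIZE];

static void bench_loop(void)
{
	unsigned long n;
	unsigned int i;

	/* Prefill the ring so the first lap already reuses evicted slots. */
	for (i = 0; i < RING_SIZE; i++)
		ring[i] = build_test_skb();	/* placeholder helper */

	for (n = 0; n < NR_ITERATIONS; n++) {
		i = n & (RING_SIZE - 1);
		/* By now this slot's skb->head should be cache-cold. */
		netif_receive_skb(ring[i]);	/* consumes the skb */
		ring[i] = build_test_skb();	/* refill for the next lap */
	}
}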

Also, this model of flooding one cpu (no irqs, no context switches) masks
latencies caused by code size, since the icache is fully populated with a
very specialized working set.

If we want to pursue this model (like user-space frameworks such as DPDK),
we might have to design a very different model from the IRQ-driven one,
dedicating one or more CPU threads to run networking code with no state
transitions.
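
Very roughly, the shape of such a thread would be something like this
(poll_rx_ring() and struct rx_ring are made up here, standing in for a
driver's RX ring accessors):

static int rx_poll_thread(void *arg)
{
	struct rx_ring *ring = arg;	/* hypothetical per-queue state */
	struct sk_buff *skb;

	while (!kthread_should_stop()) {
		/* Drain whatever the NIC has posted; no IRQ, no sleep. */
		while ((skb = poll_rx_ring(ring)) != NULL)
			netif_receive_skb(skb);
		cpu_relax();		/* pure busy-poll, DPDK-style */
	}
	return 0;
}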

