Date: Fri, 15 Jan 2016 14:32:12 +0100
From: Hannes Frederic Sowa <hannes@...essinduktion.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Cc: David Miller <davem@...emloft.net>, Alexander Duyck <alexander.duyck@...il.com>, Alexei Starovoitov <alexei.starovoitov@...il.com>, Daniel Borkmann <borkmann@...earbox.net>, Marek Majkowski <marek@...udflare.com>, Florian Westphal <fw@...len.de>, Paolo Abeni <pabeni@...hat.com>, John Fastabend <john.r.fastabend@...el.com>
Subject: Re: Optimizing instruction-cache, more packets at each stage

On 15.01.2016 14:22, Jesper Dangaard Brouer wrote:
>
> Given net-next is closed, we have time to discuss controversial core
> changes, right? ;-)
>
> I want to do some instruction-cache level optimizations.
>
> What do I mean by that...
>
> The kernel network stack code path (that a packet travels) is obviously
> larger than the instruction cache (icache). Today, every packet travels
> individually through the network stack, experiencing the exact same
> icache misses as the previous packet.
>
> I imagine that we could process several packets at each stage in the
> packet-processing code path, thereby making better use of the icache.
>
> Today, we already allow NAPI net_rx_action() to process many
> (e.g. up to 64) packets in the driver RX-poll routine. But the driver
> then calls the "full" stack for every single packet (e.g. via
> napi_gro_receive()) in its processing loop, thus thrashing the icache
> for every packet.
>
> I have a proof-of-concept patch for ixgbe, which gives me a 10% speedup
> on full IP forwarding. (This patch also delays when I touch the packet
> data, so it optimizes data-cache misses as well.) The basic idea is that
> I delay calling ixgbe_rx_skb()/napi_gro_receive(), and allow the RX loop
> (in ixgbe_clean_rx_irq()) to run more iterations before "flushing" the
> icache (by calling the stack).
>
> This was only at the driver level. I would also like some API towards
> the stack. Maybe we could simply pass an skb-list?
>
> Changing / adjusting the stack to support processing in "stages" might
> be more difficult/controversial?

I once tried this up to the vlan layer, and the error handling got so
complex and complicated that I stopped there. Maybe it is possible in
some separate stages.

This needs a redesign of a lot of stuff, and while doing so I would
switch from a more stack-based approach of building the stack to a more
iterative one (see e.g. the stack-space consumption problems).

Just my 2 cents,
Hannes
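
For readers who want a concrete picture of the driver-level batching Jesper
describes, here is a minimal C sketch of the idea, not the actual ixgbe
patch: the RX poll routine collects a small bundle of skbs from the ring
before flushing them into the stack, so the stack's icache footprint is paid
once per bundle instead of once per packet. rx_ring_fetch_skb() and
RX_BUNDLE_SIZE are invented placeholders; napi_gro_receive() is the in-tree
API the mail refers to.

	#include <linux/skbuff.h>
	#include <linux/netdevice.h>

	#define RX_BUNDLE_SIZE 8	/* assumed flush threshold, not from the mail */

	/* placeholder for the driver's descriptor-to-skb work */
	struct sk_buff *rx_ring_fetch_skb(struct napi_struct *napi);

	static int rx_poll_bundled(struct napi_struct *napi, int budget)
	{
		struct sk_buff *bundle[RX_BUNDLE_SIZE];
		int done = 0;

		while (done < budget) {
			int i, n = 0;

			/* Stage 1: pull several packets off the ring without
			 * touching the upper stack, keeping driver code hot.
			 */
			while (n < RX_BUNDLE_SIZE && done + n < budget) {
				struct sk_buff *skb = rx_ring_fetch_skb(napi);

				if (!skb)
					break;
				bundle[n++] = skb;
			}
			if (!n)
				break;

			/* Stage 2: flush the whole bundle into the stack, so
			 * the stack's instructions are reloaded once per
			 * bundle rather than once per packet.
			 */
			for (i = 0; i < n; i++)
				napi_gro_receive(napi, bundle[i]);

			done += n;
		}
		return done;
	}

Passing an skb-list to the stack, as Jesper suggests, would take this one
step further: the per-packet napi_gro_receive() loop in stage 2 would become
a single call handing the whole bundle to the core stack, which could then
apply the same staged processing internally.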