Date:	Thu, 28 Jan 2016 13:05:03 +0100
From:	"Jason A. Donenfeld" <Jason@...c4.com>
To:	Netdev <netdev@...r.kernel.org>
Subject: Interesting performance characteristics of serial packet submission

Hello folks,

I've observed a very interesting performance characteristic. Sometimes
net_device drivers need to "do something to a packet" and then "send
it through a tunnel". This looks like:

    expensive_transformation(skb1);
    udp_tunnel_xmit_skb(skb1);
    expensive_transformation(skb2);
    udp_tunnel_xmit_skb(skb2);
    expensive_transformation(skb3);
    udp_tunnel_xmit_skb(skb3);
    expensive_transformation(skb4);
    udp_tunnel_xmit_skb(skb4);
    expensive_transformation(skb5);
    udp_tunnel_xmit_skb(skb5);

It turns out, however, that we gain a significant throughput increase
(300 Mbps on my laptop) by doing all the xmits in a row, like this:

    expensive_transformation(skb1);
    expensive_transformation(skb2);
    expensive_transformation(skb3);
    expensive_transformation(skb4);
    expensive_transformation(skb5);
    udp_tunnel_xmit_skb(skb1);
    udp_tunnel_xmit_skb(skb2);
    udp_tunnel_xmit_skb(skb3);
    udp_tunnel_xmit_skb(skb4);
    udp_tunnel_xmit_skb(skb5);

Now practically speaking, the latter, more performant variant isn't
hard to implement: a device can simply opt in to receiving GSO
super-packets, split them, and submit the resulting segments together
in a batch. Implementation is not an issue.

But this does leave me wondering why the performance is better this way.

One possible explanation is something along the lines of NAPI polling
tx buffers at intervals, where the fuller the buffer is, the better,
since fewer poll passes are needed later. But this is just a theory,
and I really have no idea. I was wondering if anybody reads this
message and thinks, "oh, duh, of course it's because of XYZ. You
should always do ABC, and you can improve things further if you do
123 too." I'd be quite interested either way.

Thanks,
Jason
