Message-ID: <2b694ab0d4453df4a19898a01c35ce878e383ce7.camel@redhat.com>
Date: Mon, 04 Mar 2024 11:29:38 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski
 <kuba@...nel.org>,  Richard Gobert <richardbgobert@...il.com>,
 netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next 2/4] net: gro: change skb_gro_network_header()

On Mon, 2024-03-04 at 10:06 +0100, Eric Dumazet wrote:
> On Mon, Mar 4, 2024 at 9:28 AM Paolo Abeni <pabeni@...hat.com> wrote:
> > 
> > On Fri, 2024-03-01 at 19:37 +0000, Eric Dumazet wrote:
> > > Change skb_gro_network_header() to accept a const sk_buff
> > > and to no longer check if frag0 is NULL or not.
> > > 
> > > This allows removing skb_gro_frag0_invalidate(),
> > > which is seen in profiles when header-split is enabled.
> > 
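For context, the two helpers this refers to look roughly as follows before the
change (a sketch paraphrased from include/net/gro.h, not guaranteed to match
the tree verbatim):

static inline void *skb_gro_network_header(struct sk_buff *skb)
{
	/* Fall back to skb->data when frag0 is NULL; this is the
	 * check the description above says goes away. */
	return (NAPI_GRO_CB(skb)->frag0 ?: skb->data) +
	       skb_network_offset(skb);
}

static inline void skb_gro_frag0_invalidate(struct sk_buff *skb)
{
	/* Force later frag0-aware helpers back onto skb->data. */
	NAPI_GRO_CB(skb)->frag0 = NULL;
	NAPI_GRO_CB(skb)->frag0_len = 0;
}
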
> > I have a few questions to help me understand this patchset better:
> > 
> > skb_gro_frag0_invalidate() shows up in profiles (for non-napi_frags_skb
> > callers?) because it's called multiple times for each aggregate packet,
> > right? I guessed that writing the same cacheline multiple times per se
> > should not be too expensive.
> 
> Apparently some (not very recent) Intel CPUs have issues (at least
> with clang-generated code) with immediate reloads after a write.

Ah! I *think* I have observed the same even with gcc (accessing some CB
fields just after the initial zeroing popped up in the perf report).
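
Purely as an illustrative userspace sketch of that write-then-reload pattern
(hypothetical type and function names, not kernel code):

#include <string.h>

/* Hypothetical stand-in for a per-packet control block. */
struct cb {
	void *frag0;
	unsigned int frag0_len;
	int flush;
};

/* Zero the block, then immediately read a field from the same
 * cacheline: the dependent load right after the store is the
 * pattern suspected of stalling on some CPUs/compilers. */
static int touch_after_zero(struct cb *cb)
{
	memset(cb, 0, sizeof(*cb));
	return cb->flush;
}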

> I also saw some strange artifacts on ARM64 CPUs, but it is hard to say;
> I found perf to be not very precise on them.
> 
> > 
> > perf here did not allow me to easily observe the mentioned cost,
> > because the function is inlined in many different places; I'm wondering
> > how you noticed?
> 
> It is more about the whole patchset really; this gave me about a 4%
> improvement on a saturated CPU
> (RFS enabled, Intel(R) Xeon(R) Gold 6268L CPU @ 2.80GHz)
> 
> One TCP flow (1500 MTU):
> 
> New profile (6,233,000 pkts per second)
>     19.76%  [kernel]       [k] gq_rx_napi_handler
>     11.19%  [kernel]       [k] dev_gro_receive
>      8.05%  [kernel]       [k] ipv6_gro_receive
>      7.98%  [kernel]       [k] tcp_gro_receive
>      7.25%  [kernel]       [k] skb_gro_receive
>      5.47%  [kernel]       [k] gq_rx_prep_buffers
>      4.39%  [kernel]       [k] skb_release_data
>      3.91%  [kernel]       [k] tcp6_gro_receive
>      3.55%  [kernel]       [k] csum_ipv6_magic
>      3.06%  [kernel]       [k] napi_gro_frags
>      2.76%  [kernel]       [k] napi_reuse_skb
> 
> Old profile (5,950,000 pkts per second)
>     17.92%  [kernel]       [k] gq_rx_napi_handler
>     10.22%  [kernel]       [k] dev_gro_receive
>      8.60%  [kernel]       [k] tcp_gro_receive
>      8.09%  [kernel]       [k] ipv6_gro_receive
>      8.06%  [kernel]       [k] skb_gro_receive
>      6.74%  [kernel]       [k] gq_rx_prep_buffers
>      4.82%  [kernel]       [k] skb_release_data
>      3.82%  [kernel]       [k] tcp6_gro_receive
>      3.76%  [kernel]       [k] csum_ipv6_magic
>      2.97%  [kernel]       [k] napi_gro_frags
>      2.57%  [kernel]       [k] napi_reuse_skb
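
(For reference, the raw packet-rate delta between the two runs works out to
(6,233,000 - 5,950,000) / 5,950,000, i.e. roughly 4.8%.)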

Thanks for the detailed info! I'll try to benchmark this on a driver that
does not use napi_gro_frags, but don't hold your breath in the meantime!

Cheers,

Paolo

