Message-ID: <CANn89i+19QU3AX=9u+x51P0xxPt6sNj-GHUh85NF0gsBChEgvg@mail.gmail.com>
Date: Mon, 4 Mar 2024 10:06:38 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Richard Gobert <richardbgobert@...il.com>, netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next 2/4] net: gro: change skb_gro_network_header()
On Mon, Mar 4, 2024 at 9:28 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On Fri, 2024-03-01 at 19:37 +0000, Eric Dumazet wrote:
> > Change skb_gro_network_header() to accept a const sk_buff
> > and to no longer check if frag0 is NULL or not.
> >
> > This allows removing skb_gro_frag0_invalidate(),
> > which is seen in profiles when header split is enabled.
>
> I have a few questions to help me understand this patchset better:
>
> skb_gro_frag0_invalidate() shows up in profiles (for non napi_frags_skb
> callers?) because it's called multiple times for each aggregate packet,
> right? I would have guessed that writing the same cacheline multiple
> times per se should not be too expensive.
Apparently some (not very recent) Intel CPUs have issues with immediate
reloads after a write, at least with clang-generated code.
I also saw some strange artifacts on ARM64 CPUs, but it is hard to say;
I found perf to be not very precise on them.
>
> perf here did not allow me to easily observe the mentioned cost,
> because the function is inlined in many different places; I'm wondering
> how you noticed?
It is more about the whole patchset really; this gave me about a 4%
improvement on a saturated cpu
(RFS enabled, Intel(R) Xeon(R) Gold 6268L CPU @ 2.80GHz),
one TCP flow (1500 MTU).
New profile (6,233,000 pkts per second):
    19.76%  [kernel]       [k] gq_rx_napi_handler
    11.19%  [kernel]       [k] dev_gro_receive
     8.05%  [kernel]       [k] ipv6_gro_receive
     7.98%  [kernel]       [k] tcp_gro_receive
     7.25%  [kernel]       [k] skb_gro_receive
     5.47%  [kernel]       [k] gq_rx_prep_buffers
     4.39%  [kernel]       [k] skb_release_data
     3.91%  [kernel]       [k] tcp6_gro_receive
     3.55%  [kernel]       [k] csum_ipv6_magic
     3.06%  [kernel]       [k] napi_gro_frags
     2.76%  [kernel]       [k] napi_reuse_skb
Old profile (5,950,000 pkts per second):
    17.92%  [kernel]       [k] gq_rx_napi_handler
    10.22%  [kernel]       [k] dev_gro_receive
     8.60%  [kernel]       [k] tcp_gro_receive
     8.09%  [kernel]       [k] ipv6_gro_receive
     8.06%  [kernel]       [k] skb_gro_receive
     6.74%  [kernel]       [k] gq_rx_prep_buffers
     4.82%  [kernel]       [k] skb_release_data
     3.82%  [kernel]       [k] tcp6_gro_receive
     3.76%  [kernel]       [k] csum_ipv6_magic
     2.97%  [kernel]       [k] napi_gro_frags
     2.57%  [kernel]       [k] napi_reuse_skb