Message-ID: <CANn89iJ+WXUcXna+s6eVh=-HJf2ExsdLTkXV=CTww9syR2KGVg@mail.gmail.com>
Date: Fri, 5 Apr 2024 14:38:41 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: Pavel Begunkov <asml.silence@...il.com>, netdev@...r.kernel.org, davem@...emloft.net,
dsahern@...nel.org, pabeni@...hat.com, kuba@...nel.org
Subject: Re: [PATCH RESEND net-next v3] net: cache for same cpu skb_attempt_defer_free
On Fri, Apr 5, 2024 at 2:29 PM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> Hello Eric,
>
> On Fri, Apr 5, 2024 at 8:18 PM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Fri, Apr 5, 2024 at 1:55 PM Pavel Begunkov <asml.silence@...il.com> wrote:
> > >
> > > On 4/5/24 09:46, Eric Dumazet wrote:
> > > > On Fri, Apr 5, 2024 at 1:38 AM Pavel Begunkov <asml.silence@...il.com> wrote:
> > > >>
> > > >> Optimise skb_attempt_defer_free() when run by the same CPU the skb was
> > > >> allocated on. Instead of __kfree_skb() -> kmem_cache_free() we can
> > > >> disable softirqs and put the buffer into cpu local caches.
> > > >>
> > > >> CPU bound TCP ping pong style benchmarking (i.e. netbench) showed a 1%
> > > >> throughput increase (392.2 -> 396.4 Krps). Cross checking with profiles,
> > > >> the total CPU share of skb_attempt_defer_free() dropped by 0.6%. Note,
> > > >> I'd expect the win to double with rx-only benchmarks, as the
> > > >> optimisation is for the receive path, but the test spends >55% of
> > > >> CPU doing writes.
> > > >>
> > > >> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
> > > >> ---
> > > >>
> > > >> v3: rebased, no changes otherwise
> > > >>
> > > >> v2: pass @napi_safe=true by using __napi_kfree_skb()
> > > >>
> > > >> net/core/skbuff.c | 15 ++++++++++++++-
> > > >> 1 file changed, 14 insertions(+), 1 deletion(-)
> > > >>
> > > >> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > > >> index 2a5ce6667bbb..c4d36e462a9a 100644
> > > >> --- a/net/core/skbuff.c
> > > >> +++ b/net/core/skbuff.c
> > > >> @@ -6968,6 +6968,19 @@ void __skb_ext_put(struct skb_ext *ext)
> > > >> EXPORT_SYMBOL(__skb_ext_put);
> > > >> #endif /* CONFIG_SKB_EXTENSIONS */
> > > >>
> > > >> +static void kfree_skb_napi_cache(struct sk_buff *skb)
> > > >> +{
> > > >> + /* if SKB is a clone, don't handle this case */
> > > >> + if (skb->fclone != SKB_FCLONE_UNAVAILABLE) {
> > > >> + __kfree_skb(skb);
> > > >> + return;
> > > >> + }
> > > >> +
> > > >> + local_bh_disable();
> > > >> + __napi_kfree_skb(skb, SKB_DROP_REASON_NOT_SPECIFIED);
> > > >
> > > > This needs to be SKB_CONSUMED
> > >
> > > You and other net folks have previously insisted that every patch
> > > should do only one thing at a time, without introducing unrelated
> > > changes. Considering this replaces __kfree_skb(), which passes
> > > SKB_DROP_REASON_NOT_SPECIFIED, that change should rather go in a
> > > separate patch.
> >
> > OK, I will send a patch myself.
> >
> > __kfree_skb(skb) had no drop reason yet.
>
> Can I ask one question: is it pointless to add a reason in this
> internal function? I looked at those callers and noticed there are no
> important reasons among them.
There are false positives at the moment whenever a frag_list is used in rx skbs
(small MAX_SKB_FRAGS, small MTU, big GRO size).
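For readers following the patch above, the decision logic of
kfree_skb_napi_cache() can be sketched in plain userspace C. Everything here
(the fake_skb type, fake_kfree_skb(), fake_napi_kfree_skb(), the list-based
cache) is a simplified stand-in for the kernel structures, not the real API;
only the branch on the fclone state mirrors the patch.

```c
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel types and helpers. */
enum fclone_state { SKB_FCLONE_UNAVAILABLE, SKB_FCLONE_ORIG, SKB_FCLONE_CLONE };

struct fake_skb {
	enum fclone_state fclone;
	struct fake_skb *next;
};

/* Models the per-CPU napi cache as a simple singly linked free list. */
static struct fake_skb *napi_cache;
static int slab_frees; /* counts skbs that took the regular free path */

static void fake_kfree_skb(struct fake_skb *skb)
{
	(void)skb;
	slab_frees++; /* models __kfree_skb() -> kmem_cache_free() */
}

static void fake_napi_kfree_skb(struct fake_skb *skb)
{
	/* models __napi_kfree_skb(): stash the skb in the local cache */
	skb->next = napi_cache;
	napi_cache = skb;
}

static void kfree_skb_napi_cache(struct fake_skb *skb)
{
	/* Clones come from a different slab cache, so they cannot be
	 * recycled through the napi cache; fall back to a regular free. */
	if (skb->fclone != SKB_FCLONE_UNAVAILABLE) {
		fake_kfree_skb(skb);
		return;
	}
	/* local_bh_disable()/local_bh_enable() are elided in this model. */
	fake_napi_kfree_skb(skb);
}
```

The point of the split is that only plain (non-fclone) skbs can safely be
recycled into the cpu-local cache; everything else keeps the old path.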