Message-ID: <fa307736d5448733f08a5a700bc9c647b383a553.camel@gmail.com>
Date: Mon, 09 Jan 2023 14:10:34 -0800
From: Alexander H Duyck <alexander.duyck@...il.com>
To: Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <jbrouer@...hat.com>
Cc: brouer@...hat.com, netdev@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>, edumazet@...gle.com,
pabeni@...hat.com
Subject: Re: [PATCH net-next 2/2] net: kfree_skb_list use kmem_cache_free_bulk

On Mon, 2023-01-09 at 11:34 -0800, Jakub Kicinski wrote:
> On Mon, 9 Jan 2023 13:24:54 +0100 Jesper Dangaard Brouer wrote:
> > > Also the lack of perf numbers is a bit of a red flag.
> > >
> >
> > I have run performance tests, but as I tried to explain in the
> > cover letter, for the qdisc use-case this code path is only activated
> > when we have overflow at enqueue. Thus, this doesn't translate directly
> > into performance numbers: the TX qdisc is 100% full because the
> > hardware device is backed up, and this patch makes us spend less time
> > on freeing memory.
>
> I guess it's quite subjective, so it'd be good to get a third opinion.
> To me that reads like a premature optimization. Saeed asked for perf
> numbers, too.
>
> Does anyone on the list want to cast the tie-break vote?
I'd say there is some value to be gained from this. Basically it means
less overhead for dropping packets if we hit a backed-up Tx path.
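
For anyone skimming, my rough mental model of the change is something
like the sketch below. To be clear, this is not the actual patch: the
BULK_SIZE value, the fclone handling, and the exact skb_release_all()
call are my assumptions about its shape, not a quote of the code.

#define BULK_SIZE 16	/* placeholder value, not from the patch */

static void kfree_skb_list_bulk(struct sk_buff *segs,
				enum skb_drop_reason reason)
{
	void *skbs[BULK_SIZE];
	unsigned int count = 0;

	while (segs) {
		struct sk_buff *next = segs->next;

		/* Skip skbs that still hold other references. */
		if (!skb_unref(segs)) {
			segs = next;
			continue;
		}

		/* fclones cannot go straight back to skbuff_head_cache,
		 * so take the regular freeing path for those.
		 */
		if (unlikely(segs->fclone != SKB_FCLONE_UNAVAILABLE)) {
			__kfree_skb(segs);
			segs = next;
			continue;
		}

		/* Release data/frags/destructor state, then stash the
		 * sk_buff itself for a bulk free.
		 */
		skb_release_all(segs, reason);
		skbs[count++] = segs;

		if (count == BULK_SIZE) {
			kmem_cache_free_bulk(skbuff_head_cache, count, skbs);
			count = 0;
		}
		segs = next;
	}

	if (count)
		kmem_cache_free_bulk(skbuff_head_cache, count, skbs);
}
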
> > I have been using the pktgen script
> > ./pktgen_bench_xmit_mode_queue_xmit.sh, which can inject packets at
> > the qdisc layer (invoking __dev_queue_xmit), and then used perf-record
> > to see that the SLUB overhead (__slab_free is top #4) is reduced.
>
> Right, pktgen wasting time while still delivering line rate is not of
> practical importance.
I suspect there are more real-world use cases out there, although to
test them you would probably need a congested network to really show
much of a benefit.
With pktgen I would be interested in seeing the qdisc drop numbers
with vs. without this patch. I would consider something like that
comparable to an XDP_DROP test, since all we are talking about is a
synthetic benchmark.
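
Also, to spell out why congestion is the trigger: as I recall, the qdisc
transmit path chains enqueue-time drops onto a to_free list and frees
the whole chain on the way out, roughly like the (heavily trimmed,
paraphrased from memory) snippets below. That final kfree_skb_list call
on a long chain of dropped packets is where the bulk free would help.

/* include/net/sch_generic.h: drops are chained, not freed inline */
static inline int __qdisc_drop(struct sk_buff *skb, struct sk_buff **to_free)
{
	skb->next = *to_free;
	*to_free = skb;

	return NET_XMIT_DROP;
}

/* net/core/dev.c: __dev_xmit_skb() frees the accumulated chain at once */
static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
				 struct net_device *dev,
				 struct netdev_queue *txq)
{
	struct sk_buff *to_free = NULL;
	int rc;

	/* ... locking and enqueue, which may add skb(s) to to_free ... */
	rc = dev_qdisc_enqueue(skb, q, &to_free, txq);
	/* ... */

	if (unlikely(to_free))
		kfree_skb_list_reason(to_free, SKB_DROP_REASON_QDISC_DROP);
	return rc;
}
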
>
> > > kfree_skb_list_bulk() ?
> >
> > Hmm, IMHO not really worth changing the function name.
> > kfree_skb_list() is called from more places than the qdisc
> > enqueue-overflow case, and those callers automatically benefit if we
> > keep the function name kfree_skb_list().
>
> To be clear - I was suggesting a simple
> s/kfree_skb_defer_local/kfree_skb_list_bulk/
> on the patch, just renaming the static helper.
>
> IMO now that we have multiple freeing optimizations, using "defer" for
> the TCP scheme and "bulk" for your prior slab bulk optimizations would
> improve clarity.
Rather than defer_local, would it maybe make more sense to name it
something like "kfree_skb_add_bulk"? Basically we are building onto the
list of buffers to free, so I figure something like an "add" or
"append" would make sense.