lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <20251208110404.qgMKQe77@linutronix.de>
Date: Mon, 8 Dec 2025 12:04:04 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Jesper Dangaard Brouer <hawk@...nel.org>
Cc: Jon Kohler <jon@...anix.com>, Jason Wang <jasowang@...hat.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Willem de Bruijn <willemdebruijn.kernel@...il.com>,
	Andrew Lunn <andrew+netdev@...n.ch>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	John Fastabend <john.fastabend@...il.com>,
	Stanislav Fomichev <sdf@...ichev.me>,
	open list <linux-kernel@...r.kernel.org>, bpf@...r.kernel.org,
	Alexander Lobakin <aleksander.lobakin@...el.com>
Subject: Re: [PATCH net-next v2 5/9] tun: use bulk NAPI cache allocation in
 tun_xdp_one

On 2025-12-05 14:21:51 [+0100], Jesper Dangaard Brouer wrote:
> 
> 
> On 05/12/2025 08.58, Sebastian Andrzej Siewior wrote:
> > On 2025-12-03 15:35:24 [+0000], Jon Kohler wrote:
> > > Thanks, Sebastian - so if I’m reading this correct, it *is* fine to do
> > > the two following patterns, outside of NAPI:
> > > 
> > >     local_bh_disable();
> > >     skb = napi_build_skb(buf, len);
> > >     local_bh_enable();
> > > 
> > >     local_bh_disable();
> > >     napi_consume_skb(skb, 1);
> > >     local_bh_enable();
> > > 
> > > If so, I wonder if it would be cleaner to have something like
> > >     build_skb_bh(buf, len);
> > > 
> > >     consume_skb_bh(skb, 1);
> > > 
> > > Then have those methods handle the local_bh enable/disable, so that
> > > the toggle was a property of a call, not a requirement of the call?
> > 
> > Having budget = 0 would be for non-NAPI users. So passing the 1 is
> > superfluous. You goal seems to be to re-use napi_alloc_cache. Right? And
> > this is better than skb_pool?
> > 
> > There is already napi_alloc_skb() which expects BH to be disabled and
> > netdev_alloc_skb() (and friends) which do disable BH if needed. I don't
> > see an equivalent for non-NAPI users. Haven't checked if any of these
> > could replace your napi_build_skb().
> > 
> > Historically non-NAPI users would be IRQ users and those can't do
> > local_bh_disable(). Therefore there is dev_kfree_skb_irq_reason() for
> > them. You need to delay the free for two reasons.
> > It seems pure software implementations didn't bother so far.
> > 
> > It might make sense to do napi_consume_skb() similar to
> > __netdev_alloc_skb() so that also budget=0 users fill the pool if this
> > is really a benefit.
> 
> I'm not convinced that this "optimization" will be an actual benefit on
> a busy system.  Let me explain the side-effect of local_bh_enable().

I'm arguing that this is the right thing to do, I am just saying that it
will not break anything as far as I am aware.

> Calling local_bh_enable() is adding a re-scheduling opportunity, e.g.
> for processing softirq.  For a benchmark this might not be noticeable as
> this is the main workload.  If there isn't any pending softirq this is
> also not noticeable.  In a more mixed workload (or packet storm) this
> re-scheduling will allow others to "steal" CPU cycles from you.

If there wouldn't be a bh/disable-enable then the context would be
process context and the softirq will be handled immediately.
Now it is "delayed" until the bh-enable.
The only advantage I see here is that the caller participates in
napi_alloc_cache.

> Thus, you might not actually save any cycles via this short BH-disable
> section.  I remember that I was saving around 19ns / 68cycles on a
> 3.6GHz E5-1650 CPU, by using this SKB recycle cache.  The cost of a re-
> scheduling event is like more.

It might expensive because you need to branch out, save/ restore
interrupts and check a few flags. This is something you wouldn't have to
do if you return it back to the memory allocator.

> My advice is to use the napi_* function when already running within a
>  BH-disabled section, as it makes sense to save those cycles
> (essentially reducing the time spend with BH-disabled).  Wrapping these
> napi_* function with BH-disabled just to use them outside NAPI feels
> wrong in so many ways.
> 
> The another reason why these napi_* functions belongs with NAPI is that
> netstack NIC drivers will (almost) always do TX completion first, that
> will free/consume some SKBs, and afterwards do RX processing that need
> to allocate SKBs for the incoming data frames.  Thus, keeping a cache of
> SKBs just released/consumed makes sense.  (p.s. in the past we always
> bulk free'ed all SKBs in the napi cache when exiting NAPI, as they would
> not be cache hot for next round).

Right. That is why I asked if using a skb-pool would be an advantage
since you would have a fix pool of skb for TUN/XDP.

> --Jesper

Sebastian

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ