Message-ID: <CAL+tcoBwKd9v6A8j_6wgN7y8Y-_4N6VM-Pdnv4x49eUx5RcGag@mail.gmail.com>
Date: Thu, 30 Oct 2025 07:43:53 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, 
	pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com, 
	jonathan.lemon@...il.com, sdf@...ichev.me, ast@...nel.org, 
	daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com, 
	horms@...nel.org, andrew+netdev@...n.ch, bpf@...r.kernel.org, 
	netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next 1/2] xsk: avoid using heavy lock when the pool is
 not shared

On Wed, Oct 29, 2025 at 11:48 PM Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> On Sat, Oct 25, 2025 at 02:53:09PM +0800, Jason Xing wrote:
> > From: Jason Xing <kernelxing@...cent.com>
> >
> > Commit f09ced4053bc ("xsk: Fix race in SKB mode transmit with
> > shared cq") takes a heavy lock (spin_lock_irqsave) to cover the
> > shared-pool scenario, where multiple sockets share the same pool.
> >
> > This hurts the case where the pool is owned by a single xsk. This
> > patch distinguishes the two cases by checking whether the xsk list
> > contains only one xsk. If so, the pool is exclusive and we do not
> > need to hold the lock or disable IRQs at all. The benefit is that
> > these two costly operations are no longer executed on every send.
>
> Even with a single CQ producer we need to keep the related code within
> a critical section. One core can be in process context via sendmsg(),
> where for some reason xmit failed and the driver consumed the skb
> (destructor called).
>
> Another core can at the same time be calling the destructor on a
> different skb that was successfully xmitted, doing the Tx completion
> via the driver's NAPI. This means that without locking the SPSC
> concept would be violated.
>
> So I'm afraid I have to nack this.

But that race cannot happen around cq->cached_prod: all the places
where cached_prod is modified run in process context. I've already
pointed out the subtle cases in patch [2/2].

SPSC is about the global producer and consumer state that is visible
to both sides, not the local/cached ones. That is why we can apply a
lockless policy in this patch when the pool is exclusive, and why we
can use a smaller lock as patch [2/2] shows.

As to preventing the case Jakub mentioned, so far I cannot find a good
solution short of introducing a new option that limits an xsk to
binding to only one unique pool, and that is probably not worth it.
That is why I will drop this patch in v2.

Thanks,
Jason
