Message-ID: <aQi/ZbAoVwBX9VCi@boxer>
Date: Mon, 3 Nov 2025 15:42:45 +0100
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: Jason Xing <kerneljasonxing@...il.com>
CC: <davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
<pabeni@...hat.com>, <bjorn@...nel.org>, <magnus.karlsson@...el.com>,
<jonathan.lemon@...il.com>, <sdf@...ichev.me>, <ast@...nel.org>,
<daniel@...earbox.net>, <hawk@...nel.org>, <john.fastabend@...il.com>,
<horms@...nel.org>, <andrew+netdev@...n.ch>, <bpf@...r.kernel.org>,
<netdev@...r.kernel.org>, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v2 1/2] xsk: do not enable/disable irq when
grabbing/releasing xsk_tx_list_lock
On Thu, Oct 30, 2025 at 08:06:45AM +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@...cent.com>
>
> The commit ac98d8aab61b ("xsk: wire upp Tx zero-copy functions") that
> originally introduced this lock placed the deletion in sk_destruct(),
> which can run in irq context, so the spin_lock_irqsave()/
> spin_unlock_irqrestore() pair was used. A later commit 541d7fdd7694
> ("xsk: proper AF_XDP socket teardown ordering") moved the deletion into
> xsk_release(), which only runs in process context. Since that commit,
> the irq-safe variants are no longer necessary.
>
> Now both users of xsk_tx_list_lock run only in process context, so
> there is no need to disable interrupts around the lock.
>
> Signed-off-by: Jason Xing <kernelxing@...cent.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> ---
> net/xdp/xsk_buff_pool.c | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index aa9788f20d0d..309075050b2a 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -12,26 +12,22 @@
>
> void xp_add_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs)
> {
> - unsigned long flags;
> -
> if (!xs->tx)
> return;
>
> - spin_lock_irqsave(&pool->xsk_tx_list_lock, flags);
> + spin_lock(&pool->xsk_tx_list_lock);
> list_add_rcu(&xs->tx_list, &pool->xsk_tx_list);
> - spin_unlock_irqrestore(&pool->xsk_tx_list_lock, flags);
> + spin_unlock(&pool->xsk_tx_list_lock);
> }
>
> void xp_del_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs)
> {
> - unsigned long flags;
> -
> if (!xs->tx)
> return;
>
> - spin_lock_irqsave(&pool->xsk_tx_list_lock, flags);
> + spin_lock(&pool->xsk_tx_list_lock);
> list_del_rcu(&xs->tx_list);
> - spin_unlock_irqrestore(&pool->xsk_tx_list_lock, flags);
> + spin_unlock(&pool->xsk_tx_list_lock);
> }
>
> void xp_destroy(struct xsk_buff_pool *pool)
> --
> 2.41.3
>