Message-ID: <CAL+tcoDJ00BzKAta6=M0K9JkoT80DVkSDvevGug28b7RrTf6hQ@mail.gmail.com>
Date: Wed, 29 Oct 2025 09:36:47 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
bjorn@...nel.org, magnus.karlsson@...el.com, maciej.fijalkowski@...el.com,
jonathan.lemon@...il.com, sdf@...ichev.me, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
horms@...nel.org, andrew+netdev@...n.ch, bpf@...r.kernel.org,
netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next 1/2] xsk: avoid using heavy lock when the pool is
not shared
On Wed, Oct 29, 2025 at 8:29 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Sat, 25 Oct 2025 14:53:09 +0800 Jason Xing wrote:
> >  static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
> >  {
> > +	bool lock = !list_is_singular(&pool->xsk_tx_list);
> >  	unsigned long flags;
> >  	int ret;
> >
> > -	spin_lock_irqsave(&pool->cq_lock, flags);
> > +	if (lock)
> > +		spin_lock_irqsave(&pool->cq_lock, flags);
> >  	ret = xskq_prod_reserve(pool->cq);
> > -	spin_unlock_irqrestore(&pool->cq_lock, flags);
> > +	if (lock)
> > +		spin_unlock_irqrestore(&pool->cq_lock, flags);
>
> Please explain in the commit message what guarantees that the list will
> remain singular until the function exits.
Thanks for bringing up a good point that I missed. I think I could take
xsk_tx_list_lock first so that xp_add_xsk() cannot add another socket to
the list while this function runs. I will figure it out soon.
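
Roughly, what I have in mind is something like the following (untested
sketch, only to illustrate the idea; the helper and field names are
taken from the quoted patch and the current tree):

static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
{
	unsigned long flags, list_flags;
	bool lock;
	int ret;

	/* Pin the membership of xsk_tx_list so that xp_add_xsk()
	 * cannot turn a singular list into a shared one while we
	 * reserve without holding cq_lock.
	 */
	spin_lock_irqsave(&pool->xsk_tx_list_lock, list_flags);
	lock = !list_is_singular(&pool->xsk_tx_list);
	if (lock)
		spin_lock_irqsave(&pool->cq_lock, flags);
	ret = xskq_prod_reserve(pool->cq);
	if (lock)
		spin_unlock_irqrestore(&pool->cq_lock, flags);
	spin_unlock_irqrestore(&pool->xsk_tx_list_lock, list_flags);

	return ret;
}

Whether taking xsk_tx_list_lock on this hot path is acceptable is part
of what I still need to figure out before sending a new version.
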
Thanks,
Jason
> --
> pw-bot: cr