Message-ID: <aWgDK4Zq7NShgql5@mini-arch>
Date: Wed, 14 Jan 2026 12:57:15 -0800
From: Stanislav Fomichev <stfomichev@...il.com>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com, jonathan.lemon@...il.com,
sdf@...ichev.me, ast@...nel.org, daniel@...earbox.net,
hawk@...nel.org, john.fastabend@...il.com, bpf@...r.kernel.org,
netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v6 2/2] xsk: move cq_cached_prod_lock to avoid
touching a cacheline in sending path
On 01/13, Jason Xing wrote:
> On Sun, Jan 4, 2026 at 9:21 AM Jason Xing <kerneljasonxing@...il.com> wrote:
> >
> > From: Jason Xing <kernelxing@...cent.com>
> >
> > We (Paolo and I) noticed that touching an extra cacheline in the sending
> > path, due to cq_cached_prod_lock, hurts performance. After moving the lock
> > from struct xsk_buff_pool to struct xsk_queue, performance improves by ~5%,
> > as observed with xdpsock.
> >
> > An alternative approach [1] would be to use atomic_try_cmpxchg() to achieve
> > the same effect. Unfortunately, I don't have clear performance numbers
> > showing that the atomic approach beats the current patch. Its advantage is
> > reducing contention among multiple xsks sharing the same pool; its
> > disadvantage is poorer maintainability. The full discussion can be found at
> > the following link.
> >
> > [1]: https://lore.kernel.org/all/20251128134601.54678-1-kerneljasonxing@gmail.com/
> >
> > Suggested-by: Paolo Abeni <pabeni@...hat.com>
> > Signed-off-by: Jason Xing <kernelxing@...cent.com>
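
[Editor's note: below is a minimal, hypothetical userspace sketch of the
layout idea described in the quoted patch above, i.e. keeping the lock on
the same cacheline as the cached producer index it protects so the TX hot
path touches one line instead of two. The struct names pool_like and
queue_like are illustrative only (the real change is to struct
xsk_buff_pool and struct xsk_queue in net/xdp), and pthread_mutex_t merely
stands in for the kernel spinlock.]

    /* Build: cc -O2 -pthread layout_sketch.c -o layout_sketch */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <pthread.h>

    /* "Before": the lock lives in the pool struct, away from the cached
     * producer index it protects, so the send path dirties two separate
     * cachelines (one for the index, one for the lock). */
    struct pool_like {
            char other_pool_state[64];          /* stand-in for unrelated pool fields */
            pthread_mutex_t cq_cached_prod_lock; /* lands on a different cacheline */
    };

    /* "After": the lock sits next to the cached producer in the queue
     * struct, so taking the lock and bumping the index touch one line. */
    struct queue_like {
            uint32_t cached_prod;
            pthread_mutex_t cq_cached_prod_lock;
    };

    int main(void)
    {
            printf("queue_like: cached_prod at offset %zu, lock at offset %zu\n",
                   offsetof(struct queue_like, cached_prod),
                   offsetof(struct queue_like, cq_cached_prod_lock));
            printf("pool_like:  lock at offset %zu\n",
                   offsetof(struct pool_like, cq_cached_prod_lock));
            return 0;
    }

Printing the field offsets makes the placement visible: in the "after"
layout the lock and the cached producer fall within the same 64-byte line,
whereas in the "before" layout the lock starts past the first line.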
>
> Hi Magnus, Maciej and Stanislav,
>
> Any feedback on the whole series?
LGTM, thanks! (I'm gonna be a bit slow on the mailing list in Jan/Feb)