Message-ID: <CAL+tcoDgNWBehTrtYhhdu7qBRkNLNH4FJV5T0an0tmLP+yvtqQ@mail.gmail.com>
Date: Tue, 13 Jan 2026 13:33:43 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com, jonathan.lemon@...il.com, sdf@...ichev.me,
ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
john.fastabend@...il.com
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v6 2/2] xsk: move cq_cached_prod_lock to avoid
touching a cacheline in sending path
On Sun, Jan 4, 2026 at 9:21 AM Jason Xing <kerneljasonxing@...il.com> wrote:
>
> From: Jason Xing <kernelxing@...cent.com>
>
> We (Paolo and I) noticed that touching an extra cacheline due to
> cq_cached_prod_lock in the sending path hurts performance. After
> moving the lock from struct xsk_buff_pool to struct xsk_queue, the
> performance increases by ~5%, as observed with xdpsock.
>
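Not part of the patch text itself, but to make the layout change concrete,
here is a minimal sketch. Only cq_cached_prod_lock, struct xsk_buff_pool
and struct xsk_queue are taken from the patch; the surrounding field names,
field order and the post-move lock name are simplified assumptions for
illustration:

	/* Before (sketch): the lock sits in the pool, so the TX fast
	 * path pulls in a pool cacheline it would otherwise not touch.
	 */
	struct xsk_buff_pool {
		/* ... hot TX fields ... */
		spinlock_t cq_cached_prod_lock;
		/* ... */
	};

	/* After (sketch): the lock lives in the completion queue next
	 * to the cached producer index the same path already updates,
	 * so both land on a cacheline that is hot anyway.
	 */
	struct xsk_queue {
		u32 ring_mask;
		u32 nentries;
		u32 cached_prod;
		spinlock_t cached_prod_lock;	/* moved here (assumed name) */
		u32 cached_cons;
		/* ... */
	};
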
> An alternative approach [1] would be to use atomic_try_cmpxchg() to
> achieve the same effect. Unfortunately, I don't have clear performance
> numbers proving the atomic approach is better than the current patch.
> Its advantage is reducing contention time among multiple xsks sharing
> the same pool, while its disadvantage is harder maintenance. The full
> discussion can be found at the following link.
>
> [1]: https://lore.kernel.org/all/20251128134601.54678-1-kerneljasonxing@gmail.com/
>
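For readers weighing the trade-off, a minimal sketch of what the
atomic_try_cmpxchg() variant could look like. The helper name and the
atomic_t type are assumptions made for illustration; in the real ring
code cached_prod is a plain u32 guarded by the lock, so this is not the
rejected patch itself:

	#include <linux/atomic.h>
	#include <linux/types.h>

	/* Hypothetical helper: reserve one completion-queue slot
	 * locklessly. On cmpxchg failure, prod is refreshed with the
	 * current value and the fullness check is retried.
	 */
	static bool xskq_prod_try_reserve(atomic_t *cached_prod,
					  u32 cached_cons, u32 nentries)
	{
		int prod = atomic_read(cached_prod);

		do {
			/* full if prod ran nentries ahead of the consumer */
			if ((u32)prod - cached_cons >= nentries)
				return false;
		} while (!atomic_try_cmpxchg(cached_prod, &prod, prod + 1));

		return true;
	}
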
> Suggested-by: Paolo Abeni <pabeni@...hat.com>
> Signed-off-by: Jason Xing <kernelxing@...cent.com>

Hi Magnus, Maciej and Stanislav,

Any feedback on the whole series?

Thanks,
Jason