Message-Id: <20251209031628.28429-1-kerneljasonxing@gmail.com>
Date: Tue, 9 Dec 2025 11:16:28 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: davem@...emloft.net,
edumazet@...gle.com,
kuba@...nel.org,
pabeni@...hat.com,
bjorn@...nel.org,
magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com,
jonathan.lemon@...il.com,
sdf@...ichev.me,
ast@...nel.org,
daniel@...earbox.net,
hawk@...nel.org,
john.fastabend@...il.com
Cc: bpf@...r.kernel.org,
netdev@...r.kernel.org,
Jason Xing <kernelxing@...cent.com>
Subject: [PATCH RFC net-next v4] xsk: move cq_cached_prod_lock to avoid touching a cacheline in sending path
From: Jason Xing <kernelxing@...cent.com>
We (Paolo and I) noticed that touching an extra cacheline for
cq_cached_prod_lock in the sending path hurts performance. After moving
the lock from struct xsk_buff_pool to struct xsk_queue, performance
improves by ~5%, as observed with xdpsock.
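To make the cacheline argument concrete, below is a minimal userspace
sketch (not the real kernel structs; exact padding and the spinlock size
depend on the config) showing that, assuming 64-byte cachelines, a lock
placed at the tail of a trimmed-down xsk_queue lands on the same line as
cached_prod, which xskq_prod_reserve() already writes, so the TX path no
longer has to pull in a separate xsk_buff_pool cacheline just for the
lock. In the kernel itself, pahole -C xsk_queue is the usual way to
confirm the real layout.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Trimmed-down stand-in for struct xsk_queue; the field order follows
 * the struct touched by the patch below, with a plain int standing in
 * for spinlock_t.
 */
struct mock_xsk_queue {
	uint32_t ring_mask;
	uint32_t nentries;
	uint32_t cached_prod;		/* written by xskq_prod_reserve() */
	uint32_t cached_cons;
	void *ring;
	uint64_t invalid_descs;
	uint64_t queue_empty_descs;
	size_t ring_vmalloc_size;
	int cq_cached_prod_lock;	/* stand-in for spinlock_t */
};

int main(void)
{
	printf("cached_prod         @ byte %zu (cacheline %zu)\n",
	       offsetof(struct mock_xsk_queue, cached_prod),
	       offsetof(struct mock_xsk_queue, cached_prod) / 64);
	printf("cq_cached_prod_lock @ byte %zu (cacheline %zu)\n",
	       offsetof(struct mock_xsk_queue, cq_cached_prod_lock),
	       offsetof(struct mock_xsk_queue, cq_cached_prod_lock) / 64);
	return 0;
}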
An alternative approach [1] is to use atomic_try_cmpxchg() for the same
effect, but unfortunately I don't have solid performance numbers proving
the atomic approach is better than the current patch. Its advantage is
reducing contention time among multiple xsks sharing the same pool; its
disadvantage is worse maintainability. The full discussion can be found
at the following link.
[1]: https://lore.kernel.org/all/20251128134601.54678-1-kerneljasonxing@gmail.com/
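For reference, the lockless reserve that the atomic alternative refers to
could look roughly like the sketch below. The function name and the
simplified fullness check are assumptions for illustration, not the exact
code proposed in [1]; in particular, the real code would also refresh
cached_cons from the ring before declaring the queue full.

/* Hypothetical lockless counterpart of xsk_cq_reserve_locked(): multiple
 * producers race on cached_prod with try_cmpxchg() instead of
 * serializing on cq_cached_prod_lock.
 */
static int xskq_prod_reserve_lockless(struct xsk_queue *q)
{
	u32 prod = READ_ONCE(q->cached_prod);

	do {
		/* Simplified fullness check; see xskq_prod_is_full(). */
		if (prod - q->cached_cons >= q->nentries)
			return -ENOSPC;
	} while (!try_cmpxchg(&q->cached_prod, &prod, prod + 1));

	return 0;
}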
Suggested-by: Paolo Abeni <pabeni@...hat.com>
Signed-off-by: Jason Xing <kernelxing@...cent.com>
---
Q: since net-next will not reopen until next year, should I post this
patch targeting bpf-next instead?
RFC v4
Link: https://lore.kernel.org/all/20251128134601.54678-1-kerneljasonxing@gmail.com/
1. use the lock-moving approach instead (Paolo, Magnus)
2. Add credit to Paolo, thanks!
v3
Link: https://lore.kernel.org/all/20251125085431.4039-1-kerneljasonxing@gmail.com/
1. fix a race issue that cannot be resolved by simple separate atomic
operations. This revision only updates patch [2/3] and uses the
try_cmpxchg method to avoid that problem. (Paolo)
2. update the commit log accordingly.
v2
Link: https://lore.kernel.org/all/20251124080858.89593-1-kerneljasonxing@gmail.com/
1. use separate functions rather than branches within shared routines. (Maciej)
2. make each patch as simple as possible for easier review
---
include/net/xsk_buff_pool.h | 5 -----
net/xdp/xsk.c | 8 ++++----
net/xdp/xsk_buff_pool.c | 2 +-
net/xdp/xsk_queue.h | 5 +++++
4 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 92a2358c6ce3..0b1abdb99c9e 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -90,11 +90,6 @@ struct xsk_buff_pool {
* destructor callback.
*/
spinlock_t cq_prod_lock;
- /* Mutual exclusion of the completion ring in the SKB mode.
- * Protect: when sockets share a single cq when the same netdev
- * and queue id is shared.
- */
- spinlock_t cq_cached_prod_lock;
struct xdp_buff_xsk *free_heads[];
};
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index f093c3453f64..7613887b4122 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -543,9 +543,9 @@ static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
{
int ret;
- spin_lock(&pool->cq_cached_prod_lock);
+ spin_lock(&pool->cq->cq_cached_prod_lock);
ret = xskq_prod_reserve(pool->cq);
- spin_unlock(&pool->cq_cached_prod_lock);
+ spin_unlock(&pool->cq->cq_cached_prod_lock);
return ret;
}
@@ -619,9 +619,9 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
{
- spin_lock(&pool->cq_cached_prod_lock);
+ spin_lock(&pool->cq->cq_cached_prod_lock);
xskq_prod_cancel_n(pool->cq, n);
- spin_unlock(&pool->cq_cached_prod_lock);
+ spin_unlock(&pool->cq->cq_cached_prod_lock);
}
INDIRECT_CALLABLE_SCOPE
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 51526034c42a..127c17e02384 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -91,7 +91,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
INIT_LIST_HEAD(&pool->xsk_tx_list);
spin_lock_init(&pool->xsk_tx_list_lock);
spin_lock_init(&pool->cq_prod_lock);
- spin_lock_init(&pool->cq_cached_prod_lock);
+ spin_lock_init(&xs->cq_tmp->cq_cached_prod_lock);
refcount_set(&pool->users, 1);
pool->fq = xs->fq_tmp;
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 1eb8d9f8b104..ec08d9c102b1 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -46,6 +46,11 @@ struct xsk_queue {
u64 invalid_descs;
u64 queue_empty_descs;
size_t ring_vmalloc_size;
+	/* Mutual exclusion of the completion ring in SKB mode.
+	 * Protects the cached producer when sockets sharing the same
+	 * netdev and queue id share a single cq.
+	 */
+ spinlock_t cq_cached_prod_lock;
};
struct parsed_desc {
--
2.41.3