Message-Id: <20251128134601.54678-3-kerneljasonxing@gmail.com>
Date: Fri, 28 Nov 2025 21:46:00 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: davem@...emloft.net,
	edumazet@...gle.com,
	kuba@...nel.org,
	pabeni@...hat.com,
	bjorn@...nel.org,
	magnus.karlsson@...el.com,
	maciej.fijalkowski@...el.com,
	jonathan.lemon@...il.com,
	sdf@...ichev.me,
	ast@...nel.org,
	daniel@...earbox.net,
	hawk@...nel.org,
	john.fastabend@...il.com,
	horms@...nel.org,
	andrew+netdev@...n.ch
Cc: bpf@...r.kernel.org,
	netdev@...r.kernel.org,
	Jason Xing <kernelxing@...cent.com>
Subject: [PATCH net-next v3 2/3] xsk: use atomic operations around cached_prod for copy mode

From: Jason Xing <kernelxing@...cent.com>

Use atomic_try_cmpxchg() to replace the spin lock. Technically, CAS
(compare-and-swap) is better than a coarse-grained spin lock,
especially when we only need to perform a few simple operations. A
similar idea can be found in the recent commit 100dfa74cad9
("net: dev_queue_xmit() llist adoption"), which implements lockless
logic with the help of try_cmpxchg().

Signed-off-by: Jason Xing <kernelxing@...cent.com>
---
Paolo, sorry that I didn't try to move the lock into struct xsk_queue:
after investigation I reckon try_cmpxchg adds less overhead when
multiple xsks contend at this point. So I hope this approach can be
adopted.
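
For reference, here is a minimal user-space sketch of the same
CAS-based reserve loop, written with C11 atomics standing in for the
kernel's atomic_try_cmpxchg(); the struct and function names below
are illustrative only, not the kernel's struct xsk_queue:

#include <errno.h>
#include <stdatomic.h>

/* Illustrative stand-in for the producer side of the queue. */
struct demo_queue {
	_Atomic unsigned int cached_prod;
	unsigned int cached_cons;
	unsigned int nentries;
};

static int demo_reserve(struct demo_queue *q)
{
	unsigned int prod = atomic_load(&q->cached_prod);

	do {
		/* Full once the producer runs nentries ahead. */
		if ((int)(q->nentries - (prod - q->cached_cons)) <= 0)
			return -ENOSPC;
		/* On CAS failure, prod is refreshed with the observed
		 * value, so the free-space check is redone against a
		 * fresh snapshot before every retry.
		 */
	} while (!atomic_compare_exchange_weak(&q->cached_prod, &prod,
					       prod + 1));

	return 0;
}

The key property the patch relies on is the same in both APIs: a
failed compare-and-swap hands back the current value, so every retry
re-checks free space before attempting the increment again.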
---
 net/xdp/xsk.c       |  4 ++--
 net/xdp/xsk_queue.h | 17 ++++++++++++-----
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index bcfd400e9cf8..b63409b1422e 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -551,7 +551,7 @@ static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
 	int ret;
 
 	spin_lock(&pool->cq_cached_prod_lock);
-	ret = xskq_prod_reserve(pool->cq);
+	ret = xsk_cq_cached_prod_reserve(pool->cq);
 	spin_unlock(&pool->cq_cached_prod_lock);
 
 	return ret;
@@ -588,7 +588,7 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
 static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
 {
 	spin_lock(&pool->cq_cached_prod_lock);
-	xskq_prod_cancel_n(pool->cq, n);
+	atomic_sub(n, &pool->cq->cached_prod_atomic);
 	spin_unlock(&pool->cq_cached_prod_lock);
 }
 
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 44cc01555c0b..7fdc80e624d6 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -402,13 +402,20 @@ static inline void xskq_prod_cancel_n(struct xsk_queue *q, u32 cnt)
 	q->cached_prod -= cnt;
 }
 
-static inline int xskq_prod_reserve(struct xsk_queue *q)
+static inline int xsk_cq_cached_prod_reserve(struct xsk_queue *q)
 {
-	if (xskq_prod_is_full(q))
-		return -ENOSPC;
+	int free_entries;
+	u32 cached_prod;
+
+	do {
+		q->cached_cons = READ_ONCE(q->ring->consumer);
+		cached_prod = atomic_read(&q->cached_prod_atomic);
+		free_entries = q->nentries - (cached_prod - q->cached_cons);
+		if (free_entries <= 0)
+			return -ENOSPC;
+	} while (!atomic_try_cmpxchg(&q->cached_prod_atomic, &cached_prod,
+				     cached_prod + 1));
 
-	/* A, matches D */
-	q->cached_prod++;
 	return 0;
 }
 
-- 
2.41.3

