Message-Id: <20250825135342.53110-4-kerneljasonxing@gmail.com>
Date: Mon, 25 Aug 2025 21:53:36 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: davem@...emloft.net,
	edumazet@...gle.com,
	kuba@...nel.org,
	pabeni@...hat.com,
	bjorn@...nel.org,
	magnus.karlsson@...el.com,
	maciej.fijalkowski@...el.com,
	jonathan.lemon@...il.com,
	sdf@...ichev.me,
	ast@...nel.org,
	daniel@...earbox.net,
	hawk@...nel.org,
	john.fastabend@...il.com,
	horms@...nel.org,
	andrew+netdev@...n.ch
Cc: bpf@...r.kernel.org,
	netdev@...r.kernel.org,
	Jason Xing <kernelxing@...cent.com>
Subject: [PATCH net-next v2 3/9] xsk: introduce locked version of xskq_prod_write_addr_batch

From: Jason Xing <kernelxing@...cent.com>

Add an xskq_prod_write_addr_batch_locked() helper for batch xmit.

xskq_prod_write_addr_batch() is currently called from the napi poll
environment, which already runs in softirq context, so it does not
need any lock protection. A later patch in this series will use the
function in the generic xmit path, which runs outside irq context, so
the locked version added by this patch is needed.
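
A minimal usage sketch of the two variants (the surrounding caller and
the nb_descs variable are hypothetical, not part of this patch):

	/* napi poll path: already in softirq, unlocked variant is fine */
	nb_pkts = xskq_prod_write_addr_batch(pool->cq, descs, nb_descs);

	/* generic xmit path: process context, so serialize against the
	 * irq path by taking cq_lock with irqs disabled
	 */
	nb_pkts = xskq_prod_write_addr_batch_locked(pool, descs, nb_descs);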

Also make xskq_prod_write_addr_batch() return nb_pkts, the number of
skbs (rather than descriptors) that one batch xmit will consume, so
that the main batch xmit function can decide how many skbs to
allocate. Note that xskq_prod_write_addr_batch() was originally
designed for zerocopy mode, which only cares about the
descriptors/data themselves.
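
For reference, xp_mb_desc() (from include/net/xsk_buff_pool.h) tests
the XDP_PKT_CONTD option, which marks a descriptor as a non-final frag
of a multi-buffer packet; counting the descriptors for which it
returns false therefore counts complete packets:

	static inline bool xp_mb_desc(const struct xdp_desc *desc)
	{
		return desc->options & XDP_PKT_CONTD;
	}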

Signed-off-by: Jason Xing <kernelxing@...cent.com>
---
 net/xdp/xsk_queue.h | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 47741b4c285d..c444a1e29838 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -389,17 +389,37 @@ static inline int xskq_prod_reserve_addr(struct xsk_queue *q, u64 addr)
 	return 0;
 }
 
-static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
-					      u32 nb_entries)
+static inline u32 xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
+					     u32 nb_entries)
 {
 	struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
 	u32 i, cached_prod;
+	u32 nb_pkts = 0;
 
 	/* A, matches D */
 	cached_prod = q->cached_prod;
-	for (i = 0; i < nb_entries; i++)
+	for (i = 0; i < nb_entries; i++) {
 		ring->desc[cached_prod++ & q->ring_mask] = descs[i].addr;
+		if (!xp_mb_desc(&descs[i]))
+			nb_pkts++;
+	}
 	q->cached_prod = cached_prod;
+
+	return nb_pkts;
+}
+
+static inline u32
+xskq_prod_write_addr_batch_locked(struct xsk_buff_pool *pool,
+				  struct xdp_desc *descs, u32 nb_entries)
+{
+	unsigned long flags;
+	u32 nb_pkts;
+
+	spin_lock_irqsave(&pool->cq_lock, flags);
+	nb_pkts = xskq_prod_write_addr_batch(pool->cq, descs, nb_entries);
+	spin_unlock_irqrestore(&pool->cq_lock, flags);
+
+	return nb_pkts;
 }
 
 static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
-- 
2.41.3

