Message-Id: <20251021131209.41491-9-kerneljasonxing@gmail.com>
Date: Tue, 21 Oct 2025 21:12:08 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: davem@...emloft.net,
edumazet@...gle.com,
kuba@...nel.org,
pabeni@...hat.com,
bjorn@...nel.org,
magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com,
jonathan.lemon@...il.com,
sdf@...ichev.me,
ast@...nel.org,
daniel@...earbox.net,
hawk@...nel.org,
john.fastabend@...il.com,
joe@...a.to,
willemdebruijn.kernel@...il.com
Cc: bpf@...r.kernel.org,
netdev@...r.kernel.org,
Jason Xing <kernelxing@...cent.com>
Subject: [PATCH net-next v3 8/9] xsk: support generic batch xmit in copy mode
From: Jason Xing <kernelxing@...cent.com>
- Move xs->mutex up into xsk_generic_xmit() so that the path selection and
the transmit run under one lock; otherwise an application changing
generic_xmit_batch concurrently could race with an in-flight xmit.
- Enable batch xmit, making the whole feature functional.
Signed-off-by: Jason Xing <kernelxing@...cent.com>
---
net/xdp/xsk.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 1fa099653b7d..3741071c68fd 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -891,8 +891,6 @@ static int __xsk_generic_xmit_batch(struct xdp_sock *xs)
struct sk_buff *skb;
int err = 0;
- mutex_lock(&xs->mutex);
-
/* Since we dropped the RCU read lock, the socket state might have changed. */
if (unlikely(!xsk_is_bound(xs))) {
err = -ENXIO;
@@ -982,21 +980,17 @@ static int __xsk_generic_xmit_batch(struct xdp_sock *xs)
if (sent_frame)
__xsk_tx_release(xs);
- mutex_unlock(&xs->mutex);
return err;
}
-static int __xsk_generic_xmit(struct sock *sk)
+static int __xsk_generic_xmit(struct xdp_sock *xs)
{
- struct xdp_sock *xs = xdp_sk(sk);
bool sent_frame = false;
struct xdp_desc desc;
struct sk_buff *skb;
u32 max_batch;
int err = 0;
- mutex_lock(&xs->mutex);
-
/* Since we dropped the RCU read lock, the socket state might have changed. */
if (unlikely(!xsk_is_bound(xs))) {
err = -ENXIO;
@@ -1071,17 +1065,22 @@ static int __xsk_generic_xmit(struct sock *sk)
if (sent_frame)
__xsk_tx_release(xs);
- mutex_unlock(&xs->mutex);
return err;
}
static int xsk_generic_xmit(struct sock *sk)
{
+ struct xdp_sock *xs = xdp_sk(sk);
int ret;
/* Drop the RCU lock since the SKB path might sleep. */
rcu_read_unlock();
- ret = __xsk_generic_xmit(sk);
+ mutex_lock(&xs->mutex);
+ if (xs->batch.generic_xmit_batch)
+ ret = __xsk_generic_xmit_batch(xs);
+ else
+ ret = __xsk_generic_xmit(xs);
+ mutex_unlock(&xs->mutex);
/* Reaquire RCU lock before going into common code. */
rcu_read_lock();
--
2.41.3