Message-ID: <aPvK0pFuBpplxbXX@mini-arch>
Date: Fri, 24 Oct 2025 11:52:02 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Jason Xing <kerneljasonxing@...il.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com, jonathan.lemon@...il.com,
sdf@...ichev.me, ast@...nel.org, daniel@...earbox.net,
hawk@...nel.org, john.fastabend@...il.com, joe@...a.to,
willemdebruijn.kernel@...il.com, bpf@...r.kernel.org,
netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v3 8/9] xsk: support generic batch xmit in copy mode
On 10/21, Jason Xing wrote:
> From: Jason Xing <kernelxing@...cent.com>
>
> - Move xs->mutex into xsk_generic_xmit to prevent race condition when
> application manipulates generic_xmit_batch simultaneously.
> - Enable batch xmit eventually.
>
> Make the whole feature work eventually.
>
> Signed-off-by: Jason Xing <kernelxing@...cent.com>
> ---
> net/xdp/xsk.c | 17 ++++++++---------
> 1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 1fa099653b7d..3741071c68fd 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -891,8 +891,6 @@ static int __xsk_generic_xmit_batch(struct xdp_sock *xs)
> struct sk_buff *skb;
> int err = 0;
>
> - mutex_lock(&xs->mutex);
> -
> /* Since we dropped the RCU read lock, the socket state might have changed. */
> if (unlikely(!xsk_is_bound(xs))) {
> err = -ENXIO;
> @@ -982,21 +980,17 @@ static int __xsk_generic_xmit_batch(struct xdp_sock *xs)
> if (sent_frame)
> __xsk_tx_release(xs);
>
> - mutex_unlock(&xs->mutex);
> return err;
> }
>
> -static int __xsk_generic_xmit(struct sock *sk)
> +static int __xsk_generic_xmit(struct xdp_sock *xs)
> {
> - struct xdp_sock *xs = xdp_sk(sk);
> bool sent_frame = false;
> struct xdp_desc desc;
> struct sk_buff *skb;
> u32 max_batch;
> int err = 0;
>
> - mutex_lock(&xs->mutex);
> -
> /* Since we dropped the RCU read lock, the socket state might have changed. */
> if (unlikely(!xsk_is_bound(xs))) {
> err = -ENXIO;
> @@ -1071,17 +1065,22 @@ static int __xsk_generic_xmit(struct sock *sk)
> if (sent_frame)
> __xsk_tx_release(xs);
>
> - mutex_unlock(&xs->mutex);
> return err;
> }
>
> static int xsk_generic_xmit(struct sock *sk)
> {
> + struct xdp_sock *xs = xdp_sk(sk);
> int ret;
>
> /* Drop the RCU lock since the SKB path might sleep. */
> rcu_read_unlock();
> - ret = __xsk_generic_xmit(sk);
> + mutex_lock(&xs->mutex);
> + if (xs->batch.generic_xmit_batch)
> + ret = __xsk_generic_xmit_batch(xs);
> + else
> + ret = __xsk_generic_xmit(xs);
What's the point of keeping __xsk_generic_xmit? Can we have batch=1 by
default and always use __xsk_generic_xmit_batch?
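As a userspace sketch of what I mean (illustrative only, not the kernel code; `xmit_one`/`xmit_batch` are made-up names): a single batch-oriented path with the batch size defaulting to 1 degenerates to the per-descriptor loop, so the separate non-batch function could go away.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for transmitting one descriptor; counts "sent" frames. */
static int xmit_one(int *sent)
{
	(*sent)++;
	return 0;
}

/* One batch path for everything: batch == 1 (or 0, treated as the
 * default) behaves exactly like the old per-descriptor loop. */
static int xmit_batch(size_t ndesc, size_t batch, int *sent)
{
	size_t i = 0;

	if (batch == 0)
		batch = 1;	/* default: same behavior as the non-batch path */

	while (i < ndesc) {
		size_t n = ndesc - i < batch ? ndesc - i : batch;
		size_t j;

		for (j = 0; j < n; j++) {
			int err = xmit_one(sent);

			if (err)
				return err;
		}
		i += n;
	}
	return 0;
}
```

With batch = 1 the outer loop sends one frame per iteration; with a larger batch it amortizes the per-iteration overhead, and either way there is only one code path to maintain.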