Message-ID: <CAL+tcoDssz0zPY3iKS5Zv6C0zq1ChbTZhRbwTPRq_6F0U6Jc8A@mail.gmail.com>
Date: Sat, 25 Oct 2025 17:28:00 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Stanislav Fomichev <stfomichev@...il.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com, jonathan.lemon@...il.com, sdf@...ichev.me,
ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
john.fastabend@...il.com, joe@...a.to, willemdebruijn.kernel@...il.com,
bpf@...r.kernel.org, netdev@...r.kernel.org,
Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v3 8/9] xsk: support generic batch xmit in copy mode
On Sat, Oct 25, 2025 at 2:52 AM Stanislav Fomichev <stfomichev@...il.com> wrote:
>
> On 10/21, Jason Xing wrote:
> > From: Jason Xing <kernelxing@...cent.com>
> >
> > - Move xs->mutex into xsk_generic_xmit to prevent race condition when
> > application manipulates generic_xmit_batch simultaneously.
> > - Enable batch xmit eventually.
> >
> > Make the whole feature work eventually.
> >
> > Signed-off-by: Jason Xing <kernelxing@...cent.com>
> > ---
> > net/xdp/xsk.c | 17 ++++++++---------
> > 1 file changed, 8 insertions(+), 9 deletions(-)
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index 1fa099653b7d..3741071c68fd 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -891,8 +891,6 @@ static int __xsk_generic_xmit_batch(struct xdp_sock *xs)
> > struct sk_buff *skb;
> > int err = 0;
> >
> > - mutex_lock(&xs->mutex);
> > -
> > /* Since we dropped the RCU read lock, the socket state might have changed. */
> > if (unlikely(!xsk_is_bound(xs))) {
> > err = -ENXIO;
> > @@ -982,21 +980,17 @@ static int __xsk_generic_xmit_batch(struct xdp_sock *xs)
> > if (sent_frame)
> > __xsk_tx_release(xs);
> >
> > - mutex_unlock(&xs->mutex);
> > return err;
> > }
> >
> > -static int __xsk_generic_xmit(struct sock *sk)
> > +static int __xsk_generic_xmit(struct xdp_sock *xs)
> > {
> > - struct xdp_sock *xs = xdp_sk(sk);
> > bool sent_frame = false;
> > struct xdp_desc desc;
> > struct sk_buff *skb;
> > u32 max_batch;
> > int err = 0;
> >
> > - mutex_lock(&xs->mutex);
> > -
> > /* Since we dropped the RCU read lock, the socket state might have changed. */
> > if (unlikely(!xsk_is_bound(xs))) {
> > err = -ENXIO;
> > @@ -1071,17 +1065,22 @@ static int __xsk_generic_xmit(struct sock *sk)
> > if (sent_frame)
> > __xsk_tx_release(xs);
> >
> > - mutex_unlock(&xs->mutex);
> > return err;
> > }
> >
> > static int xsk_generic_xmit(struct sock *sk)
> > {
> > + struct xdp_sock *xs = xdp_sk(sk);
> > int ret;
> >
> > /* Drop the RCU lock since the SKB path might sleep. */
> > rcu_read_unlock();
> > - ret = __xsk_generic_xmit(sk);
> > + mutex_lock(&xs->mutex);
> > + if (xs->batch.generic_xmit_batch)
> > + ret = __xsk_generic_xmit_batch(xs);
> > + else
> > + ret = __xsk_generic_xmit(xs);
>
> What's the point of keeping __xsk_generic_xmit? Can we have batch=1 by
> default and always use __xsk_generic_xmit_batch?
Spot on, thanks. I'll drop __xsk_generic_xmit entirely and always go through the new batch function then.
Thanks,
Jason