Message-ID: <CAL+tcoA=fdiB5exzgyueBi7kxHbsCxWKbs0Y5QO4WG3P4-6Aig@mail.gmail.com>
Date: Wed, 13 Aug 2025 07:46:17 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, 
	pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com, 
	jonathan.lemon@...il.com, sdf@...ichev.me, ast@...nel.org, 
	daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com, 
	horms@...nel.org, andrew+netdev@...n.ch, bpf@...r.kernel.org, 
	netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next 1/2] xsk: introduce XDP_GENERIC_XMIT_BATCH setsockopt

On Wed, Aug 13, 2025 at 12:40 AM Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> On Mon, Aug 11, 2025 at 09:12:35PM +0800, Jason Xing wrote:
> > From: Jason Xing <kernelxing@...cent.com>
> >
> > This patch prepares for later batch xmit in the generic path. Add a new
> > socket option that provides an alternative way to achieve higher overall
> > throughput.
> >
> > skb_batch will be used to store the skbs that are newly allocated in one
> > go in the xmit path.
>
> I don't think we need yet another setsockopt. You previously added a knob
> for manipulating max tx budget on generic xmit and that should be enough.
> I think that we should strive to make the batching approach the default
> path in xsk generic xmit.

You're right, that's the direction we should take. I considered it as
well before cooking the series, but gave up: my experiments show that in
some real workloads (not xdpsock) the batch process can increase
per-packet latency as a side effect. It reminded me that GRO, when it
was invented years ago, wasn't turned on by default either.
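
For readers skimming the thread, the validation rules in the hunk quoted
below can be modeled in plain userspace C. This is only an illustrative
sketch: `xsk_model` and `set_xmit_batch` are hypothetical names, not
kernel APIs, and locking is omitted since a single-threaded model does
not need it.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Hypothetical userspace model of the setsockopt handler's rules:
 * a batch larger than max_tx_budget is rejected with -EACCES, and
 * batch == 0 releases the array and disables batching. */
struct xsk_model {
	unsigned int max_tx_budget;
	unsigned int generic_xmit_batch;
	void **skb_batch;
};

static int set_xmit_batch(struct xsk_model *xs, unsigned int batch)
{
	void **new;

	if (batch > xs->max_tx_budget)
		return -EACCES;
	if (!batch) {
		/* Disable batching and drop the old array. */
		free(xs->skb_batch);
		xs->skb_batch = NULL;
		xs->generic_xmit_batch = 0;
		return 0;
	}
	new = calloc(batch, sizeof(*new));
	if (!new)
		return -ENOMEM;
	free(xs->skb_batch);	/* free(NULL) is a no-op, like kfree() */
	xs->skb_batch = new;
	xs->generic_xmit_batch = batch;
	return 0;
}
```

An application would exercise the same semantics through setsockopt(2)
with the new XDP_GENERIC_XMIT_BATCH option on an AF_XDP socket.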

Thanks,
Jason

>
> >
> > Signed-off-by: Jason Xing <kernelxing@...cent.com>
> > ---
> >  Documentation/networking/af_xdp.rst |  9 ++++++++
> >  include/net/xdp_sock.h              |  2 ++
> >  include/uapi/linux/if_xdp.h         |  1 +
> >  net/xdp/xsk.c                       | 32 +++++++++++++++++++++++++++++
> >  tools/include/uapi/linux/if_xdp.h   |  1 +
> >  5 files changed, 45 insertions(+)
> >
> > diff --git a/Documentation/networking/af_xdp.rst b/Documentation/networking/af_xdp.rst
> > index 50d92084a49c..1194bdfaf61e 100644
> > --- a/Documentation/networking/af_xdp.rst
> > +++ b/Documentation/networking/af_xdp.rst
> > @@ -447,6 +447,15 @@ mode to allow application to tune the per-socket maximum iteration for
> >  better throughput and less frequency of send syscall.
> >  Allowed range is [32, xs->tx->nentries].
> >
> > +XDP_GENERIC_XMIT_BATCH
> > +----------------------
> > +
> > +This option allows an application to use batch xmit in copy mode.
> > +Batching minimizes how often the queue lock has to be grabbed and
> > +released, improving overall throughput, although it may increase
> > +per-packet latency. The value must not be larger than
> > +xs->max_tx_budget.
> > +
> >  XDP_STATISTICS getsockopt
> >  -------------------------
> >
> > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > index ce587a225661..b5a3e37da8db 100644
> > --- a/include/net/xdp_sock.h
> > +++ b/include/net/xdp_sock.h
> > @@ -61,6 +61,7 @@ struct xdp_sock {
> >               XSK_BOUND,
> >               XSK_UNBOUND,
> >       } state;
> > +     struct sk_buff **skb_batch;
> >
> >       struct xsk_queue *tx ____cacheline_aligned_in_smp;
> >       struct list_head tx_list;
> > @@ -70,6 +71,7 @@ struct xdp_sock {
> >        * preventing other XSKs from being starved.
> >        */
> >       u32 tx_budget_spent;
> > +     u32 generic_xmit_batch;
> >
> >       /* Statistics */
> >       u64 rx_dropped;
> > diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
> > index 23a062781468..44cb72cd328e 100644
> > --- a/include/uapi/linux/if_xdp.h
> > +++ b/include/uapi/linux/if_xdp.h
> > @@ -80,6 +80,7 @@ struct xdp_mmap_offsets {
> >  #define XDP_STATISTICS                       7
> >  #define XDP_OPTIONS                  8
> >  #define XDP_MAX_TX_SKB_BUDGET                9
> > +#define XDP_GENERIC_XMIT_BATCH               10
> >
> >  struct xdp_umem_reg {
> >       __u64 addr; /* Start of packet data area */
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index 9c3acecc14b1..7a149f4ac273 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -1122,6 +1122,7 @@ static int xsk_release(struct socket *sock)
> >       xskq_destroy(xs->tx);
> >       xskq_destroy(xs->fq_tmp);
> >       xskq_destroy(xs->cq_tmp);
> > +     kfree(xs->skb_batch);
> >
> >       sock_orphan(sk);
> >       sock->sk = NULL;
> > @@ -1456,6 +1457,37 @@ static int xsk_setsockopt(struct socket *sock, int level, int optname,
> >               WRITE_ONCE(xs->max_tx_budget, budget);
> >               return 0;
> >       }
> > +     case XDP_GENERIC_XMIT_BATCH:
> > +     {
> > +             unsigned int batch, batch_alloc_len;
> > +             struct sk_buff **new;
> > +
> > +             if (optlen != sizeof(batch))
> > +                     return -EINVAL;
> > +             if (copy_from_sockptr(&batch, optval, sizeof(batch)))
> > +                     return -EFAULT;
> > +             if (batch > xs->max_tx_budget)
> > +                     return -EACCES;
> > +
> > +             mutex_lock(&xs->mutex);
> > +             if (!batch) {
> > +                     kfree(xs->skb_batch);
> > +                     xs->skb_batch = NULL;
> > +                     xs->generic_xmit_batch = 0;
> > +                     goto out;
> > +             }
> > +             batch_alloc_len = sizeof(struct sk_buff *) * batch;
> > +             new = kmalloc(batch_alloc_len, GFP_KERNEL);
> > +             if (!new) {
> > +                     mutex_unlock(&xs->mutex);
> > +                     return -ENOMEM;
> > +             }
> > +             kfree(xs->skb_batch);
> > +
> > +             xs->skb_batch = new;
> > +             xs->generic_xmit_batch = batch;
> > +out:
> > +             mutex_unlock(&xs->mutex);
> > +             return 0;
> > +     }
> >       default:
> >               break;
> >       }
> > diff --git a/tools/include/uapi/linux/if_xdp.h b/tools/include/uapi/linux/if_xdp.h
> > index 23a062781468..44cb72cd328e 100644
> > --- a/tools/include/uapi/linux/if_xdp.h
> > +++ b/tools/include/uapi/linux/if_xdp.h
> > @@ -80,6 +80,7 @@ struct xdp_mmap_offsets {
> >  #define XDP_STATISTICS                       7
> >  #define XDP_OPTIONS                  8
> >  #define XDP_MAX_TX_SKB_BUDGET                9
> > +#define XDP_GENERIC_XMIT_BATCH               10
> >
> >  struct xdp_umem_reg {
> >       __u64 addr; /* Start of packet data area */
> > --
> > 2.41.3
> >
