Message-ID: <CAL+tcoBv1sm+zQURFwS2E=pMdjWEMUUVBH9cLjBadZms+FvHNw@mail.gmail.com>
Date: Tue, 26 Aug 2025 08:26:41 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, bjorn@...nel.org, magnus.karlsson@...el.com,
jonathan.lemon@...il.com, sdf@...ichev.me, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
horms@...nel.org, andrew+netdev@...n.ch, bpf@...r.kernel.org,
netdev@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v2 4/9] xsk: extend xsk_build_skb() to support
passing an already allocated skb
On Tue, Aug 26, 2025 at 5:49 AM Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> On Mon, Aug 25, 2025 at 09:53:37PM +0800, Jason Xing wrote:
> > From: Jason Xing <kernelxing@...cent.com>
> >
> > Batch xmit mode needs to allocate and build skbs in one go. To avoid
> > reinventing the wheel, use xsk_build_skb() as the second half of the
> > initialization of each skb.
> >
> > The original xsk_build_skb() allocates a new skb itself by calling
> > sock_alloc_send_skb(), whether in copy mode or zerocopy mode. Add a
> > new parameter, an already allocated skb, so that other callers can
> > pass one in, to support the later batch xmit feature. In that case,
> > another skb-building function will allocate a new skb and pass it to
> > xsk_build_skb() to finish the rest of the building process, such as
> > initializing structures and copying data.
>
> are you saying you were able to avoid sock_alloc_send_skb() calls for
> the batching approach and that your socket memory accounting problems
> disappeared?
IIUC, memory accounting is still needed because it keeps things safe for
xsk [1]. As the description above says, I reused part of xsk_build_skb()
in the batching process.
[1]: https://lore.kernel.org/all/CAL+tcoBvLHFJJuYawJc3wY2aOrn5CQ3s5+sbC2M24_QNLyBHsg@mail.gmail.com/
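
To make the intended split concrete, a rough sketch (illustrative only,
not code from this series) of how the two call sites are meant to differ
after this patch:

	struct sk_buff *skb;

	/* existing per-descriptor path: pass NULL, so xsk_build_skb()
	 * still allocates via sock_alloc_send_skb() and the socket
	 * memory accounting stays as it is today
	 */
	skb = xsk_build_skb(xs, NULL, &desc);

	/* batch path (later patch): the caller has already allocated
	 * prealloc_skb, so xsk_build_skb() only finishes the
	 * initialization and copies the descriptor data
	 */
	skb = xsk_build_skb(xs, prealloc_skb, &desc);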
>
> >
> > Signed-off-by: Jason Xing <kernelxing@...cent.com>
> > ---
> > include/net/xdp_sock.h | 4 ++++
> > net/xdp/xsk.c | 23 ++++++++++++++++-------
> > 2 files changed, 20 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > index c2b05268b8ad..cbba880c27c3 100644
> > --- a/include/net/xdp_sock.h
> > +++ b/include/net/xdp_sock.h
> > @@ -123,6 +123,10 @@ struct xsk_tx_metadata_ops {
> > void (*tmo_request_launch_time)(u64 launch_time, void *priv);
> > };
> >
> > +
> > +struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
> > + struct sk_buff *allocated_skb,
> > + struct xdp_desc *desc);
>
> why do you export this?
Because patch 5 needs this in xsk_alloc_batch_skb().
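For reference, a minimal sketch of how the patch-5 caller is expected to
use it. The loop below is my illustration (the helper name and flow are
assumptions); only the xsk_build_skb() signature comes from this patch:

	/* hypothetical batch caller: skbs[] was bulk-allocated up front */
	static int xsk_build_batch(struct xdp_sock *xs, struct sk_buff **skbs,
				   struct xdp_desc *descs, u32 nb_pkts)
	{
		u32 i;

		for (i = 0; i < nb_pkts; i++) {
			/* xsk_build_skb() skips sock_alloc_send_skb()
			 * here and only finishes initializing skbs[i]
			 */
			skbs[i] = xsk_build_skb(xs, skbs[i], &descs[i]);
			if (IS_ERR(skbs[i]))
				return PTR_ERR(skbs[i]);
		}
		return 0;
	}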
Thanks,
Jason
>
> > #ifdef CONFIG_XDP_SOCKETS
> >
> > int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp);
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index 173ad49379c3..213d6100e405 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -605,6 +605,7 @@ static void xsk_drop_skb(struct sk_buff *skb)
> > }
> >
> > static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
> > + struct sk_buff *allocated_skb,
> > struct xdp_desc *desc)
> > {
> > struct xsk_buff_pool *pool = xs->pool;
> > @@ -618,7 +619,10 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
> > if (!skb) {
> > hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
> >
> > - skb = sock_alloc_send_skb(&xs->sk, hr, 1, &err);
> > + if (!allocated_skb)
> > + skb = sock_alloc_send_skb(&xs->sk, hr, 1, &err);
> > + else
> > + skb = allocated_skb;
> > if (unlikely(!skb))
> > return ERR_PTR(err);
> >
> > @@ -657,8 +661,9 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
> > return skb;
> > }
> >
> > -static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
> > - struct xdp_desc *desc)
> > +struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
> > + struct sk_buff *allocated_skb,
> > + struct xdp_desc *desc)
> > {
> > struct xsk_tx_metadata *meta = NULL;
> > struct net_device *dev = xs->dev;
> > @@ -667,7 +672,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
> > int err;
> >
> > if (dev->priv_flags & IFF_TX_SKB_NO_LINEAR) {
> > - skb = xsk_build_skb_zerocopy(xs, desc);
> > + skb = xsk_build_skb_zerocopy(xs, allocated_skb, desc);
> > if (IS_ERR(skb)) {
> > err = PTR_ERR(skb);
> > goto free_err;
> > @@ -683,8 +688,12 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
> > first_frag = true;
> >
> > hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
> > - tr = dev->needed_tailroom;
> > - skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
> > + if (!allocated_skb) {
> > + tr = dev->needed_tailroom;
> > + skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
> > + } else {
> > + skb = allocated_skb;
> > + }
> > if (unlikely(!skb))
> > goto free_err;
> >
> > @@ -818,7 +827,7 @@ static int __xsk_generic_xmit(struct sock *sk)
> > goto out;
> > }
> >
> > - skb = xsk_build_skb(xs, &desc);
> > + skb = xsk_build_skb(xs, NULL, &desc);
> > if (IS_ERR(skb)) {
> > err = PTR_ERR(skb);
> > if (err != -EOVERFLOW)
> > --
> > 2.41.3
> >