Message-ID: <aLWUIwK7yot1RlRq@boxer>
Date: Mon, 1 Sep 2025 14:40:03 +0200
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: Jason Xing <kerneljasonxing@...il.com>
CC: <bpf@...r.kernel.org>, <ast@...nel.org>, <daniel@...earbox.net>,
	<andrii@...nel.org>, <netdev@...r.kernel.org>, <magnus.karlsson@...el.com>,
	<stfomichev@...il.com>, Eryk Kubanski <e.kubanski@...tner.samsung.com>
Subject: Re: [PATCH v7 bpf] xsk: fix immature cq descriptor production

On Sat, Aug 30, 2025 at 06:30:23PM +0800, Jason Xing wrote:
> On Sat, Aug 30, 2025 at 2:10 AM Maciej Fijalkowski
> <maciej.fijalkowski@...el.com> wrote:
> >
> > Eryk reported an issue, referenced in the Closes: tag, where umem
> > addrs were being prematurely produced onto the pool's completion
> > queue. Let us make the skb's destructor responsible for producing
> > all addrs that a given skb used.
> >
> > The commit from the Fixes: tag introduced the buggy behavior; it was
> > not broken from day 1, but rather when XSK multi-buffer support got
> > introduced.
> >
> > In order to mitigate the performance impact as much as possible, mimic
> > the skb's split into linear and frag parts: store the first address
> > from the XSK descriptor at sk_buff::destructor_arg and keep fragment
> > addresses in a list at ::cb, with the list nodes allocated from a
> > kmem_cache. xsk_destruct_skb() will consume the address stored at
> > ::destructor_arg and optionally walk the list at ::cb if the count of
> > descriptors associated with this particular skb is bigger than 1.
> >
> > The previous approach, where a whole array for storing UMEM addresses
> > from XSK descriptors was pre-allocated while processing the first
> > fragment, yielded too big a performance regression for 64b traffic.
> > With the current approach the impact is much reduced in my tests, and
> > for jumbo frames I observed traffic being slower by at most 9%.
> >
> > Magnus suggested special-casing this way of processing for
> > XDP_SHARED_UMEM, so that we would identify it during bind and set
> > different hooks for the 'backpressure mechanism' on the CQ and for the
> > skb destructor, but given that the results looked promising on my side
> > I decided to keep a single data path for XSK generic Tx. I suppose
> > other auxiliary stuff such as the helpers introduced in this patch
> > would have had to land as well in order to make it work, so we might
> > have ended up with a noisier diff.
> >
> > Fixes: b7f72a30e9ac ("xsk: introduce wrappers and helpers for supporting multi-buffer in Tx path")
> > Reported-by: Eryk Kubanski <e.kubanski@...tner.samsung.com>
> > Closes: https://lore.kernel.org/netdev/20250530103456.53564-1-e.kubanski@partner.samsung.com/
> > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> > ---
> >
> > Jason, please test this v7 on your setup; I would appreciate it if
> > you could report results from your testbed. Thanks!
> 
> Thanks for reworking!
> 
> And I see the performance only goes down by 1-2% on my VM, which looks
> much better than before. But I cannot tell where the decrease comes
> from...

That's acceptable IMHO.

> 
> >
> > v1:
> > https://lore.kernel.org/bpf/20250702101648.1942562-1-maciej.fijalkowski@intel.com/
> > v2:
> > https://lore.kernel.org/bpf/20250705135512.1963216-1-maciej.fijalkowski@intel.com/
> > v3:
> > https://lore.kernel.org/bpf/20250806154127.2161434-1-maciej.fijalkowski@intel.com/
> > v4:
> > https://lore.kernel.org/bpf/20250813171210.2205259-1-maciej.fijalkowski@intel.com/
> > v5:
> > https://lore.kernel.org/bpf/aKXBHGPxjpBDKOHq@boxer/T/
> > v6:
> > https://lore.kernel.org/bpf/20250820154416.2248012-1-maciej.fijalkowski@intel.com/
> >
> > v1->v2:
> > * store addrs in array carried via destructor_arg instead having them
> >   stored in skb headroom; cleaner and less hacky approach;
> > v2->v3:
> > * use kmem_cache for xsk_addrs allocation (Stan/Olek)
> > * set err when xsk_addrs allocation fails (Dan)
> > * change xsk_addrs layout to avoid holes
> > * free xsk_addrs on error path
> > * rebase
> > v3->v4:
> > * have kmem_cache as percpu vars
> > * don't drop unnecessary braces (unrelated) (Stan)
> > * use idx + i in xskq_prod_write_addr (Stan)
> > * alloc kmem_cache on bind (Stan)
> > * keep num_descs as first member in xsk_addrs (Magnus)
> > * add ack from Magnus
> > v4->v5:
> > * have a single kmem_cache per xsk subsystem (Stan)
> > v5->v6:
> > * free skb in xsk_build_skb_zerocopy() when xsk_addrs allocation fails
> >   (Stan)
> > * unregister netdev notifier if creating kmem_cache fails (Stan)
> > v6->v7:
> > * don't include Acks from Magnus/Stan; let them review the new
> >   approach:)
> > * store first desc at sk_buff::destructor_arg and rest of frags in list
> >   stored at sk_buff::cb
> > * keep the kmem_cache but don't use it for allocation of whole array at
> >   one shot but rather alloc single nodes of list
> >
> > ---
> >  net/xdp/xsk.c       | 99 ++++++++++++++++++++++++++++++++++++++-------
> >  net/xdp/xsk_queue.h | 12 ++++++
> >  2 files changed, 97 insertions(+), 14 deletions(-)
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index 9c3acecc14b1..3d12d1fbda41 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -36,6 +36,20 @@
> >  #define TX_BATCH_SIZE 32
> >  #define MAX_PER_SOCKET_BUDGET 32
> >
> > +struct xsk_addr_node {
> > +       u64 addr;
> > +       struct list_head addr_node;
> > +};
> > +
> > +struct xsk_addr_head {
> > +       u32 num_descs;
> > +       struct list_head addrs_list;
> > +};
> > +
> > +static struct kmem_cache *xsk_tx_generic_cache;
> > +
> > +#define XSKCB(skb) ((struct xsk_addr_head *)((skb)->cb))
> > +
> >  void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool)
> >  {
> >         if (pool->cached_need_wakeup & XDP_WAKEUP_RX)
> > @@ -532,24 +546,41 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
> >         return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
> >  }
> >
> > -static int xsk_cq_reserve_addr_locked(struct xsk_buff_pool *pool, u64 addr)
> > +static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
> >  {
> >         unsigned long flags;
> >         int ret;
> >
> >         spin_lock_irqsave(&pool->cq_lock, flags);
> > -       ret = xskq_prod_reserve_addr(pool->cq, addr);
> > +       ret = xskq_prod_reserve(pool->cq);
> >         spin_unlock_irqrestore(&pool->cq_lock, flags);
> >
> >         return ret;
> >  }
> >
> > -static void xsk_cq_submit_locked(struct xsk_buff_pool *pool, u32 n)
> > +static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
> > +                                     struct sk_buff *skb)
> >  {
> > +       struct xsk_addr_node *pos, *tmp;
> >         unsigned long flags;
> > +       u32 i = 0;
> > +       u32 idx;
> >
> >         spin_lock_irqsave(&pool->cq_lock, flags);
> > -       xskq_prod_submit_n(pool->cq, n);
> > +       idx = xskq_get_prod(pool->cq);
> > +
> > +       xskq_prod_write_addr(pool->cq, idx, (u64)skb_shinfo(skb)->destructor_arg);
> > +       i++;
> > +
> > +       if (unlikely(XSKCB(skb)->num_descs > 1)) {
> 
> IIUC, the line you recently added is used to see if it matches that
> case? But the condition is still a bit loose. How about adding a
> stricter condition: testing whether the umem is shared or not?

This is a different case. You have to be able to deal with multi-buffer
frames regardless of the shared umem setting. Checking for shared umem
would have to happen at bind time, and then we would set up callbacks
appropriately. These callbacks would cover the work done against the CQ
in the xmit path and in the destructor.
Since tests on my side showed acceptable impact for multi-buffer traffic,
I decided to go with a single data path approach.
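
To illustrate why the num_descs > 1 check covers any multi-buffer frame,
here is a rough sketch of the Tx-side bookkeeping that pairs with
xsk_cq_submit_addr_locked() from the diff: the first descriptor address
goes to destructor_arg and every further frag address gets a list node
from xsk_tx_generic_cache. The helper name xsk_skb_store_addr() and the
exact flags are illustrative only, not the code from the patch (it also
assumes the xsk.c context and the structs added above):

static int xsk_skb_store_addr(struct sk_buff *skb, u64 addr)
{
	struct xsk_addr_node *node;

	if (!XSKCB(skb)->num_descs) {
		/* first (linear) descriptor: no allocation needed */
		skb_shinfo(skb)->destructor_arg = (void *)(uintptr_t)addr;
		INIT_LIST_HEAD(&XSKCB(skb)->addrs_list);
	} else {
		/* every additional frag: one cache node on the ::cb list */
		node = kmem_cache_alloc(xsk_tx_generic_cache, GFP_KERNEL);
		if (!node)
			return -ENOMEM;
		node->addr = addr;
		list_add_tail(&node->addr_node, &XSKCB(skb)->addrs_list);
	}

	XSKCB(skb)->num_descs++;
	return 0;
}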


I wrote a paragraph explaining this a bit in the commit message; let me
paste it here for some attention:

Magnus suggested special-casing this way of processing for
XDP_SHARED_UMEM, so that we would identify it during bind and set
different hooks for the 'backpressure mechanism' on the CQ and for the
skb destructor, but given that the results looked promising on my side
I decided to keep a single data path for XSK generic Tx. I suppose
other auxiliary stuff such as the helpers introduced in this patch
would have had to land as well in order to make it work, so we might
have ended up with a noisier diff.
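
For completeness, the special-cased alternative could have looked
roughly like the sketch below, with the two hooks picked once at bind
time and then called unconditionally from the xmit path. The ops struct
and the helper are made up purely for illustration; none of this exists
in the patch (it assumes the xsk.c context for the type declarations):

struct xsk_generic_tx_ops {
	/* 'backpressure mechanism' on the CQ: for a non-shared umem this
	 * could keep producing the addr up front (the old
	 * xsk_cq_reserve_addr_locked()), for a shared umem it would only
	 * reserve a slot (xsk_cq_reserve_locked() from this patch).
	 */
	int  (*cq_reserve)(struct xsk_buff_pool *pool, u64 addr);
	/* skb destructor: either just submit the reserved count, or
	 * produce the stashed addrs as xsk_cq_submit_addr_locked() does.
	 */
	void (*skb_destructor)(struct sk_buff *skb);
};

static const struct xsk_generic_tx_ops *
xsk_pick_generic_tx_ops(const struct xsk_generic_tx_ops *shared_ops,
			const struct xsk_generic_tx_ops *plain_ops,
			u16 bind_flags)
{
	/* decided once during bind, no per-packet branching afterwards */
	return (bind_flags & XDP_SHARED_UMEM) ? shared_ops : plain_ops;
}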

> 
> Thanks,
> Jason
> 
> > +               list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) {
> > +                       xskq_prod_write_addr(pool->cq, idx + i, pos->addr);
> > +                       i++;
> > +                       list_del(&pos->addr_node);
> > +                       kmem_cache_free(xsk_tx_generic_cache, pos);
> > +               }
> > +       }
> > +       xskq_prod_submit_n(pool->cq, i);
> >         spin_unlock_irqrestore(&pool->cq_lock, flags);
> >  }
> >
> > @@ -562,9 +593,14 @@ static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
> >         spin_unlock_irqrestore(&pool->cq_lock, flags);

(...)
