Message-ID: <aGwUsDK0u3vegaYq@mini-arch>
Date: Mon, 7 Jul 2025 11:40:48 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: Maciej Fijalkowski <maciej.fijalkowski@...el.com>, bpf@...r.kernel.org,
ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
netdev@...r.kernel.org, magnus.karlsson@...el.com,
Eryk Kubanski <e.kubanski@...tner.samsung.com>
Subject: Re: [PATCH v2 bpf] xsk: fix immature cq descriptor production
On 07/07, Alexander Lobakin wrote:
> From: Stanislav Fomichev <stfomichev@...il.com>
> Date: Mon, 7 Jul 2025 08:06:21 -0700
>
> > On 07/07, Alexander Lobakin wrote:
> >> From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> >> Date: Sat, 5 Jul 2025 15:55:12 +0200
> >>
> >>> Eryk reported an issue that I have put under the Closes: tag, related
> >>> to umem addrs being prematurely produced onto pool's completion queue.
> >>> Let us make the skb's destructor responsible for producing all addrs
> >>> that the given skb used.
> >>>
> >>> The commit from the Fixes: tag introduced the buggy behavior; it was
> >>> not broken from day 1, but rather when xsk multi-buffer support got
> >>> introduced.
> >>>
> >>> Introduce a struct which will carry descriptor count with array of
> >>> addresses taken from processed descriptors that will be carried via
> >>> skb_shared_info::destructor_arg. This way we can refer to it within
> >>> xsk_destruct_skb().
> >>>
> >>> To summarize, behavior is changed from:
> >>> - produce addr to cq, increase cq's cached_prod
> >>> - increment descriptor count and store it on
> >>>   skb_shared_info::destructor_arg
> >>> - (xmit and rest of path...)
> >>> - use destructor_arg on skb destructor to update global state of cq
> >>>   producer
> >>>
> >>> to the following:
> >>> - increment cq's cached_prod
> >>> - increment descriptor count, save xdp_desc::addr in custom array and
> >>> store this custom array on skb_shared_info::destructor_arg
> >>> - (xmit and rest of path...)
> >>> - use destructor_arg on skb destructor to walk the array of addrs and
> >>> write them to cq and finally update global state of cq producer
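[Editorial note: the new flow above can be modeled in plain userspace C roughly as below. All names (struct xsk_addrs, mock_cq, the helpers) are illustrative assumptions for this sketch, not the actual patch code. The point is that addrs are stashed privately per skb and only written to the completion queue, in one batch, from the destructor:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout of the struct carried via destructor_arg. */
#define XSK_MAX_DESCS 17 /* e.g. MAX_SKB_FRAGS + 1; an assumption */

struct xsk_addrs {
	uint32_t num_descs;
	uint64_t addrs[XSK_MAX_DESCS];
};

/* Minimal userspace model of the completion queue. */
struct mock_cq {
	uint64_t ring[64];
	uint32_t cached_prod; /* slots reserved, not yet visible */
	uint32_t prod;        /* globally visible producer index */
};

/* Tx path: reserve a cq slot but keep the addr private to the skb. */
static void mock_tx_desc(struct mock_cq *cq, struct xsk_addrs *xa,
			 uint64_t addr)
{
	cq->cached_prod++;                 /* reserved, not published */
	xa->addrs[xa->num_descs++] = addr; /* stashed for the destructor */
}

/* skb destructor: walk the stashed addrs, write them to the ring,
 * then publish the producer index once at the end. */
static void mock_destruct(struct mock_cq *cq, struct xsk_addrs *xa)
{
	uint32_t i;

	for (i = 0; i < xa->num_descs; i++)
		cq->ring[(cq->prod + i) & 63] = xa->addrs[i];
	cq->prod += xa->num_descs;
}
```

A consumer polling `prod` in this model never sees an addr before the skb is actually destructed, which is the premature-production bug being fixed.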
> >>>
> >>> Fixes: b7f72a30e9ac ("xsk: introduce wrappers and helpers for supporting multi-buffer in Tx path")
> >>> Reported-by: Eryk Kubanski <e.kubanski@...tner.samsung.com>
> >>> Closes: https://lore.kernel.org/netdev/20250530103456.53564-1-e.kubanski@partner.samsung.com/
> >>> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> >>> ---
> >>> v1:
> >>> https://lore.kernel.org/bpf/20250702101648.1942562-1-maciej.fijalkowski@intel.com/
> >>>
> >>> v1->v2:
> >>> * store addrs in an array carried via destructor_arg instead of having
> >>>   them stored in skb headroom; cleaner and less hacky approach;
> >>
> >> Might look cleaner, but what about the performance given that you're
> >> adding a memory allocation?
> >>
> >> (I realize that's only for the skb mode, still)
> >>
> >> Yeah, we allocate an skb anyway and may even copy the whole frame; just
> >> curious.
> >> I could recommend using skb->cb for that, but its 48 bytes would cover
> >> only 6 addresses =\
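[Editorial note: the 6-address limit above follows directly from the fixed 48-byte size of skb->cb and the u64 width of xdp_desc::addr; a compile-time check of that arithmetic:]

```c
#include <assert.h>
#include <stdint.h>

/* skb->cb is a fixed 48-byte scratch area; each umem address is a
 * u64 (xdp_desc::addr), so at most 48 / 8 = 6 addresses fit. */
#define SKB_CB_SIZE 48

_Static_assert(SKB_CB_SIZE / sizeof(uint64_t) == 6,
	       "skb->cb holds at most 6 umem addresses");
```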
>
> BTW wouldn't num_descs from that new structure be the same as
> shinfo->nr_frags + 1 (or just nr_frags for xsk_build_skb_zerocopy())?
So you're saying we don't need to store it? Agreed. But storing the rest
in cb might still be problematic with the kconfig-configurable
MAX_SKB_FRAGS?
> > Can we pre-allocate an array of xsk_addrs during xsk_bind (the number of
> > xsk_addrs is bound by the tx ring size)? Then we can remove the alloc on
> > tx and replace it with some code to manage that pool of xsk_addrs...
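[Editorial note: a userspace model of the bind-time pool suggested above: one xsk_addrs struct is pre-allocated per Tx ring slot and handed out from a LIFO free list, so the Tx hot path never allocates. All names are illustrative assumptions, not kernel code:]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct xsk_addrs {
	uint32_t num_descs;
	uint64_t addrs[17]; /* e.g. MAX_SKB_FRAGS + 1; an assumption */
};

struct xsk_addrs_pool {
	struct xsk_addrs *slab;  /* backing array, tx_ring_size entries */
	struct xsk_addrs **free; /* LIFO stack of free entries */
	uint32_t nfree;
};

/* Bind time: allocate everything up front, sized by the Tx ring. */
static int pool_init(struct xsk_addrs_pool *p, uint32_t tx_ring_size)
{
	uint32_t i;

	p->slab = calloc(tx_ring_size, sizeof(*p->slab));
	p->free = calloc(tx_ring_size, sizeof(*p->free));
	if (!p->slab || !p->free)
		return -1;
	for (i = 0; i < tx_ring_size; i++)
		p->free[i] = &p->slab[i];
	p->nfree = tx_ring_size;
	return 0;
}

/* Tx path: O(1) pop instead of an allocation. */
static struct xsk_addrs *pool_get(struct xsk_addrs_pool *p)
{
	return p->nfree ? p->free[--p->nfree] : NULL;
}

/* skb destructor: O(1) push back once the addrs were produced. */
static void pool_put(struct xsk_addrs_pool *p, struct xsk_addrs *xa)
{
	xa->num_descs = 0;
	p->free[p->nfree++] = xa;
}
```

In-kernel, the free list would need locking or per-cpu handling matching the Tx path; a dedicated slab cache, as also suggested in this thread, is the simpler alternative to a hand-rolled pool.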
>
> Nice idea BTW.
>
> We could even use system per-cpu Page Pools to allocate these structs*
> :D It wouldn't waste 1 page per one struct as PP is frag-aware and has
> API for allocating only a small frag.
>
> Headroom stuff was also OK with me: we allocate a new skb either way, so
> we could allocate it with a slightly bigger headroom and put that table
> there, being sure that nobody will overwrite it (some drivers insert
> special headers or descriptors in front of the actual skb->data).
>
> [*] Offtop: we could also use system PP to allocate skbs in
> xsk_build_skb() just like it's done in xdp_build_skb_from_zc() +
> xdp_copy_frags_from_zc() -- no way to avoid memcpy(), but the payload
> buffers would be recycled then.
Or maybe kmem_cache_alloc_node with a custom cache is good enough?
Headroom also feels ok if we store the whole xsk_addrs struct in it.