Message-ID: <36d42e20-b33f-5442-0db7-e9f5ef9d0941@huawei.com>
Date: Thu, 2 Mar 2023 10:30:13 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>
CC: Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Larysa Zaremba <larysa.zaremba@...el.com>,
Toke Høiland-Jørgensen <toke@...hat.com>,
Song Liu <song@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Jakub Kicinski <kuba@...nel.org>, <bpf@...r.kernel.org>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next v1 1/2] xdp: recycle Page Pool backed skbs built
from XDP frames
On 2023/3/2 0:03, Alexander Lobakin wrote:
> __xdp_build_skb_from_frame() state(d):
>
> /* Until page_pool get SKB return path, release DMA here */
>
> Page Pool got skb pages recycling in April 2021, but missed this
> function.
>
> xdp_release_frame() is relevant only for Page Pool backed frames and it
> detaches the page from the corresponding Pool in order to make it
> freeable via page_frag_free(). It can instead just mark the output skb
> as eligible for recycling if the frame is backed by a PP. No change for
> other memory model types (the same condition check as before).
> cpumap redirect and veth on Page Pool drivers now become zero-alloc (or
> almost).
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> ---
> net/core/xdp.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/core/xdp.c b/net/core/xdp.c
> index 8c92fc553317..a2237cfca8e9 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -658,8 +658,8 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf,
> * - RX ring dev queue index (skb_record_rx_queue)
> */
>
> - /* Until page_pool get SKB return path, release DMA here */
> - xdp_release_frame(xdpf);
> + if (xdpf->mem.type == MEM_TYPE_PAGE_POOL)
> + skb_mark_for_recycle(skb);
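For context, skb_mark_for_recycle() is (if I'm reading the current
include/linux/skbuff.h correctly) just a one-line helper that sets the
per-skb marker:

static inline void skb_mark_for_recycle(struct sk_buff *skb)
{
	skb->pp_recycle = 1;
}

so after this patch the recycling decision is driven by that bit plus
the per-page pp_magic signature.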
We rely on both skb->pp_recycle and page->pp_magic to decide whether
a page really came from a page pool, so there have been a few
corner-case problems when a page is shared between different skbs at
the driver level, or when calling skb_clone() or skb_try_coalesce().
see:
https://github.com/torvalds/linux/commit/2cc3aeb5ecccec0d266813172fcd82b4b5fa5803
https://lore.kernel.org/netdev/MW5PR15MB51214C0513DB08A3607FBC1FBDE19@MW5PR15MB5121.namprd15.prod.outlook.com/t/
https://lore.kernel.org/netdev/167475990764.1934330.11960904198087757911.stgit@localhost.localdomain/
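To recap how the two markers interact at free time, the skb side does
roughly the following (simplified from net/core/skbuff.c and
net/core/page_pool.c, exact details may vary between versions):

static bool skb_pp_recycle(struct sk_buff *skb, void *data)
{
	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
		return false;

	/* page_pool_return_skb_page() then checks the per-page marker,
	 * (page->pp_magic & ~0x3UL) == PP_SIGNATURE, and only recycles
	 * the page into its pool when that signature matches.
	 */
	return page_pool_return_skb_page(virt_to_page(data));
}

So a page is only recycled when the per-skb and the per-page markers
agree, which is exactly where the corner cases above come from.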
Note that 'struct xdp_frame' also uses 'struct skb_shared_info',
which is sharable, see xdp_get_shared_info_from_frame(), so a similar
problem could show up on the XDP side. For now xdpf_clone() does not
seem to handle frag pages yet, so it should be fine for the time
being.
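For reference, the shared_info lives at the tail of the frame memory,
so it is the same area that an skb built from the frame will later
use (from include/net/xdp.h, modulo trimming):

static inline struct skb_shared_info *
xdp_get_shared_info_from_frame(struct xdp_frame *frame)
{
	void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);

	return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
}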
IMHO we should find a way to use only a per-page marker, instead of
both per-skb and per-page markers, to avoid the above problem for XDP
if XDP gains processing similar to the skb paths, as suggested by
Eric:
https://lore.kernel.org/netdev/CANn89iKgZU4Q+THXupzZi4hETuKuCOvOB=iHpp5JzQTNv_Fg_A@mail.gmail.com/
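Something along these lines, purely as a sketch (the helper name and
placement are made up), where the decision is taken from the page
alone instead of skb->pp_recycle:

/* Hypothetical per-page-only check: the page tells us by itself
 * whether it is still owned by a page_pool, so clones, coalesced
 * skbs or pages shared between skbs cannot get two markers out of
 * sync.
 */
static inline bool page_is_page_pool_page(struct page *page)
{
	page = compound_head(page);

	/* pp_magic is OR'ed with PP_SIGNATURE at allocation time; mask
	 * out the low bits (compound head / pfmemalloc) before comparing.
	 */
	return (page->pp_magic & ~0x3UL) == PP_SIGNATURE;
}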
>
> /* Allow SKB to reuse area used by xdp_frame */
> xdp_scrub_frame(xdpf);
>