Message-ID: <c85a4ecc-80bb-d78f-d72a-0f820fb02eb9@redhat.com>
Date: Thu, 23 Sep 2021 14:08:03 +0200
From: Jesper Dangaard Brouer <jbrouer@...hat.com>
To: Yunsheng Lin <linyunsheng@...wei.com>, davem@...emloft.net,
kuba@...nel.org
Cc: brouer@...hat.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linuxarm@...neuler.org,
hawk@...nel.org, ilias.apalodimas@...aro.org,
jonathan.lemon@...il.com, alobakin@...me, willemb@...gle.com,
cong.wang@...edance.com, pabeni@...hat.com, haokexin@...il.com,
nogikh@...gle.com, elver@...gle.com, memxor@...il.com,
edumazet@...gle.com, alexander.duyck@...il.com, dsahern@...il.com
Subject: Re: [PATCH net-next 2/7] page_pool: support non-split page with
PP_FLAG_PAGE_FRAG
On 22/09/2021 11.41, Yunsheng Lin wrote:
> Currently when PP_FLAG_PAGE_FRAG is set, the caller is not
> expected to call page_pool_alloc_pages() directly because of
> the PP_FLAG_PAGE_FRAG checking in __page_pool_put_page().
>
> The patch removes the above checking to enable non-split page
> support when PP_FLAG_PAGE_FRAG is set.
>
> Reviewed-by: Alexander Duyck <alexanderduyck@...com>
> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
> ---
> net/core/page_pool.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a65bd7972e37..f7e71dcb6a2e 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -315,11 +315,14 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
>
> /* Fast-path: Get a page from cache */
> page = __page_pool_get_cached(pool);
> - if (page)
> - return page;
>
> /* Slow-path: cache empty, do real allocation */
> - page = __page_pool_alloc_pages_slow(pool, gfp);
> + if (!page)
> + page = __page_pool_alloc_pages_slow(pool, gfp);
> +
> + if (likely(page))
> + page_pool_set_frag_count(page, 1);
> +
I really don't like that you add one atomic_long_set operation per page
alloc call.
This is a fast-path for XDP use-cases, which you are ignoring as your
drivers don't implement XDP.
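If the frag count really has to be initialised at alloc time, an
untested sketch of what I would prefer (reusing the names from your
patch) is to gate it on the flag, so pools without PP_FLAG_PAGE_FRAG
keep the old fast-path; this needs the matching gate on the put side,
see my comment further down:

	/* Untested sketch: only pay the atomic_long_set() for pools
	 * that opted into page fragments.
	 */
	if (likely(page) && (pool->p.flags & PP_FLAG_PAGE_FRAG))
		page_pool_set_frag_count(page, 1);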
As I cannot ask you to run XDP benchmarks, fortunately I have some
page_pool-specific micro-benchmarks you can run instead.
I will ask you to provide before-and-after results from running these
benchmarks [1] and [2].
[1]
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
[2]
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_cross_cpu.c
How to use these modules is documented here [3]:
[3]
https://prototype-kernel.readthedocs.io/en/latest/prototype-kernel/build-process.html
> return page;
> }
> EXPORT_SYMBOL(page_pool_alloc_pages);
> @@ -428,8 +431,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
> unsigned int dma_sync_size, bool allow_direct)
> {
> /* It is not the last user for the page frag case */
> - if (pool->p.flags & PP_FLAG_PAGE_FRAG &&
> - page_pool_atomic_sub_frag_count_return(page, 1))
> + if (page_pool_atomic_sub_frag_count_return(page, 1))
> return NULL;
This adds an atomic_long_read, even when PP_FLAG_PAGE_FRAG is not set.
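A rough, untested sketch of what I mean, wrapping the test in a helper
so that pools without PP_FLAG_PAGE_FRAG never touch
page->pp_frag_count (the helper name is made up for illustration):

	static bool page_pool_is_last_frag(struct page_pool *pool,
					   struct page *page)
	{
		/* Non-frag pools have a single user per page, so
		 * skip the atomic read entirely.
		 */
		if (!(pool->p.flags & PP_FLAG_PAGE_FRAG))
			return true;

		/* Frag case: this put is the last user when the
		 * frag count drops to zero.
		 */
		return !page_pool_atomic_sub_frag_count_return(page, 1);
	}

and then in __page_pool_put_page():

	if (!page_pool_is_last_frag(pool, page))
		return NULL;

That way only pools that opted into PP_FLAG_PAGE_FRAG pay the extra
atomic op per packet.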
>
> /* This allocator is optimized for the XDP mode that uses
>