Message-ID: <977d55210bfcb4f454b9d740fcbe6c451079a086.camel@gmail.com>
Date: Tue, 30 May 2023 08:07:28 -0700
From: Alexander H Duyck <alexander.duyck@...il.com>
To: Yunsheng Lin <linyunsheng@...wei.com>, davem@...emloft.net,
kuba@...nel.org, pabeni@...hat.com
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org, Lorenzo Bianconi
<lorenzo@...nel.org>, Jesper Dangaard Brouer <hawk@...nel.org>, Ilias
Apalodimas <ilias.apalodimas@...aro.org>, Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH net-next v2 2/3] page_pool: support non-frag page for
page_pool_alloc_frag()
On Mon, 2023-05-29 at 17:28 +0800, Yunsheng Lin wrote:
> There is performance penalty with using page frag support when
> user requests a larger frag size and a page only supports one
> frag user, see [1].
>
> It seems like user may request different frag size depending
> on the mtu and packet size, provide an option to allocate
> non-frag page when a whole page is not able to hold two frags,
> so that user has a unified interface for the memory allocation
> with least memory utilization and performance penalty.
>
> 1. https://lore.kernel.org/netdev/ZEU+vospFdm08IeE@localhost.localdomain/
>
> Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
> CC: Lorenzo Bianconi <lorenzo@...nel.org>
> CC: Alexander Duyck <alexander.duyck@...il.com>
> ---
> net/core/page_pool.c | 47 +++++++++++++++++++++++++++-----------------
> 1 file changed, 29 insertions(+), 18 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 0868aa8f6323..e84ec6eabefd 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -699,14 +699,27 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
> unsigned int max_size = PAGE_SIZE << pool->p.order;
> struct page *page = pool->frag_page;
>
> - if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
> - size > max_size))
> + if (unlikely(size > max_size))
> return NULL;
>
> + if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) {
> + *offset = 0;
> + return page_pool_alloc_pages(pool, gfp);
> + }
> +
This is a recipe for pain. Rather than doing this, I would say we should
stick with our existing behavior and not allow page pool fragments to
be used when the DMA address is consuming the region. Otherwise we are
going to make things very confusing.
If we have to have both versions, I would much rather just have some
inline calls in the header wrapped in one #ifdef for
PAGE_POOL_DMA_USE_PP_FRAG_COUNT that are basically wrappers for
page_pool pages treated as pp_frag.
> size = ALIGN(size, dma_get_cache_alignment());
> - *offset = pool->frag_offset;
>
If we are going to be allocating mono-frag pages they should be
allocated here based on the size check. That way we aren't disrupting
the performance for the smaller fragments and the code below could
function undisturbed.
> - if (page && *offset + size > max_size) {
> + if (page) {
> + *offset = pool->frag_offset;
> +
> + if (*offset + size <= max_size) {
> + pool->frag_users++;
> + pool->frag_offset = *offset + size;
> + alloc_stat_inc(pool, fast);
> + return page;
> + }
> +
> + pool->frag_page = NULL;
> page = page_pool_drain_frag(pool, page);
> if (page) {
> alloc_stat_inc(pool, fast);
> @@ -714,26 +727,24 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
> }
> }
>
> - if (!page) {
> - page = page_pool_alloc_pages(pool, gfp);
> - if (unlikely(!page)) {
> - pool->frag_page = NULL;
> - return NULL;
> - }
> -
> - pool->frag_page = page;
> + page = page_pool_alloc_pages(pool, gfp);
> + if (unlikely(!page))
> + return NULL;
>
> frag_reset:
> - pool->frag_users = 1;
> + /* return page as non-frag page if a page is not able to
> + * hold two frags for the current requested size.
> + */
This statement isn't exactly true since you make all page pool pages
into fragmented pages.
> + if (unlikely(size << 1 > max_size)) {
This should happen much sooner so you aren't mixing these allocations
with the smaller ones and forcing the fragmented page to be evicted.
> *offset = 0;
> - pool->frag_offset = size;
> - page_pool_fragment_page(page, BIAS_MAX);
> return page;
> }
>
> - pool->frag_users++;
> - pool->frag_offset = *offset + size;
> - alloc_stat_inc(pool, fast);
> + pool->frag_page = page;
> + pool->frag_users = 1;
> + *offset = 0;
> + pool->frag_offset = size;
> + page_pool_fragment_page(page, BIAS_MAX);
> return page;
> }
> EXPORT_SYMBOL(page_pool_alloc_frag);