Message-ID: <d68edefb-4930-a9cf-1150-9bd2a2a9a02f@suse.cz>
Date: Fri, 17 Feb 2023 10:30:05 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Alexander Duyck <alexander.duyck@...il.com>,
netdev@...r.kernel.org, davem@...emloft.net
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, jannh@...gle.com
Subject: Re: [net PATCH 1/2] mm: Use fixed constant in page_frag_alloc instead
of size + 1
On 2/15/19 23:44, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
>
> This patch replaces the size + 1 value introduced with the recent fix for 1
> byte allocs with a constant value.
>
> The idea here is to reduce code overhead as the previous logic would have
> to read size into a register, then increment it, and write it back to
> whatever field was being used. By using a constant we can avoid those
> memory reads and arithmetic operations in favor of just encoding the
> maximum value into the operation itself.
>
> Fixes: 2c2ade81741c ("mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs")
> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> ---
> mm/page_alloc.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ebb35e4d0d90..37ed14ad0b59 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4857,11 +4857,11 @@ void *page_frag_alloc(struct page_frag_cache *nc,
> /* Even if we own the page, we do not use atomic_set().
> * This would break get_page_unless_zero() users.
> */
> - page_ref_add(page, size);
> + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
But this value can theoretically be too low when PAGE_SIZE >
PAGE_FRAG_CACHE_MAX_SIZE, e.g. on architectures with a 64kB page size,
while PAGE_FRAG_CACHE_MAX_SIZE is 32kB?
Maybe impossible to exploit in practice thanks to the minimum alignment, but
still IMHO we should be using the larger of PAGE_FRAG_CACHE_MAX_SIZE and
PAGE_SIZE, which would still be a build-time constant and thus not defeat
the optimization.
>
> /* reset page count bias and offset to start of new frag */
> nc->pfmemalloc = page_is_pfmemalloc(page);
> - nc->pagecnt_bias = size + 1;
> + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> nc->offset = size;
> }
>
> @@ -4877,10 +4877,10 @@ void *page_frag_alloc(struct page_frag_cache *nc,
> size = nc->size;
> #endif
> /* OK, page count is 0, we can safely set it */
> - set_page_count(page, size + 1);
> + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
>
> /* reset page count bias and offset to start of new frag */
> - nc->pagecnt_bias = size + 1;
> + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> offset = size - fragsz;
> }
>
>