Message-ID: <7b2a7b7b-0ebc-1f03-5f1b-ac598fc950dc@suse.cz>
Date: Mon, 20 Mar 2023 16:14:50 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Alexander Duyck <alexander.duyck@...il.com>,
netdev@...r.kernel.org, davem@...emloft.net
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, jannh@...gle.com
Subject: Re: [net PATCH 1/2] mm: Use fixed constant in page_frag_alloc instead
of size + 1
On 2/17/23 10:30, Vlastimil Babka wrote:
> On 2/15/19 23:44, Alexander Duyck wrote:
>> From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
>>
>> This patch replaces the size + 1 value introduced with the recent fix for 1
>> byte allocs with a constant value.
>>
>> The idea here is to reduce code overhead as the previous logic would have
>> to read size into a register, then increment it, and write it back to
>> whatever field was being used. By using a constant we can avoid those
>> memory reads and arithmetic operations in favor of just encoding the
>> maximum value into the operation itself.
>>
>> Fixes: 2c2ade81741c ("mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs")
>> Signed-off-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
>> ---
>> mm/page_alloc.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index ebb35e4d0d90..37ed14ad0b59 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4857,11 +4857,11 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>> /* Even if we own the page, we do not use atomic_set().
>> * This would break get_page_unless_zero() users.
>> */
>> - page_ref_add(page, size);
>> + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
>
> But this value can be theoretically too low when PAGE_SIZE >
> PAGE_FRAG_CACHE_MAX_SIZE? Such as on architectures with 64kB page size,
> while PAGE_FRAG_CACHE_MAX_SIZE is 32kB?
Never mind, in that case PAGE_FRAG_CACHE_MAX_SIZE would itself be 64kB, because
#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
rounds 32768 up to a PAGE_SIZE boundary. So all is fine, sorry for the noise.