Message-ID: <9185c50a-1427-45fc-941d-e9796cea4831@redhat.com>
Date: Mon, 30 Jun 2025 11:19:13 +0200
From: David Hildenbrand <david@...hat.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 Andrew Morton <akpm@...ux-foundation.org>,
 "Liam R. Howlett" <Liam.Howlett@...cle.com>, Vlastimil Babka
 <vbabka@...e.cz>, Jann Horn <jannh@...gle.com>,
 Mike Rapoport <rppt@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>,
 Michal Hocko <mhocko@...e.com>, Zi Yan <ziy@...dia.com>,
 Matthew Brost <matthew.brost@...el.com>,
 Joshua Hahn <joshua.hahnjy@...il.com>, Rakie Kim <rakie.kim@...com>,
 Byungchul Park <byungchul@...com>, Gregory Price <gourry@...rry.net>,
 Ying Huang <ying.huang@...ux.alibaba.com>,
 Alistair Popple <apopple@...dia.com>, Pedro Falcato <pfalcato@...e.de>,
 Rik van Riel <riel@...riel.com>, Harry Yoo <harry.yoo@...cle.com>
Subject: Re: [PATCH v1 3/4] mm: split folio_pte_batch() into folio_pte_batch()
 and folio_pte_batch_ext()

On 27.06.25 20:48, Lorenzo Stoakes wrote:
> On Fri, Jun 27, 2025 at 01:55:09PM +0200, David Hildenbrand wrote:
>> Many users (including upcoming ones) don't really need the flags etc,
>> and can live with a function call.
>>
>> So let's provide a basic, non-inlined folio_pte_batch().
> 
> Hm, but why non-inlined, when it invokes an inlined function? Seems odd no?

We want to always generate a function that performs as few runtime checks 
as possible. Essentially, we want to optimize out the "flags" handling as 
much as possible.

In the case of folio_pte_batch(), where we don't use any flags, any flag 
checks will be optimized out by the compiler.

So we get a single, specialized, non-inlined function.
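
Roughly, the idea is a thin out-of-line wrapper around the inlined _ext() 
variant. Just a sketch of the pattern here, not necessarily the exact code 
in the mm/util.c hunk of this patch:

unsigned int folio_pte_batch(struct folio *folio, pte_t *ptep, pte_t pte,
		unsigned int max_nr)
{
	/*
	 * "flags" is the constant 0 and the out-pointers are constant NULL,
	 * so the compiler drops all related checks while inlining
	 * folio_pte_batch_ext(), leaving a single specialized, non-inlined
	 * function.
	 */
	return folio_pte_batch_ext(folio, ptep, pte, max_nr, 0,
				   NULL, NULL, NULL);
}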

> 
>>
>> In zap_present_ptes(), where we care about performance, the compiler
>> already seems to generate a call to a common inlined folio_pte_batch()
>> variant, shared with fork() code. So calling the new non-inlined variant
>> should not make a difference.
>>
>> While at it, drop the "addr" parameter that is unused.
>>
>> Signed-off-by: David Hildenbrand <david@...hat.com>
> 
> Other than the query above + nit on name below, this is really nice!
> 
>> ---
>>   mm/internal.h  | 11 ++++++++---
>>   mm/madvise.c   |  4 ++--
>>   mm/memory.c    |  6 ++----
>>   mm/mempolicy.c |  3 +--
>>   mm/mlock.c     |  3 +--
>>   mm/mremap.c    |  3 +--
>>   mm/rmap.c      |  3 +--
>>   mm/util.c      | 29 +++++++++++++++++++++++++++++
>>   8 files changed, 45 insertions(+), 17 deletions(-)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index ca6590c6d9eab..6000b683f68ee 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -218,9 +218,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>>   }
>>
>>   /**
>> - * folio_pte_batch - detect a PTE batch for a large folio
>> + * folio_pte_batch_ext - detect a PTE batch for a large folio
>>    * @folio: The large folio to detect a PTE batch for.
>> - * @addr: The user virtual address the first page is mapped at.
>>    * @ptep: Page table pointer for the first entry.
>>    * @pte: Page table entry for the first page.
>>    * @max_nr: The maximum number of table entries to consider.
>> @@ -243,9 +242,12 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
>>    * must be limited by the caller so scanning cannot exceed a single VMA and
>>    * a single page table.
>>    *
>> + * This function will be inlined to optimize based on the input parameters;
>> + * consider using folio_pte_batch() instead if applicable.
>> + *
>>    * Return: the number of table entries in the batch.
>>    */
>> -static inline unsigned int folio_pte_batch(struct folio *folio, unsigned long addr,
>> +static inline unsigned int folio_pte_batch_ext(struct folio *folio,
>>   		pte_t *ptep, pte_t pte, unsigned int max_nr, fpb_t flags,
>>   		bool *any_writable, bool *any_young, bool *any_dirty)
> 
> Sorry this is really really annoying feedback :P but _ext() makes me think of
> page_ext and ugh :))
> 
> Wonder if __folio_pte_batch() is better?
> 
> This is obviously, not a big deal (TM)

Obviously, I had that as part of the development, and decided against it 
at some point. :)

Yeah, _ext() is not common in MM yet, in contrast to other subsystems. 
The only user is indeed page_ext. On arm we seem to have set_pte_ext(). 
But it's really "page_ext" that's the problematic part, not "_ext" :P

No strong opinion, but I tend to dislike "__" here, because it often 
means "internal helper you're not supposed to use", which isn't really 
the case here.

E.g.,

alloc_frozen_pages() -> alloc_frozen_pages_noprof() -> 
__alloc_frozen_pages_noprof()

-- 
Cheers,

David / dhildenb

