Message-ID: <5e65d43c-787c-4e42-9aa6-6f84018c6f33@redhat.com>
Date: Mon, 14 Jul 2025 16:26:42 +0200
From: David Hildenbrand <david@...hat.com>
To: Luiz Capitulino <luizcap@...hat.com>, willy@...radead.org,
 akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, shivankg@....com,
 sj@...nel.org, harry.yoo@...cle.com
Subject: Re: [PATCH v3 1/4] mm/memory: introduce is_huge_zero_pfn() and use it
 in vm_normal_page_pmd()

On 14.07.25 15:16, Luiz Capitulino wrote:
> From: David Hildenbrand <david@...hat.com>
> 
> Let's avoid working with the PMD when not required. If
> vm_normal_page_pmd() were called on something that is not a present
> pmd, that would already be a bug (the pfn could be garbage).
> 
> While at it, let's support passing in any pfn covered by the huge zero
> folio by masking off PFN bits -- which should be rather cheap.
> 
> Signed-off-by: David Hildenbrand <david@...hat.com>
> Reviewed-by: Oscar Salvador <osalvador@...e.de>
> Signed-off-by: Luiz Capitulino <luizcap@...hat.com>
> ---
>   include/linux/huge_mm.h | 12 +++++++++++-
>   mm/memory.c             |  2 +-
>   2 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 2f190c90192d..59e93fba15f4 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -486,9 +486,14 @@ static inline bool is_huge_zero_folio(const struct folio *folio)
>   	return READ_ONCE(huge_zero_folio) == folio;
>   }
>   
> +static inline bool is_huge_zero_pfn(unsigned long pfn)
> +{
> +	return READ_ONCE(huge_zero_pfn) == (pfn & ~(HPAGE_PMD_NR - 1));
> +}
> +
>   static inline bool is_huge_zero_pmd(pmd_t pmd)
>   {
> -	return pmd_present(pmd) && READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd);
> +	return pmd_present(pmd) && is_huge_zero_pfn(pmd_pfn(pmd));
>   }
>   
>   struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
> @@ -636,6 +641,11 @@ static inline bool is_huge_zero_folio(const struct folio *folio)
>   	return false;
>   }
>   
> +static inline bool is_huge_zero_pfn(unsigned long pfn)
> +{
> +	return false;
> +}
> +
>   static inline bool is_huge_zero_pmd(pmd_t pmd)
>   {
>   	return false;
> diff --git a/mm/memory.c b/mm/memory.c
> index b0cda5aab398..3a765553bacb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -687,7 +687,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>   
>   	if (pmd_devmap(pmd))
>   		return NULL;

This likely doesn't apply as-is on top of mm-unstable / mm-new (and 
likely not on mm-stable either).

Should be trivial to fixup, though.

-- 
Cheers,

David / dhildenb

