Message-ID: <9bfd8972-4111-4afb-b8a2-f514ae67f67f@redhat.com>
Date: Fri, 24 Oct 2025 09:47:28 +0200
From: David Hildenbrand <david@...hat.com>
To: Zhang Yi <yi.zhang@...weicloud.com>, linux-mm@...ck.org
Cc: linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
 yi.zhang@...wei.com, karol.wachowski@...ux.intel.com,
 wangkefeng.wang@...wei.com, yangerkun@...wei.com
Subject: Re: [PATCH] mm: do not install PMD mappings when handling a COW fault

On 24.10.25 03:54, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@...wei.com>
> 
> During the ping of user pages in FOLL_LONGTERM on a COW VMA and a

s/ping/pin/ or better "When pinning a page with FOLL_LONGTERM in a CoW 
VMA ..."

> PMD-aligned (2MB on x86) large folio, follow_page_mask() failed to
> obtain a valid anonymous page, resulting in an infinite loop issue.
> The specific triggering process is as follows:
> 
> 1. The user calls mmap() with a 2MB size in MAP_PRIVATE mode on a file
>     that has a 2MB large folio installed in the page cache.
> 
>     addr = mmap(NULL, 2*1024*1024, PROT_READ, MAP_PRIVATE, file_fd, 0);
> 
> 2. The kernel driver passes this mapped address to pin_user_pages_fast()
>     in FOLL_LONGTERM mode.
> 
>     pin_user_pages_fast(addr, 512, FOLL_LONGTERM, pages);
> 
>    ->  pin_user_pages_fast()
>    |   gup_fast_fallback()
>    |    __gup_longterm_locked()
>    |     __get_user_pages_locked()
>    |      __get_user_pages()
>    |       follow_page_mask()
>    |        follow_p4d_mask()
>    |         follow_pud_mask()
>    |          follow_pmd_mask() //pmd_leaf(pmdval) is true because the
>    |                            //huge PMD is installed. This is normal
>    |                            //in the first round, but it shouldn't
>    |                            //happen in the second round.
>    |           follow_huge_pmd() //require an anonymous page
>    |            return -EMLINK;
>    |   faultin_page()
>    |    handle_mm_fault()
>    |     wp_huge_pmd() //remove PMD and fall back to PTE
>    |     handle_pte_fault()
>    |      do_pte_missing()
>    |       do_fault()
>    |        do_read_fault() //FAULT_FLAG_WRITE is not set
>    |         finish_fault()
>    |          do_set_pmd() //install a huge PMD again, this is wrong!!!
>    |      do_wp_page() //create private anonymous pages
>    <-    goto retry;
> 
> Because do_read_fault() incorrectly installs a huge PMD again,
> follow_pmd_mask() always returns -EMLINK, causing an infinite loop.
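
As a side note for anyone trying to reproduce this: the userspace half of 
step 1 is just a private, read-only file mapping (sketch below). Whether 
the file actually gets a 2MB large folio depends on the filesystem and 
kernel configuration, and the driver entry point that does the in-kernel 
pin is only assumed here, not shown:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Userspace half of step 1: a 2 MiB, read-only, MAP_PRIVATE mapping of a
 * file. The FOLL_LONGTERM pin in step 2 happens in-kernel; a real
 * reproducer would hand the returned address to a driver (e.g. via a
 * hypothetical ioctl) that calls pin_user_pages_fast() on it. */
static void *map_private_2m(const char *path)
{
	const size_t len = 2UL * 1024 * 1024;
	int fd = open(path, O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return MAP_FAILED;
	/* Make sure the file is large enough to back the whole mapping. */
	if (ftruncate(fd, (off_t)len) != 0) {
		close(fd);
		return MAP_FAILED;
	}
	/* PROT_READ + MAP_PRIVATE gives the CoW VMA from the report. */
	void *addr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);

	close(fd);
	return addr;
}
```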
> 
> David pointed out that, in the future, wp_huge_pmd() could preallocate a
> page table and remap the PMD range through a PTE table. For now, we can
> avoid this issue by not installing PMD mappings when handling a CoW or
> unshare fault in do_set_pmd().
> 
> Fixes: a7f226604170 ("mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page")
> Reported-by: Karol Wachowski <karol.wachowski@...ux.intel.com>
> Closes: https://lore.kernel.org/linux-ext4/844e5cd4-462e-4b88-b3b5-816465a3b7e3@linux.intel.com/
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Zhang Yi <yi.zhang@...wei.com>
> ---
>   mm/memory.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 0ba4f6b71847..0748a31367df 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5212,6 +5212,11 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
>   	if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
>   		return ret;
>   
> +	/* We're about to trigger CoW, so never map it through a PMD. */
> +	if (is_cow_mapping(vma->vm_flags) &&
> +	    (vmf->flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)))
> +		return ret;
> +
>   	if (folio_order(folio) != HPAGE_PMD_ORDER)
>   		return ret;
>   	page = &folio->page;
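
For clarity, the semantics of the new check can be modeled in plain C. 
The flag values below are illustrative stand-ins, not the kernel's actual 
bit definitions, and the two helpers only mirror the logic of 
is_cow_mapping() and the added do_set_pmd() test:

```c
#include <stdbool.h>

/* Illustrative flag values; the kernel defines these in
 * include/linux/mm.h and include/linux/mm_types.h. */
#define VM_SHARED		0x00000008UL
#define VM_MAYWRITE		0x00000020UL
#define FAULT_FLAG_WRITE	0x001U
#define FAULT_FLAG_UNSHARE	0x400U

/* Mirrors the kernel's is_cow_mapping(): a MAP_PRIVATE mapping that may
 * become writable, i.e. VM_MAYWRITE set and VM_SHARED clear. */
static bool is_cow_mapping(unsigned long vm_flags)
{
	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
}

/* The check added to do_set_pmd(): a write or unshare fault on a CoW
 * mapping must not install a huge PMD, because the CoW/unshare handling
 * will immediately operate at PTE granularity anyway. */
static bool skip_pmd_mapping(unsigned long vm_flags, unsigned int fault_flags)
{
	return is_cow_mapping(vm_flags) &&
	       (fault_flags & (FAULT_FLAG_WRITE | FAULT_FLAG_UNSHARE));
}
```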

Acked-by: David Hildenbrand <david@...hat.com>

Thanks!

-- 
Cheers

David / dhildenb

