Message-ID: <41f30998-e498-4c33-a4b4-99b9f7339fd7@redhat.com>
Date: Thu, 23 Oct 2025 09:24:27 +0200
From: David Hildenbrand <david@...hat.com>
To: Zhang Yi <yi.zhang@...weicloud.com>,
Karol Wachowski <karol.wachowski@...ux.intel.com>
Cc: tytso@....edu, adilger.kernel@...ger.ca, linux-mm@...ck.org,
linux-ext4@...r.kernel.org
Subject: Re: Possible regression in pin_user_pages_fast() behavior after
commit 7ac67301e82f ("ext4: enable large folio for regular file")
>> __split_huge_pmd_locked() contains that handling.
>>
>> We have to do that because we did not preallocate a page table we can just throw in.
>>
>> We could do that on this path instead: remap the PMD range through a PTE table. We'd have to preallocate a page table.
>>
>> That would avoid the do_pte_missing() below for such faults.
>>
>> That could be done later on top of this fix.
>
> Yeah, thank you for the explanation! I have another question, just curious.
> Why do we have to fall back to installing the PTE table instead of creating
> a new anonymous large folio (2M) and setting a new leaf huge PMD?
Primarily because it would waste more memory for various use cases, by a
factor of up to 512.
>
>>
>>> | handle_pte_fault() //
>>> | do_pte_missing()
>>> | do_fault()
>>> | do_read_fault() //FAULT_FLAG_WRITE is not set
>>> | finish_fault()
>>> | do_set_pmd() //install leaf pmd again, I think this is wrong!!!
>>> | do_wp_page() //copy private anon pages
>>> <- goto retry
>>>
>>> Due to an incorrectly large PMD set in do_read_fault(), follow_pmd_mask()
>>> always returns -EMLINK, causing an infinite loop. Under normal
>>> circumstances, I suppose it should fall back to do_wp_page(), which installs
>>> the anonymous page into the PTE. This is also why mappings smaller than 2MB
>>> do not trigger this issue. In addition, if you add FOLL_WRITE when calling
>>> pin_user_pages_fast(), it will also not trigger this issue because do_fault()
>>> will call do_cow_fault() to create anonymous pages.
>>>
>>> The above is my analysis, and I tried the following fix, which can solve
>>> the issue (I haven't done a full test yet). But I am not expert in the MM
>>> field, I might have missed something, and this needs to be reviewed by MM
>>> experts.
>>>
>>> Best regards,
>>> Yi.
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 74b45e258323..64846a030a5b 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -5342,6 +5342,10 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
>>> if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
>>> return ret;
>>>
>>> + if (vmf->flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE) &&
>>> + !pmd_write(*vmf->pmd))
>>> + return ret;
>>
>> Likely we would want to make this depend on is_cow_mapping().
>>
>> /*
>> * We're about to trigger CoW, so never map it through a PMD.
>> */
>> 	if (is_cow_mapping(vma->vm_flags) &&
>> 	    (vmf->flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)))
>> 		return ret;
>>
>
> Sure, adding a cow check would be better. I will send out an official patch.
Thanks!
--
Cheers
David / dhildenb