Message-ID: <3d87da24-7887-4912-abcf-14062e8514de@linux.alibaba.com>
Date: Wed, 8 May 2024 17:31:57 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Ryan Roberts <ryan.roberts@....com>, akpm@...ux-foundation.org,
hughd@...gle.com
Cc: willy@...radead.org, david@...hat.com, ioworker0@...il.com,
wangkefeng.wang@...wei.com, ying.huang@...el.com, 21cnbao@...il.com,
shy828301@...il.com, ziy@...dia.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/8] mm: memory: extend finish_fault() to support large
folio

On 2024/5/8 16:53, Ryan Roberts wrote:
> On 08/05/2024 04:44, Baolin Wang wrote:
>>
>>
>> On 2024/5/7 18:37, Ryan Roberts wrote:
>>> On 06/05/2024 09:46, Baolin Wang wrote:
>>>> Add large folio mapping establishment support for finish_fault() as a
>>>> preparation to support multi-size THP allocation of anonymous shmem
>>>> pages in the following patches.
>>>>
>>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>>> ---
>>>> mm/memory.c | 43 +++++++++++++++++++++++++++++++++----------
>>>> 1 file changed, 33 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index eea6e4984eae..936377220b77 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -4747,9 +4747,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>>>>  {
>>>>  	struct vm_area_struct *vma = vmf->vma;
>>>>  	struct page *page;
>>>> +	struct folio *folio;
>>>>  	vm_fault_t ret;
>>>>  	bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
>>>>  		      !(vma->vm_flags & VM_SHARED);
>>>> +	int type, nr_pages, i;
>>>> +	unsigned long addr = vmf->address;
>>>>  
>>>>  	/* Did we COW the page? */
>>>>  	if (is_cow)
>>>> @@ -4780,24 +4783,44 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>>>>  			return VM_FAULT_OOM;
>>>>  	}
>>>>  
>>>> +	folio = page_folio(page);
>>>> +	nr_pages = folio_nr_pages(folio);
>>>> +
>>>> +	if (unlikely(userfaultfd_armed(vma))) {
>>>> +		nr_pages = 1;
>>>> +	} else if (nr_pages > 1) {
>>>> +		unsigned long start = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
>>>> +		unsigned long end = start + nr_pages * PAGE_SIZE;
>>>> +
>>>> +		/* In case the folio size in the page cache exceeds the VMA limits. */
>>>> +		addr = max(start, vma->vm_start);
>>>> +		nr_pages = (min(end, vma->vm_end) - addr) >> PAGE_SHIFT;
>>>> +
>>>> +		page = folio_page(folio, (addr - start) >> PAGE_SHIFT);
>>>
>>> I still don't really follow the logic in this else if block. Isn't it possible
>>> that finish_fault() gets called with a page from a folio that isn't aligned with
>>> vmf->address?
>>>
>>> For example, let's say we have a file whose size is 64K and which is cached in a
>>> single large folio in the page cache. But the file is mapped into a process at
>>> VA 16K to 80K. Let's say we fault on the first page (VA=16K). You will calculate
>>
>> For shmem, this doesn't happen because the VA is aligned with the hugepage size
>> in the shmem_get_unmapped_area() function. See patch 7.
>
> Certainly agree that shmem can always make sure that it packs a vma in a way
> such that its folios are naturally aligned in VA when faulting in memory. If you
> mremap it, that alignment will be lost; I don't think that would be a problem
> for a single process; mremap will take care of moving the ptes correctly and
> this path is not involved.

When mremapping it, mremap() will also call shmem_get_unmapped_area() to
align the VA; but for an mremap() with the MAP_FIXED flag, as David pointed
out, yes, this patch may not work perfectly.
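
For example, a quick userspace sketch (illustrative only, error handling
omitted) shows how MREMAP_FIXED moves an anonymous shmem region to a
page-aligned but deliberately huge-page-unaligned VA, bypassing
shmem_get_unmapped_area() entirely:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t sz = 2UL * 1024 * 1024;	/* one PMD-sized anon shmem region */

	/* Let the kernel pick the VA, so shmem_get_unmapped_area() can
	 * return a naturally aligned address. */
	void *src = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	/* Reserve a window and pick a destination that is page-aligned
	 * but deliberately NOT 2M-aligned. */
	void *win = mmap(NULL, 3 * sz, PROT_NONE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	uintptr_t base = ((uintptr_t)win + sz) & ~(sz - 1);
	void *dst = (void *)(base + 4096);

	/* MREMAP_FIXED takes the caller's address as-is. */
	dst = mremap(src, sz, sz, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	printf("moved to %p (2M-aligned: %d)\n", dst,
	       ((uintptr_t)dst % sz) == 0);
	return 0;
}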
>
> But what about the case when a process mmaps a shmem region, then forks, then
> the child mremaps the shmem region. Then the parent faults in a THP into the
> region (nicely aligned). Then the child faults in the same offset in the region
> and gets the THP that the parent allocated; that THP will be aligned in the
> parent's VM space but not in the child's.

Sorry, I did not get your point here. IIUC, the child's VA will also be
aligned if the child's mremap does not set MAP_FIXED, since the child's
mremap will still call shmem_get_unmapped_area() to find an aligned new VA.
Please correct me if I missed your point.
>>> start=0 and end=64K I think?
>>
>> Yes. Unfortunately, some file systems that support large mappings do not
>> perform alignment for multi-size THP (non-PMD-sized, for example 64K). I
>> think this requires modifying __get_unmapped_area() --->
>> thp_get_unmapped_area_vmflags() or file->f_op->get_unmapped_area() to
>> align the VA for multi-size THP in the future.
>
> By nature of the fact that a file mapping is shared between multiple processes
> and each process can map it wherever it wants down to 1 page granularity, it's
> impossible for any THP containing a part of that file to be VA-aligned in every
> process it is mapped in.
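
Spelling out that 64K example against the code above, just to make the
failure concrete (file offsets 0-64K cached in one 16-page folio, mapped
at VA 16K-80K, fault at VA 16K):

	start    = ALIGN_DOWN(16K, 64K)                      = 0K
	end      = start + 64K                               = 64K
	addr     = max(start, vma->vm_start) = max(0K, 16K)  = 16K
	nr_pages = (min(end, vma->vm_end) - addr) >> PAGE_SHIFT
	         = (64K - 16K) / 4K                          = 12
	page     = folio_page(folio, (addr - start) >> PAGE_SHIFT)
	         = folio_page(folio, 4)

but VA 16K actually maps file offset 0, i.e. page 0 of the folio, so the
batched mapping would start from the wrong page.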

Yes, so let me re-polish this patch. Thanks.
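
One possible direction (just an illustrative sketch against this patch's
context, not the respun version) is to batch-map only when the fault
address is naturally positioned within the folio for this VMA and page
table, and otherwise fall back to mapping a single page:

	pgoff_t idx = folio_page_idx(folio, page);
	/* The page offset of vmf->address within the VMA. */
	pgoff_t vma_off = vmf->pgoff - vma->vm_pgoff;
	/* The index of the faulting page's entry in the PTE table. */
	pgoff_t pte_off = pte_index(vmf->address);

	if (vma_off < idx ||
	    vma_off + (nr_pages - idx) > vma_pages(vma) ||
	    pte_off < idx ||
	    pte_off + (nr_pages - idx) > PTRS_PER_PTE) {
		nr_pages = 1;		/* not naturally positioned */
	} else {
		addr = vmf->address - idx * PAGE_SIZE;
		page = &folio->page;	/* map starting from the folio head */
	}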
>> So before adding those VA alignment changes, only allow building the large
>> folio mapping for anonymous shmem:
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 936377220b77..9e4d51826d23 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4786,7 +4786,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>>  	folio = page_folio(page);
>>  	nr_pages = folio_nr_pages(folio);
>>  
>> -	if (unlikely(userfaultfd_armed(vma))) {
>> +	if (unlikely(userfaultfd_armed(vma)) || !vma_is_anon_shmem(vma)) {
>
> If the above theoretical flow for fork & mremap is valid, then I don't think
> this is sufficient.
>
>>  		nr_pages = 1;
>>  	} else if (nr_pages > 1) {
>>  		unsigned long start = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
>>
>>> Additionally, I think this path will end up mapping the entire folio (as long as
>>> it fits in the VMA). But this bypasses the fault-around configuration. As I
>>> think I mentioned against the RFC, this will inflate the RSS of the process and
>>> can cause behavioural changes as a result. I believe the current advice is to
>>> disable fault-around to prevent this kind of bloat when needed.
>>
>> With the above change, I do not think this is a problem, since users
>> already want to use mTHP for anonymous shmem.
>>
>>> It might be that you need a special variant of finish_fault() for shmem?
>>>
>>>
>>>> +	}
>>>>  	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
>>>> -				       vmf->address, &vmf->ptl);
>>>> +				       addr, &vmf->ptl);
>>>>  	if (!vmf->pte)
>>>>  		return VM_FAULT_NOPAGE;
>>>>  
>>>>  	/* Re-check under ptl */
>>>> -	if (likely(!vmf_pte_changed(vmf))) {
>>>> -		struct folio *folio = page_folio(page);
>>>> -		int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
>>>> -
>>>> -		set_pte_range(vmf, folio, page, 1, vmf->address);
>>>> -		add_mm_counter(vma->vm_mm, type, 1);
>>>> -		ret = 0;
>>>> -	} else {
>>>> -		update_mmu_tlb(vma, vmf->address, vmf->pte);
>>>> +	if (nr_pages == 1 && unlikely(vmf_pte_changed(vmf))) {
>>>> +		update_mmu_tlb(vma, addr, vmf->pte);
>>>> +		ret = VM_FAULT_NOPAGE;
>>>> +		goto unlock;
>>>> +	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
>>>> +		for (i = 0; i < nr_pages; i++)
>>>> +			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
>>>>  		ret = VM_FAULT_NOPAGE;
>>>> +		goto unlock;
>>>>  	}
>>>>  
>>>> +	set_pte_range(vmf, folio, page, nr_pages, addr);
>>>> +	type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
>>>> +	add_mm_counter(vma->vm_mm, type, nr_pages);
>>>> +	ret = 0;
>>>> +
>>>> +unlock:
>>>>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>>  	return ret;
>>>>  }