Message-ID: <b8258f91-ad92-419e-a0a1-a8db706c814c@redhat.com>
Date: Tue, 1 Jul 2025 15:08:12 +0200
From: David Hildenbrand <david@...hat.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, akpm@...ux-foundation.org,
hughd@...gle.com
Cc: ziy@...dia.com, lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com,
npache@...hat.com, ryan.roberts@....com, dev.jain@....com,
baohua@...nel.org, vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com,
mhocko@...e.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: support large mapping building for tmpfs
On 01.07.25 10:40, Baolin Wang wrote:
Nit: talking about "large mappings" is confusing. Did you actually mean:

"mm: fault in complete folios instead of individual pages for tmpfs"

I suggest not talking about "large mappings" anywhere in this patch
description, and instead talking about mapping multiple consecutive
pages of a tmpfs folio at once.
> After commit acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs"),
> tmpfs can also support large folio allocation (not just PMD-sized large
> folios).
>
> However, when accessing tmpfs via mmap(), although tmpfs supports large folios,
> we still establish mappings at the base page granularity, which is unreasonable.
>
> We can establish large mappings according to the size of the large folio. On one
> hand, this can reduce the overhead of page faults; on the other hand, it can
> leverage hardware architecture optimizations to reduce TLB misses, such as
> contiguous PTEs on the ARM architecture.
The latter would still apply when faulting in each individual page, I
guess. cont-pte will try to auto-optimize IIRC.
>
> Moreover, since the user has already added the 'huge=' option when mounting tmpfs
> to allow for large folio allocation, establishing large folio mappings is expected
> and will not surprise users by inflating the RSS of the process.
Hm, are we sure about that? Also, how does fault_around_bytes interact here?
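For reference: fault_around_bytes is the debugfs knob at
/sys/kernel/debug/fault_around_bytes (default 65536), which bounds how
many bytes around a fault do_fault_around() will map from pages that
are already in the page cache. A minimal userspace sketch to inspect
it (the error handling and output format are mine):

	#include <stdio.h>

	int main(void)
	{
		unsigned long bytes;
		FILE *f = fopen("/sys/kernel/debug/fault_around_bytes", "r");

		if (!f || fscanf(f, "%lu", &bytes) != 1)
			return 1;
		fclose(f);

		/* default 65536 => up to 16 PTEs per fault with 4K pages */
		printf("fault-around window: %lu bytes (%lu pages)\n",
		       bytes, bytes / 4096);
		return 0;
	}

With the 64K default, fault-around already populates up to 16 PTEs per
fault for pages present in the page cache, which is why the RSS
argument deserves a closer look.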
>
> In order to support large mappings for tmpfs, besides checking VMA limits and
> PMD pagetable limits, it is also necessary to check if the linear page offset
> of the VMA is order-aligned within the file.
Why?
This only applies to PMD mappings. See below.
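For what it's worth, the new condition reduces to checking whether the
folio's offset in the file and its offset in the virtual address space
are congruent modulo the folio size. A rough userspace illustration of
that arithmetic (the helper name and the sample values are mine, not
from the patch):

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12

	/*
	 * Illustrative only: same shape as the patch's IS_ALIGNED() test.
	 * vm_start is the VMA start address, vm_pgoff its file offset in
	 * pages, nr the folio size in pages (a power of two).
	 */
	static bool folio_naturally_aligned(unsigned long vm_start,
					    unsigned long vm_pgoff,
					    unsigned long nr)
	{
		unsigned long off = (vm_start >> PAGE_SHIFT) - vm_pgoff;

		return (off & (nr - 1)) == 0;
	}

	int main(void)
	{
		/* 64K folio (16 pages), file offset 0 at a 64K-aligned VA */
		printf("%d\n", folio_naturally_aligned(0x100000, 0, 16)); /* 1 */
		/* same folio, VMA start shifted by one page: misaligned */
		printf("%d\n", folio_naturally_aligned(0x101000, 0, 16)); /* 0 */
		return 0;
	}

As discussed below, set_ptes() itself does not require this; only the
cont-PTE optimization cares about natural alignment.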
>
> Performance test:
> I created a 1G tmpfs file, populated with 64K large folios, and accessed it
> sequentially via mmap(). I observed a significant performance improvement:
>
> Before the patch:
> real 0m0.214s
> user 0m0.012s
> sys 0m0.203s
>
> After the patch:
> real 0m0.025s
> user 0m0.000s
> sys 0m0.024s
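For anyone wanting to reproduce the numbers, a minimal sketch of such a
test (the tmpfs path, the huge= mount option and the pre-population
step are my assumptions, not details from the patch):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define FILE_SIZE	(1UL << 30)	/* 1G */

	int main(void)
	{
		/* file on a tmpfs mounted with huge=, pre-populated e.g. via dd */
		int fd = open("/mnt/tmpfs/testfile", O_RDONLY);
		unsigned char *p;
		unsigned long i, sum = 0;

		if (fd < 0)
			return 1;

		p = mmap(NULL, FILE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;

		/* sequential access, one touch per base page; run under time(1) */
		for (i = 0; i < FILE_SIZE; i += 4096)
			sum += p[i];

		munmap(p, FILE_SIZE);
		close(fd);
		printf("%lu\n", sum);	/* keep the loop from being optimized away */
		return 0;
	}

The pre-population matters: only folios already present in the page
cache can be mapped in one go at fault time.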
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/memory.c | 13 +++++++++----
> 1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 0f9b32a20e5b..6385a9385a9b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5383,10 +5383,10 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>
> /*
> * Using per-page fault to maintain the uffd semantics, and same
> - * approach also applies to non-anonymous-shmem faults to avoid
> + * approach also applies to non shmem/tmpfs faults to avoid
> * inflating the RSS of the process.
> */
> - if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
> + if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
> unlikely(needs_fallback)) {
> nr_pages = 1;
> } else if (nr_pages > 1) {
> @@ -5395,15 +5395,20 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
> pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
> /* The index of the entry in the pagetable for fault page. */
> pgoff_t pte_off = pte_index(vmf->address);
> + unsigned long hpage_size = PAGE_SIZE << folio_order(folio);
>
> /*
> * Fallback to per-page fault in case the folio size in page
> - * cache beyond the VMA limits and PMD pagetable limits.
> + * cache beyond the VMA limits or PMD pagetable limits. And
> + * also check if the linear page offset of vma is order-aligned
> + * within the file for tmpfs.
> */
> if (unlikely(vma_off < idx ||
> vma_off + (nr_pages - idx) > vma_pages(vma) ||
> pte_off < idx ||
> - pte_off + (nr_pages - idx) > PTRS_PER_PTE)) {
> + pte_off + (nr_pages - idx) > PTRS_PER_PTE) ||
> + !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
> + hpage_size >> PAGE_SHIFT)) {
Again, why? Shouldn't set_pte_range() just do the right thing?
set_ptes() doesn't have any such restriction.
Also see the arm64 variant where we call

	contpte_set_ptes(mm, addr, ptep, pte, nr);

There, I think we check whether we can set the cont-pte bit IIUC.
	if (((addr | next | (pfn << PAGE_SHIFT)) & ~CONT_PTE_MASK) == 0)
		pte = pte_mkcont(pte);
	else
		pte = pte_mknoncont(pte);
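In other words: with a 4K granule, CONT_PTES is 16 and the cont bit is
only set when both the virtual range and the physical range are
64K-aligned. A small userspace illustration of that mask test (the
constants are assumed for a 4K granule, not copied from the arm64
headers):

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define CONT_PTES	16				/* 4K granule */
	#define CONT_PTE_MASK	(~(CONT_PTES * 4096UL - 1))	/* 64K mask */

	static bool can_mkcont(unsigned long addr, unsigned long next,
			       unsigned long pfn)
	{
		/* same shape as the arm64 check quoted above */
		return ((addr | next | (pfn << PAGE_SHIFT)) & ~CONT_PTE_MASK) == 0;
	}

	int main(void)
	{
		/* VA range and PA both 64K-aligned: cont bit can be set */
		printf("%d\n", can_mkcont(0x10000, 0x20000, 0x40000 >> PAGE_SHIFT));
		/* VA off by one page: falls back to non-cont */
		printf("%d\n", can_mkcont(0x11000, 0x21000, 0x40000 >> PAGE_SHIFT));
		return 0;
	}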
--
Cheers,
David / dhildenb