Message-ID: <282545E0-5B66-492D-B63F-838C6F066A22@nvidia.com>
Date: Tue, 08 Apr 2025 11:29:43 -0400
From: Zi Yan <ziy@...dia.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: akpm@...ux-foundation.org, willy@...radead.org, david@...hat.com,
21cnbao@...il.com, ryan.roberts@....com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>
Subject: Re: [RFC PATCH] mm: huge_memory: add folio_mark_accessed() when
zapping file THP
On 8 Apr 2025, at 9:16, Baolin Wang wrote:
> When investigating performance issues during file folio unmap, I noticed some
> behavioral differences in handling non-PMD-sized folios and PMD-sized folios.
> For non-PMD-sized file folios, the unmap path calls folio_mark_accessed() to
> record that the folio has seen activity, but this is not done for PMD-sized folios.
>
> This might not cause obvious issues, but it could lead to more frequent
> refaults of PMD-sized file folios under memory pressure. Therefore, I am
> unsure whether folio_mark_accessed() should be
How likely is the system to have PMD-sized file folios when it is under
memory pressure? Johannes’ recent patch increases the THP allocation success
rate, so maybe this was not happening before but will be after that patch?
> added for PMD-sized file folios?
Do you see any performance change after your patch?
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/huge_memory.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 6ac6d468af0d..b3ade7ac5bbf 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2262,6 +2262,10 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> zap_deposited_table(tlb->mm, pmd);
> add_mm_counter(tlb->mm, mm_counter_file(folio),
> -HPAGE_PMD_NR);
> +
> + if (flush_needed && pmd_young(orig_pmd) &&
> + likely(vma_has_recency(vma)))
> + folio_mark_accessed(folio);
> }
>
> spin_unlock(ptl);
> --
> 2.43.5
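
For comparison, the pte-level unmap path already gates folio_mark_accessed()
on a similar recency check for file folios. A condensed sketch of that logic
(paraphrased from the pte-zapping code in mm/memory.c, not verbatim; error
handling and surrounding bookkeeping elided):

```
/* Sketch of the existing pte-level behavior that the patch above
 * mirrors at the PMD level (simplified, not the exact kernel code). */
if (!folio_test_anon(folio)) {
	/* Only file-backed folios get the accessed-bit treatment here. */
	if (pte_young(ptent) && likely(vma_has_recency(vma)))
		folio_mark_accessed(folio);
}
```

The proposed PMD-level hunk uses the same three ingredients — a young
bit (pmd_young(orig_pmd)), vma_has_recency(), and folio_mark_accessed() —
so the two paths would treat file folios consistently on unmap.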
Best Regards,
Yan, Zi