Message-ID: <70d1dedc-b4fc-4eb6-baf6-9e54b6a62249@lucifer.local>
Date: Mon, 15 Dec 2025 12:38:30 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: akpm@...ux-foundation.org, david@...nel.org, catalin.marinas@....com,
will@...nel.org, ryan.roberts@....com, Liam.Howlett@...cle.com,
vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com, mhocko@...e.com,
riel@...riel.com, harry.yoo@...cle.com, jannh@...gle.com,
willy@...radead.org, baohua@...nel.org, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] mm: rmap: support batched unmapping for file
large folios
On Thu, Dec 11, 2025 at 04:16:56PM +0800, Baolin Wang wrote:
> Similar to folio_referenced_one(), we can apply batched unmapping for file
> large folios to optimize the performance of file folio reclamation.
>
> Performance testing:
> Allocate 10G of clean file-backed folios via mmap() in a memory cgroup, then
> reclaim 8G of them via the memory.reclaim interface. I observe a 75%
> performance improvement on my Arm64 32-core server.

Again, you must test on non-arm64 architectures and report those numbers as
well.
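
When you do, it would help to post the exact reproducer as well. For
reference, here is a minimal userspace sketch of the test as I read the
description above. The file path, the cgroup name, the assumption that the
process already runs inside that cgroup, and a filesystem with large folio
support are all my assumptions, not taken from the patch:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE	(10UL << 30)	/* 10G of file-backed memory */

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	/* Pre-sized 10G file on a large-folio-capable fs (e.g. XFS). */
	int fd = open("/mnt/testfile", O_RDONLY);
	if (fd < 0)
		return 1;

	char *p = mmap(NULL, MAP_SIZE, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Fault every page in read-only, leaving 10G of clean
	 * file-backed folios charged to this process's memcg. */
	for (unsigned long off = 0; off < MAP_SIZE; off += page_size)
		(void)*(volatile char *)(p + off);

	/* The step being timed: ask the cgroup to reclaim 8G. */
	int rfd = open("/sys/fs/cgroup/test/memory.reclaim", O_WRONLY);
	if (rfd < 0)
		return 1;
	if (write(rfd, "8G", strlen("8G")) < 0)
		return 1;

	close(rfd);
	munmap(p, MAP_SIZE);
	close(fd);
	return 0;
}
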
>
> W/o patch:
> real 0m1.018s
> user 0m0.000s
> sys 0m1.018s
>
> W/ patch:
> real 0m0.249s
> user 0m0.000s
> sys 0m0.249s
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/rmap.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index ec232165c47d..4c9d5777c8da 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1855,9 +1855,10 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
>  	end_addr = pmd_addr_end(addr, vma->vm_end);
>  	max_nr = (end_addr - addr) >> PAGE_SHIFT;
>
> -	/* We only support lazyfree batching for now ... */
> -	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
> +	/* We only support lazyfree or file folios batching for now ... */
> +	if (folio_test_anon(folio) && folio_test_swapbacked(folio))

Why is it now OK to support file-backed batched unmapping when it wasn't in
Barry's series (see [0])? You don't seem to justify this anywhere.

[0]: https://lore.kernel.org/all/20250214093015.51024-4-21cnbao@gmail.com/T/#u
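
To check my own reading of the flipped predicate: the old condition ruled out
batching for everything except lazyfree anon folios, while the new one only
rules out regular swap-backed anon. A standalone userspace sketch (my reading,
not kernel code; "batch" means the early return 1 is not taken):

#include <stdbool.h>
#include <stdio.h>

/* Old: if (!anon || swapbacked) return 1; -> batch lazyfree anon only. */
static bool old_may_batch(bool anon, bool swapbacked)
{
	return !(!anon || swapbacked);
}

/* New: if (anon && swapbacked) return 1; -> file folios batch as well. */
static bool new_may_batch(bool anon, bool swapbacked)
{
	return !(anon && swapbacked);
}

int main(void)
{
	static const struct {
		const char *type;
		bool anon, swapbacked;
	} cases[] = {
		{ "regular anon",	true,  true  },
		{ "lazyfree anon",	true,  false },
		{ "page cache file",	false, false },
		{ "shmem",		false, true  },
	};

	for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-16s old: %-3s new: %s\n", cases[i].type,
		       old_may_batch(cases[i].anon, cases[i].swapbacked) ? "yes" : "no",
		       new_may_batch(cases[i].anon, cases[i].swapbacked) ? "yes" : "no");
	return 0;
}

If that table is right, this also turns on batching for shmem (swapbacked but
not anon), which is worth calling out explicitly in the commit message.
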
>  		return 1;
> +
>  	if (pte_unused(pte))
>  		return 1;
>
> @@ -2223,7 +2224,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  		 *
>  		 * See Documentation/mm/mmu_notifier.rst
>  		 */
> -		dec_mm_counter(mm, mm_counter_file(folio));
> +		add_mm_counter(mm, mm_counter_file(folio), -nr_pages);
Was this just a bug before?
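
To partially answer my own question: as far as I can tell from
include/linux/mm.h, dec_mm_counter() is just add_mm_counter() with a value of
-1, so for a single page the two forms are equivalent and only the batched
form survives nr_pages > 1:

	/* Equivalent while nr_pages == 1 (my reading of include/linux/mm.h): */
	dec_mm_counter(mm, mm_counter_file(folio));
	add_mm_counter(mm, mm_counter_file(folio), -1);

	/* Once PTEs are batched, the whole batch must come off the counter: */
	add_mm_counter(mm, mm_counter_file(folio), -nr_pages);

If so, the old code wasn't buggy on its own - before this patch file folios
were never batched here, so nr_pages was always 1 - and the switch is simply
required once batching applies. Please confirm.
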
>  	}
>  discard:
>  	if (unlikely(folio_test_hugetlb(folio))) {
> --
> 2.47.3
>
Thanks, Lorenzo