Message-Id: <20250122200545.43513-1-sj@kernel.org>
Date: Wed, 22 Jan 2025 12:05:45 -0800
From: SeongJae Park <sj@...nel.org>
To: Vinay Banakar <vny@...gle.com>
Cc: SeongJae Park <sj@...nel.org>,
Bharata B Rao <bharata@....com>,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org,
willy@...radead.org,
mgorman@...e.de,
Wei Xu <weixugc@...gle.com>,
Greg Thelen <gthelen@...gle.com>
Subject: Re: [PATCH] mm: Optimize TLB flushes during page reclaim
On Wed, 22 Jan 2025 07:28:56 -0600 Vinay Banakar <vny@...gle.com> wrote:
> On Wed, Jan 22, 2025 at 2:59 AM Bharata B Rao <bharata@....com> wrote:
> > While that may be true for MADV_PAGEOUT path, does the same assumption
> > hold good for other paths from which shrink_folio_list() gets called?
>
> shrink_folio_list() is called by three other functions, each with
> different batching behavior:
> - reclaim_clean_pages_from_list(): Doesn't do PMD batching but only
> processes clean pages, so it won't take the path affected by this
> patch. This is called from the contiguous memory allocator
> (cma_alloc#alloc_contig_range)
> - shrink_inactive_list(): Reclaims inactive pages SWAP_CLUSTER_MAX
> (default 32) at a time. With this patch, we reduce IPIs for TLB
> flushes by a factor of 32 in kswapd
> - evict_folios(): In the MGLRU case, the number of pages
> shrink_folio_list() processes can vary between 64 (MIN_LRU_BATCH) and
> 4096 (MAX_LRU_BATCH). The reduction in IPIs will vary accordingly
damon_pa_pageout() from mm/damon/paddr.c also calls shrink_folio_list(),
similar to madvise.c, but it is not aware of such batching behavior. Have you
checked that path?
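
For reference, the IPI reduction above is simply one flush per batch instead of
one flush per page. A minimal user-space sketch of that arithmetic (not kernel
code; the total page count below is an arbitrary illustrative number, and
SWAP_CLUSTER_MAX is hard-coded to its current default of 32):

	/* flush_count_sketch.c: compare per-page vs per-batch flush counts. */
	#include <stdio.h>

	#define SWAP_CLUSTER_MAX 32	/* pages shrink_inactive_list() isolates per call */

	int main(void)
	{
		unsigned long nr_pages = 1UL << 20;	/* example: pages reclaimed overall */

		/* One TLB flush IPI per page (the unbatched model). */
		unsigned long per_page_flushes = nr_pages;

		/* One TLB flush IPI per batch of SWAP_CLUSTER_MAX pages (the batched model). */
		unsigned long per_batch_flushes =
			(nr_pages + SWAP_CLUSTER_MAX - 1) / SWAP_CLUSTER_MAX;

		printf("pages reclaimed:   %lu\n", nr_pages);
		printf("per-page flushes:  %lu\n", per_page_flushes);
		printf("per-batch flushes: %lu (reduction: %lux)\n",
		       per_batch_flushes, per_page_flushes / per_batch_flushes);
		return 0;
	}

The same sketch applies to the MGLRU case, with the batch size varying between
64 (MIN_LRU_BATCH) and 4096 (MAX_LRU_BATCH) instead of being fixed at 32.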
Thanks,
SJ
>
> Thanks!
> Vinay