Message-ID: <9c0be55d-0b02-4f39-a871-3ce495e32a87@huaweicloud.com>
Date: Thu, 25 Dec 2025 11:45:16 +0800
From: Chen Ridong <chenridong@...weicloud.com>
To: Qi Zheng <qi.zheng@...ux.dev>, hannes@...xchg.org, hughd@...gle.com,
mhocko@...e.com, roman.gushchin@...ux.dev, shakeel.butt@...ux.dev,
muchun.song@...ux.dev, david@...nel.org, lorenzo.stoakes@...cle.com,
ziy@...dia.com, harry.yoo@...cle.com, imran.f.khan@...cle.com,
kamalesh.babulal@...cle.com, axelrasmussen@...gle.com, yuanchu@...gle.com,
weixugc@...gle.com, mkoutny@...e.com, akpm@...ux-foundation.org,
hamzamahfooz@...ux.microsoft.com, apais@...ux.microsoft.com,
lance.yang@...ux.dev
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, Qi Zheng <zhengqi.arch@...edance.com>
Subject: Re: [PATCH v2 04/28] mm: vmscan: prepare for the refactoring the
move_folios_to_lru()

On 2025/12/17 15:27, Qi Zheng wrote:
> From: Qi Zheng <zhengqi.arch@...edance.com>
>
> After refactoring move_folios_to_lru(), its callers no longer need to
> hold the lruvec lock; IRQs need to be disabled only for
> __count_vm_events() and __mod_node_page_state().
>
nit: I think this would read more clearly as:

"For shrink_inactive_list(), shrink_active_list() and evict_folios(),
disabling IRQs is only needed for __count_vm_events() and
__mod_node_page_state()."
> On PREEMPT_RT kernels, local_irq_disable() cannot be used. To avoid
> local_irq_disable() and to shrink the IRQ-disabled critical section,
> make all callers of move_folios_to_lru() use the IRQ-safe
> count_vm_events() and mod_node_page_state().
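
For anyone reading along: the distinction is that the double-underscore
vmstat helpers assume the caller already blocks interruption (e.g. IRQs
off), while the plain variants are safe from any context. From memory,
the include/linux/vmstat.h pattern is roughly the following (a
simplified sketch, not the exact upstream code):

	/* Caller must prevent interruption, e.g. by disabling IRQs. */
	static inline void __count_vm_events(enum vm_event_item item,
					     long delta)
	{
		raw_cpu_add(vm_event_states.event[item], delta);
	}

	/* IRQ-safe: this_cpu_add() is safe against interrupts. */
	static inline void count_vm_events(enum vm_event_item item,
					   long delta)
	{
		this_cpu_add(vm_event_states.event[item], delta);
	}

So dropping the __ prefix trades a slightly more expensive per-event
update for not having to keep IRQs disabled across the whole section.
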
>
> Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
> ---
> mm/vmscan.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 28d9b3af47130..49e5661746213 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2021,12 +2021,12 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>
> mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
> stat.nr_demoted);
> - __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> + mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
> if (!cgroup_reclaim(sc))
> - __count_vm_events(item, nr_reclaimed);
> + count_vm_events(item, nr_reclaimed);
> count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
> - __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
> + count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
>
> lru_note_cost_unlock_irq(lruvec, file, stat.nr_pageout,
> nr_scanned - nr_reclaimed);
> @@ -2171,10 +2171,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
> nr_activate = move_folios_to_lru(lruvec, &l_active);
> nr_deactivate = move_folios_to_lru(lruvec, &l_inactive);
>
> - __count_vm_events(PGDEACTIVATE, nr_deactivate);
> + count_vm_events(PGDEACTIVATE, nr_deactivate);
> count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
>
> - __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> + mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
>
> lru_note_cost_unlock_irq(lruvec, file, 0, nr_rotated);
> trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
> @@ -4751,9 +4751,9 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
>
> item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
> if (!cgroup_reclaim(sc))
> - __count_vm_events(item, reclaimed);
> + count_vm_events(item, reclaimed);
> count_memcg_events(memcg, item, reclaimed);
> - __count_vm_events(PGSTEAL_ANON + type, reclaimed);
> + count_vm_events(PGSTEAL_ANON + type, reclaimed);
>
> spin_unlock_irq(&lruvec->lru_lock);
>
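
A side note on the NR_ISOLATED_ANON updates: when the cmpxchg-based
fast path is not available, mod_node_page_state() is essentially the
__ variant wrapped in local_irq_save()/local_irq_restore(). From
memory, the mm/vmstat.c shape is roughly (details vary by config):

	void mod_node_page_state(struct pglist_data *pgdat,
				 enum node_stat_item item, long delta)
	{
		unsigned long flags;

		local_irq_save(flags);
		__mod_node_page_state(pgdat, item, delta);
		local_irq_restore(flags);
	}

so the conversion is correct from any context, and on configurations
with local cmpxchg it can avoid disabling IRQs entirely.
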
Reviewed-by: Chen Ridong <chenridong@...wei.com>
--
Best regards,
Ridong