Message-ID: <65187d0371e692e52f14ed7b80cf95e8f15d7a7d.1768389889.git.zhengqi.arch@bytedance.com>
Date: Wed, 14 Jan 2026 19:26:47 +0800
From: Qi Zheng <qi.zheng@...ux.dev>
To: hannes@...xchg.org,
hughd@...gle.com,
mhocko@...e.com,
roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev,
muchun.song@...ux.dev,
david@...nel.org,
lorenzo.stoakes@...cle.com,
ziy@...dia.com,
harry.yoo@...cle.com,
yosry.ahmed@...ux.dev,
imran.f.khan@...cle.com,
kamalesh.babulal@...cle.com,
axelrasmussen@...gle.com,
yuanchu@...gle.com,
weixugc@...gle.com,
chenridong@...weicloud.com,
mkoutny@...e.com,
akpm@...ux-foundation.org,
hamzamahfooz@...ux.microsoft.com,
apais@...ux.microsoft.com,
lance.yang@...ux.dev
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org,
Qi Zheng <zhengqi.arch@...edance.com>,
Chen Ridong <chenridong@...wei.com>
Subject: [PATCH v3 04/30] mm: vmscan: prepare for refactoring move_folios_to_lru()
From: Qi Zheng <zhengqi.arch@...edance.com>
Once move_folios_to_lru() is refactored, its callers will no longer need to
hold the lruvec lock. For shrink_inactive_list(), shrink_active_list() and
evict_folios(), disabling IRQs will then only be needed for
__count_vm_events() and __mod_node_page_state().

To avoid using local_irq_disable() on PREEMPT_RT kernels, make all callers
of move_folios_to_lru() use the IRQ-safe count_vm_events() and
mod_node_page_state() instead.
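
For reference, a simplified sketch of how the two counter variants differ
(modeled on include/linux/vmstat.h; exact definitions vary by kernel
version and config):

	/*
	 * __count_vm_events() uses raw_cpu_add() and relies on the caller
	 * having preemption/IRQs disabled; count_vm_events() uses
	 * this_cpu_add(), which is safe without disabling IRQs and is
	 * therefore usable on PREEMPT_RT.
	 */
	static inline void __count_vm_events(enum vm_event_item item, long delta)
	{
		raw_cpu_add(vm_event_states.event[item], delta);
	}

	static inline void count_vm_events(enum vm_event_item item, long delta)
	{
		this_cpu_add(vm_event_states.event[item], delta);
	}

The same relationship holds for __mod_node_page_state() versus
mod_node_page_state().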
Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
Acked-by: Johannes Weiner <hannes@...xchg.org>
Acked-by: Shakeel Butt <shakeel.butt@...ux.dev>
Reviewed-by: Chen Ridong <chenridong@...wei.com>
---
mm/vmscan.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1ede4f23b9a6f..5c59c275c4463 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2045,12 +2045,12 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
stat.nr_demoted);
- __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
+ mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
if (!cgroup_reclaim(sc))
- __count_vm_events(item, nr_reclaimed);
+ count_vm_events(item, nr_reclaimed);
count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
- __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
+ count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
lru_note_cost_unlock_irq(lruvec, file, stat.nr_pageout,
nr_scanned - nr_reclaimed);
@@ -2195,10 +2195,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
nr_activate = move_folios_to_lru(lruvec, &l_active);
nr_deactivate = move_folios_to_lru(lruvec, &l_inactive);
- __count_vm_events(PGDEACTIVATE, nr_deactivate);
+ count_vm_events(PGDEACTIVATE, nr_deactivate);
count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
- __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
+ mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
lru_note_cost_unlock_irq(lruvec, file, 0, nr_rotated);
trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
@@ -4788,9 +4788,9 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
if (!cgroup_reclaim(sc))
- __count_vm_events(item, reclaimed);
+ count_vm_events(item, reclaimed);
count_memcg_events(memcg, item, reclaimed);
- __count_vm_events(PGSTEAL_ANON + type, reclaimed);
+ count_vm_events(PGSTEAL_ANON + type, reclaimed);
spin_unlock_irq(&lruvec->lru_lock);
--
2.20.1