Message-ID: <f3a8fe43c0d8572ad942354cf649c9f25ae762bc.1770279888.git.zhengqi.arch@bytedance.com>
Date: Thu, 5 Feb 2026 17:01:47 +0800
From: Qi Zheng <qi.zheng@...ux.dev>
To: hannes@...xchg.org,
hughd@...gle.com,
mhocko@...e.com,
roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev,
muchun.song@...ux.dev,
david@...nel.org,
lorenzo.stoakes@...cle.com,
ziy@...dia.com,
harry.yoo@...cle.com,
yosry.ahmed@...ux.dev,
imran.f.khan@...cle.com,
kamalesh.babulal@...cle.com,
axelrasmussen@...gle.com,
yuanchu@...gle.com,
weixugc@...gle.com,
chenridong@...weicloud.com,
mkoutny@...e.com,
akpm@...ux-foundation.org,
hamzamahfooz@...ux.microsoft.com,
apais@...ux.microsoft.com,
lance.yang@...ux.dev,
bhe@...hat.com
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org,
Qi Zheng <zhengqi.arch@...edance.com>
Subject: [PATCH v4 28/31] mm: workingset: use lruvec_lru_size() to get the number of lru pages
From: Qi Zheng <zhengqi.arch@...edance.com>
For cgroup v2, count_shadow_nodes() is the only place that reads
non-hierarchical stats (lruvec_stats->state_local). To avoid having to
consider cgroup v2 in the subsequent reparenting of non-hierarchical
stats, use lruvec_lru_size() instead of lruvec_page_state_local() to get
the number of LRU pages.
For the NR_SLAB_RECLAIMABLE_B and NR_SLAB_UNRECLAIMABLE_B counters, the
local statistics read here have already been inaccurate for a while,
since slab pages are reparented on memcg offline. So just ignore them
for now.
Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
---
include/linux/swap.h | 1 +
mm/vmscan.c | 3 +--
mm/workingset.c | 5 +++--
3 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 62e124ec6b75a..2ee736438b9d6 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -370,6 +370,7 @@ extern void swap_setup(void);
extern unsigned long zone_reclaimable_pages(struct zone *zone);
extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
gfp_t gfp_mask, nodemask_t *mask);
+unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx);
#define MEMCG_RECLAIM_MAY_SWAP (1 << 1)
#define MEMCG_RECLAIM_PROACTIVE (1 << 2)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8c6f8f0df24b1..4dc52b4b4af50 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -390,8 +390,7 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
* @lru: lru to use
* @zone_idx: zones to consider (use MAX_NR_ZONES - 1 for the whole LRU list)
*/
-static unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru,
- int zone_idx)
+unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
{
unsigned long size = 0;
int zid;
diff --git a/mm/workingset.c b/mm/workingset.c
index 2be53098f6282..ecdc293cd06fb 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -684,9 +684,10 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
mem_cgroup_flush_stats_ratelimited(sc->memcg);
lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+
for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
- pages += lruvec_page_state_local(lruvec,
- NR_LRU_BASE + i);
+ pages += lruvec_lru_size(lruvec, i, MAX_NR_ZONES - 1);
+
pages += lruvec_page_state_local(
lruvec, NR_SLAB_RECLAIMABLE_B) >> PAGE_SHIFT;
pages += lruvec_page_state_local(
--
2.20.1