Message-ID: <20260116205247.928004-1-yosry.ahmed@linux.dev>
Date: Fri, 16 Jan 2026 20:52:47 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...nel.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Qi Zheng <zhengqi.arch@...edance.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Davidlohr Bueso <dave@...olabs.net>,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Yosry Ahmed <yosry.ahmed@...ux.dev>,
stable@...r.kernel.org
Subject: [PATCH] mm: Restore per-memcg proactive reclaim with !CONFIG_NUMA

Commit 2b7226af730c ("mm/memcg: make memory.reclaim interface generic")
moved the proactive reclaim logic from the memory.reclaim handler into a
generic user_proactive_reclaim() helper, so that it could also be used
for per-node proactive reclaim.

However, user_proactive_reclaim() was only defined under CONFIG_NUMA,
with a stub that always returns 0 otherwise. This broke memory.reclaim
on !CONFIG_NUMA configs: writes reported success without actually
attempting any reclaim.
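
For reference, this is the !CONFIG_NUMA stub that memory.reclaim writes
ended up hitting (from the mm/internal.h hunk below, comment mine):

	static inline int user_proactive_reclaim(char *buf,
			struct mem_cgroup *memcg, pg_data_t *pgdat)
	{
		return 0;	/* reports success, no reclaim attempted */
	}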

Move the definition of user_proactive_reclaim() outside CONFIG_NUMA, and
instead define a stub for __node_reclaim() in the !CONFIG_NUMA case.
This is safe because __node_reclaim() is only called from
user_proactive_reclaim() when a write is made to
/sys/devices/system/node/nodeX/reclaim, which only exists with
CONFIG_NUMA, so the stub is never reached in practice.
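
As a quick sanity check (the cgroup path below is just an example), a
proactive reclaim request like:

	echo "128M" > /sys/fs/cgroup/example/memory.reclaim

previously returned success on !CONFIG_NUMA kernels without reclaiming
anything; with this patch it attempts reclaim again.
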
Fixes: 2b7226af730c ("mm/memcg: make memory.reclaim interface generic")
Cc: stable@...r.kernel.org
Signed-off-by: Yosry Ahmed <yosry.ahmed@...ux.dev>
---
 mm/internal.h |  8 --------
 mm/vmscan.c   | 13 +++++++++++--
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 33eb0224f461..9508dbaf47cd 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -615,16 +615,8 @@ extern unsigned long highest_memmap_pfn;
bool folio_isolate_lru(struct folio *folio);
void folio_putback_lru(struct folio *folio);
extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
-#ifdef CONFIG_NUMA
int user_proactive_reclaim(char *buf,
struct mem_cgroup *memcg, pg_data_t *pgdat);
-#else
-static inline int user_proactive_reclaim(char *buf,
- struct mem_cgroup *memcg, pg_data_t *pgdat)
-{
- return 0;
-}
-#endif
/*
* in mm/rmap.c:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7b28018ac995..d9918f24dea0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -7849,6 +7849,17 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
return ret;
}
+#else
+
+static unsigned long __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask,
+ unsigned long nr_pages,
+ struct scan_control *sc)
+{
+ return 0;
+}
+
+#endif
+
enum {
MEMORY_RECLAIM_SWAPPINESS = 0,
MEMORY_RECLAIM_SWAPPINESS_MAX,
@@ -7956,8 +7967,6 @@ int user_proactive_reclaim(char *buf,
return 0;
}
-#endif
-
/**
* check_move_unevictable_folios - Move evictable folios to appropriate zone
* lru list
--
2.52.0.457.g6b5491de43-goog