Message-ID: <20251015175041.40408-1-manish1588@gmail.com>
Date: Wed, 15 Oct 2025 23:20:41 +0530
From: Manish Kumar <manish1588@...il.com>
To: akpm@...ux-foundation.org
Cc: vbabka@...e.cz,
surenb@...gle.com,
mhocko@...e.com,
jackmanb@...gle.com,
hannes@...xchg.org,
ziy@...dia.com,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Manish Kumar <manish1588@...il.com>
Subject: [PATCH] mm/page_isolation: clarify FIXME around shrink_slab() in memory hotplug
The existing FIXME comment notes that memory hotplug doesn't invoke
shrink_slab() directly, but does not say why. Expand the comment to
explain that this is an intentional design choice: calling shrink_slab()
from the hotplug path could recurse into, or deadlock against, the memory
reclaim path, so slab shrinking is left to vmscan.
Signed-off-by: Manish Kumar <manish1588@...il.com>
---
mm/page_isolation.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
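Note for reviewers (not intended for the changelog): below is a minimal
userspace sketch of the range clamping done by the unchanged code in the
hunk, i.e. restricting the unmovable-page scan to the intersection of
[start_pfn, end_pfn) and the page's pageblock. The helper names and the
pageblock size here are illustrative only, not taken from
mm/page_isolation.c.

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* illustrative value, not the kernel's */

/* Hypothetical stand-ins for the kernel's pageblock helpers. */
static unsigned long pageblock_start(unsigned long pfn)
{
	return pfn & ~(PAGEBLOCK_NR_PAGES - 1);
}

static unsigned long pageblock_end(unsigned long pfn)
{
	return pageblock_start(pfn) + PAGEBLOCK_NR_PAGES;
}

int main(void)
{
	unsigned long start_pfn = 1000, end_pfn = 2000;
	unsigned long page_pfn = 1500;	/* pfn of the page being isolated */

	/* Same shape as check_unmovable_start/check_unmovable_end in the hunk below. */
	unsigned long check_start = page_pfn > start_pfn ? page_pfn : start_pfn;
	unsigned long check_end = pageblock_end(page_pfn) < end_pfn ?
				  pageblock_end(page_pfn) : end_pfn;

	/* Prints "scan [1500, 1536)": only the overlapping part is checked. */
	printf("scan [%lu, %lu)\n", check_start, check_end);
	return 0;
}

With these numbers the pageblock is [1024, 1536), so only [1500, 1536)
overlaps the requested range and gets scanned; pages outside it are never
rechecked.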
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index b2fc5266e3d2..2ca20c3f0a97 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -176,10 +176,17 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
/*
* FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
- * We just check MOVABLE pages.
+ *
+ * This is an intentional limitation: invoking shrink_slab() from a
+ * hotplug path can cause reclaim recursion or deadlock if the normal
+ * memory reclaim (vmscan) path is already active. Slab shrinking is
+ * handled by the vmscan reclaim code under normal operation, so hotplug
+ * avoids direct calls into shrink_slab() to prevent reentrancy issues.
+ *
+ * We therefore only check MOVABLE pages here.
*
* Pass the intersection of [start_pfn, end_pfn) and the page's pageblock
* to avoid redundant checks.
*/
check_unmovable_start = max(page_to_pfn(page), start_pfn);
check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
end_pfn);
--
2.43.0