Message-ID: <20250925181106.3924a90c@fangorn>
Date: Thu, 25 Sep 2025 18:11:06 -0400
From: Rik van Riel <riel@...riel.com>
To: Frank van der Linden <fvdl@...gle.com>
Cc: akpm@...ux-foundation.org, muchun.song@...ux.dev, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, hannes@...xchg.org, david@...hat.com,
 roman.gushchin@...ux.dev, kernel-team@...a.com
Subject: [RFC PATCH 13/12] mm,cma: add compaction cma balance helper for
 direct reclaim

On Mon, 15 Sep 2025 19:51:41 +0000
Frank van der Linden <fvdl@...gle.com> wrote:

> This is an RFC on a solution to the long standing problem of OOMs
> occurring when the kernel runs out of space for unmovable allocations
> in the face of large amounts of CMA.

In order to make the CMA balancing code useful without hugetlb involvement,
e.g. when satisfying a plain !__GFP_MOVABLE allocation, I added two
patches that invoke CMA balancing from the page reclaim code when needed.

With these changes, we might no longer need to call the CMA balancing
code from the hugetlb free path, and could potentially simplify some
things in that area.
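
To give a rough idea of what these two follow-on patches do, here is a
sketch of the kind of call site they add in the reclaim path. The function
name, the exact hook point, and the SWAP_CLUSTER_MAX batch size are
illustrative assumptions only, not taken from the actual patches:

/*
 * Hypothetical call site in the direct reclaim path (e.g. mm/vmscan.c).
 * When an unmovable (!__GFP_MOVABLE) allocation ends up in direct
 * reclaim while CMA pageblocks still have free memory, migrate a
 * bounded batch of movable pages into CMA instead of reclaiming
 * even more memory.
 */
static void maybe_balance_cma(struct zonelist *zonelist, gfp_t gfp_mask)
{
	/* Movable allocations can use CMA directly; nothing to do. */
	if (gfp_mask & __GFP_MOVABLE)
		return;

	/* Ask compaction to move a small, bounded number of pages. */
	balance_cma_zonelist(zonelist, SWAP_CLUSTER_MAX);
}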

---8<---
From 99991606760fdf8399255d7fc1f21b58069a4afe Mon Sep 17 00:00:00 2001
From: Rik van Riel <riel@...a.com>
Date: Tue, 23 Sep 2025 10:01:42 -0700
Subject: [PATCH 2/3] mm,cma: add compaction cma balance helper for direct reclaim

Add a cma balance helper for the direct reclaim code, which does not
balance CMA free memory all the way, but only migrates a limited number
of pages.

Signed-off-by: Rik van Riel <riel@...riel.com>
---
 mm/compaction.c | 20 ++++++++++++++++++--
 mm/internal.h   |  7 +++++++
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 3200119b8baf..90478c29db60 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2541,7 +2541,7 @@ isolate_free_cma_pages(struct compact_control *cc)
 	cc->free_pfn = next_pfn;
 }
 
-static void balance_zone_cma(struct zone *zone, struct cma *cma)
+static void balance_zone_cma(struct zone *zone, struct cma *cma, int target)
 {
 	struct compact_control cc = {
 		.zone = zone,
@@ -2613,6 +2613,13 @@ static void balance_zone_cma(struct zone *zone, struct cma *cma)
 		nr_pages = min(nr_pages, cma_get_available(cma));
 	nr_pages = min(allocated_noncma, nr_pages);
 
+	/*
+	 * When invoked from page reclaim, use the provided target rather
+	 * than the calculated one.
+	 */
+	if (target)
+		nr_pages = target;
+
 	for (order = 0; order < NR_PAGE_ORDERS; order++)
 		INIT_LIST_HEAD(&cc.freepages[order]);
 	INIT_LIST_HEAD(&cc.migratepages);
@@ -2674,10 +2681,19 @@ void balance_node_cma(int nid, struct cma *cma)
 		if (!populated_zone(zone))
 			continue;
 
-		balance_zone_cma(zone, cma);
+		balance_zone_cma(zone, cma, 0);
 	}
 }
 
+void balance_cma_zonelist(struct zonelist *zonelist, int nr_pages)
+{
+	struct zoneref *z;
+	struct zone *zone;
+
+	for_each_zone_zonelist(zone, z, zonelist, MAX_NR_ZONES - 1)
+		balance_zone_cma(zone, NULL, nr_pages);
+}
+
 #endif /* CONFIG_CMA */
 
 static enum compact_result
diff --git a/mm/internal.h b/mm/internal.h
index 7dcaf7214683..5340b94683bf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -942,6 +942,7 @@ struct cma;
 void *cma_reserve_early(struct cma *cma, unsigned long size);
 void init_cma_pageblock(struct page *page);
 void balance_node_cma(int nid, struct cma *cma);
+void balance_cma_zonelist(struct zonelist *zonelist, int nr_pages);
 #else
 static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
 {
@@ -950,6 +951,12 @@ static inline void *cma_reserve_early(struct cma *cma, unsigned long size)
 static inline void init_cma_pageblock(struct page *page)
 {
 }
+static inline void balance_node_cma(int nid, struct cma *cma)
+{
+}
+static inline void balance_cma_zonelist(struct zonelist *zonelist, int nr_pages)
+{
+}
 #endif
 
 
-- 
2.47.3

