Message-ID: <20241020051315.356103-1-yuzhao@google.com>
Date: Sat, 19 Oct 2024 23:13:15 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Yu Zhao <yuzhao@...gle.com>, Link Lin <linkl@...gle.com>
Subject: [PATCH mm-unstable v1] mm/page_alloc: try not to overestimate free highatomic

OOM kills due to vastly overestimated free highatomic reserves were
observed:

  ... invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0 ...
  Node 0 Normal free:1482936kB boost:0kB min:410416kB low:739404kB high:1068392kB reserved_highatomic:1073152KB ...
  Node 0 Normal: 1292*4kB (ME) 1920*8kB (E) 383*16kB (UE) 220*32kB (ME) 340*64kB (E) 2155*128kB (UE) 3243*256kB (UE) 615*512kB (U) 1*1024kB (M) 0*2048kB 0*4096kB = 1477408kB

The second line above shows that the OOM kill was due to the following
condition:

  free (1482936kB) - reserved_highatomic (1073152kB) = 409784KB < min (410416kB)

And the third line shows there were no free pages in any
MIGRATE_HIGHATOMIC pageblocks, which otherwise would show up as type
'H'. Therefore __zone_watermark_unusable_free() overestimated free
highatomic reserves. IOW, it underestimated the usable free memory by
over 1GB, which resulted in the unnecessary OOM kill.
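
To make the gap concrete, the numbers above can be plugged into a
minimal userspace sketch (this is not the kernel code; the comparison
is a simplification of the real watermark check, which also accounts
for the allocation order and lowmem reserves):

  /* Illustration only: reproduces the failing check with the reported numbers. */
  #include <stdbool.h>
  #include <stdio.h>

  int main(void)
  {
          long free_kb = 1482936;                 /* Node 0 Normal free: */
          long reserved_highatomic_kb = 1073152;  /* reserved_highatomic: */
          long min_kb = 410416;                   /* min watermark */

          /* current estimate: treat the whole highatomic reserve as unusable */
          long usable_kb = free_kb - reserved_highatomic_kb;

          bool ok = usable_kb > min_kb;
          printf("usable=%ldkB min=%ldkB -> %s\n", usable_kb, min_kb,
                 ok ? "above min" : "below min, OOM path");
          return 0;
  }
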
The estimate can be made less crude by quickly checking whether any of
the highatomic reserves can actually be free. If none can be, do not
deduct the entire highatomic reserve when calculating the usable free
memory; otherwise, cap the deduction at an upper bound of what can be
free.
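
For the report above, the capped estimate would drop the deduction to
zero, so the zone would no longer be considered below its min
watermark. A rough userspace sketch of that calculation (not the
kernel code; the per-order counts come from the buddy dump, and the
highatomic free lists are assumed empty since no free list shows type
'H'):

  /* Illustration only: the capped deduction for the reported zone state. */
  #include <stdio.h>

  #define MAX_PAGE_ORDER 10

  int main(void)
  {
          /* reserved_highatomic:1073152KB in 4KiB pages */
          unsigned long reserved = 1073152 / 4;
          /* nr_free per order from the buddy list dump */
          unsigned long nr_free[MAX_PAGE_ORDER + 1] = {
                  1292, 1920, 383, 220, 340, 2155, 3243, 615, 1, 0, 0 };
          /* 1 if that order's MIGRATE_HIGHATOMIC list is non-empty ('H') */
          int highatomic_list_nonempty[MAX_PAGE_ORDER + 1] = { 0 };
          unsigned long free = 0;

          for (int order = 0; order <= MAX_PAGE_ORDER; order++)
                  if (highatomic_list_nonempty[order])
                          free += nr_free[order] << order;

          /* deduct at most min(reserved, free) instead of all of reserved */
          printf("deduct %lu pages instead of %lu\n",
                 free < reserved ? free : reserved, reserved);
          return 0;
  }
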
Reported-by: Link Lin <linkl@...gle.com>
Signed-off-by: Yu Zhao <yuzhao@...gle.com>
---
mm/page_alloc.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bc55d39eb372..ee1ce19925ad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3110,6 +3110,25 @@ struct page *rmqueue(struct zone *preferred_zone,
return page;
}

+static unsigned long get_max_free_highatomic(struct zone *zone)
+{
+ int order;
+ unsigned long free = 0;
+ unsigned long reserved = zone->nr_reserved_highatomic;
+
+ if (reserved <= pageblock_nr_pages)
+ return reserved;
+
+ for (order = 0; order <= MAX_PAGE_ORDER; order++) {
+ struct free_area *area = &zone->free_area[order];
+
+ if (!list_empty(&area->free_list[MIGRATE_HIGHATOMIC]))
+ free += READ_ONCE(area->nr_free) << order;
+ }
+
+ return min(reserved, free);
+}
+
static inline long __zone_watermark_unusable_free(struct zone *z,
unsigned int order, unsigned int alloc_flags)
{
@@ -3117,11 +3136,11 @@ static inline long __zone_watermark_unusable_free(struct zone *z,

/*
* If the caller does not have rights to reserves below the min
- * watermark then subtract the high-atomic reserves. This will
- * over-estimate the size of the atomic reserve but it avoids a search.
+ * watermark then subtract the high-atomic reserves. This can
+ * overestimate the size of free high-atomic reserves.
*/
if (likely(!(alloc_flags & ALLOC_RESERVES)))
- unusable_free += z->nr_reserved_highatomic;
+ unusable_free += get_max_free_highatomic(z);

#ifdef CONFIG_CMA
/* If allocation can't use CMA areas don't use free CMA pages */
--
2.47.0.rc1.288.g06298d1525-goog