Message-Id: <20250807130515.1445117-1-liuqiqi@kylinos.cn>
Date: Thu, 7 Aug 2025 21:05:15 +0800
From: liuqiqi@...inos.cn
To: gregkh@...uxfoundation.org
Cc: cve@...nel.org,
linux-cve-announce@...r.kernel.org,
linux-kernel@...r.kernel.org,
liuqiqi <liuqiqi@...inos.cn>
Subject: CVE-2024-57884 patch review feedback (https://lore.kernel.org/linux-cve-announce/2025011510-CVE-2024-57884-4cf8@...gkh/#R)
The CVE-2024-57884 fix ("mm: vmscan: account for free pages to prevent infinite Loop in throttle_direct_reclaim()") modifies zone_reclaimable_pages() as follows:
@@ -342,7 +342,14 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
 	if (get_nr_swap_pages() > 0)
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
-
+	/*
+	 * If there are no reclaimable file-backed or anonymous pages,
+	 * ensure zones with sufficient free pages are not skipped.
+	 * This prevents zones like DMA32 from being ignored in reclaim
+	 * scenarios where they can still help alleviate memory pressure.
+	 */
+	if (nr == 0)
+		nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	return nr;
 }
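For context, the hunk above does not show the start of the function. A sketch of zone_reclaimable_pages() with the fix applied, where the leading NR_ZONE_INACTIVE_FILE/NR_ZONE_ACTIVE_FILE lines are taken from mainline rather than from the quoted hunk, looks roughly like this:

unsigned long zone_reclaimable_pages(struct zone *zone)
{
	unsigned long nr;

	/* file-backed pages are always considered reclaimable */
	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
	/* anonymous pages are reclaimable only when swap space is available */
	if (get_nr_swap_pages() > 0)
		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
	/* the CVE-2024-57884 change: fall back to the free page count */
	if (nr == 0)
		nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);

	return nr;
}

With the fix, the return value is no longer purely a count of reclaimable pages; for a zone with nothing to reclaim it is the zone's free page count instead.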
However, should_reclaim_retry() calls zone_reclaimable_pages() and then adds NR_FREE_PAGES separately. When nr is 0, the free pages are therefore counted twice. Doesn't this make the page statistics inaccurate?
static inline bool
should_reclaim_retry(gfp_t gfp_mask, unsigned order,
		     struct alloc_context *ac, int alloc_flags,
		     bool did_some_progress, int *no_progress_loops)
{
	......
		available = reclaimable = zone_reclaimable_pages(zone);
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
		/*
		 * Would the allocation succeed if we reclaimed all
		 * reclaimable pages?
		 */
		wmark = __zone_watermark_ok(zone, order, min_wmark,
				ac->highest_zoneidx, alloc_flags, available);
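To make the double count concrete, consider a hypothetical DMA32 zone with no reclaimable file or anonymous pages and 100000 free pages (the numbers are made up for illustration):

	/* zone_reclaimable_pages(): nr == 0, so it falls back to NR_FREE_PAGES */
	available = reclaimable = zone_reclaimable_pages(zone);		/* 100000 */
	/* should_reclaim_retry() then adds the free pages a second time */
	available += zone_page_state_snapshot(zone, NR_FREE_PAGES);	/* 200000 */

__zone_watermark_ok() is then asked whether the allocation would succeed with roughly twice the memory the zone can actually provide.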
The compaction_zonelist_suitable() function has the same problem:
bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
				  int alloc_flags)
{
	......
		available = zone_reclaimable_pages(zone) / order;
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
		if (__compaction_suitable(zone, order, min_wmark_pages(zone),
					  ac->highest_zoneidx, available))
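The same hypothetical zone (no reclaimable pages, 100000 free pages), with an assumed order of 4, gives:

	available = zone_reclaimable_pages(zone) / order;		/* 100000 / 4 = 25000 */
	available += zone_page_state_snapshot(zone, NR_FREE_PAGES);	/* 25000 + 100000 = 125000 */

so __compaction_suitable() also sees an inflated estimate (125000 pages instead of 100000).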
If this is problematic, could it instead be modified as follows:
diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6417,7 +6417,7 @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
 		return true;
 	for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL) {
-		if (!zone_reclaimable_pages(zone))
+		if (!zone_reclaimable_pages(zone) && !zone_page_state_snapshot(zone, NR_FREE_PAGES))
 			continue;
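For reference, a sketch of the surrounding allow_direct_reclaim() logic as I read it in mainline (abridged) shows how the zones that survive the continue feed the throttling decision:

	for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL) {
		if (!zone_reclaimable_pages(zone))
			continue;

		pfmemalloc_reserve += min_wmark_pages(zone);
		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
	}

	/* If there are no reserves (unexpected config) then do not throttle */
	if (!pfmemalloc_reserve)
		return true;

	wmark_ok = free_pages > pfmemalloc_reserve / 2;

With the skip condition extended as proposed, a zone such as DMA32 that has plenty of free pages but nothing reclaimable would no longer be skipped, so its free pages and watermark would still feed wmark_ok, and zone_reclaimable_pages() could keep its original meaning for the other callers.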
Signed-off-by: liuqiqi <liuqiqi@...inos.cn>