Message-Id: <20250807125413.1443050-1-liuqiqi@kylinos.cn>
Date: Thu,  7 Aug 2025 20:54:13 +0800
From: liuqiqi@...inos.cn
To: gregkh@...uxfoundation.org
Cc: cve@...nel.org,
	linux-cve-announce@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	liuqiqi <liuqiqi@...inos.cn>
Subject: CVE-2024-57884  patch review feedback (https://lore.kernel.org/linux-cve-announce/2025011510-CVE-2024-57884-4cf8@...gkh/#R)

should_reclaim_retry() already accounts for free pages when estimating whether further reclaim can make the allocation succeed:
		if (cpusets_enabled() &&
			(alloc_flags & ALLOC_CPUSET) &&
			!__cpuset_zone_allowed(zone, gfp_mask))
				continue;

		available = reclaimable = zone_reclaimable_pages(zone);
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);

		/*
		 * Would the allocation succeed if we reclaimed all
		 * reclaimable pages?
		 */
		wmark = __zone_watermark_ok(zone, order, min_wmark,
				ac->highest_zoneidx, alloc_flags, available);

The compaction_zonelist_suitable() function has the same problem:
bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
		int alloc_flags)
{
	struct zone *zone;
	struct zoneref *z;

	/*
	 * Make sure at least one zone would pass __compaction_suitable if we continue
	 * retrying the reclaim.
	 */
	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
				ac->highest_zoneidx, ac->nodemask) {
		unsigned long available;

		/*
		 * Do not consider all the reclaimable memory because we do not
		 * want to trash just for a single high order allocation which
		 * is even not guaranteed to appear even if __compaction_suitable
		 * is happy about the watermark check.
		 */
		available = zone_reclaimable_pages(zone) / order;
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
		if (__compaction_suitable(zone, order, min_wmark_pages(zone),
					  ac->highest_zoneidx, available))

If this is indeed a problem, could it be modified as follows:
diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6417,7 +6417,7 @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
                return true;
 
        for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL) {
-               if (!zone_reclaimable_pages(zone))
+               if (!zone_reclaimable_pages(zone) || !(zone_page_state_snapshot(zone, NR_FREE_PAGES)))
                        continue;

Signed-off-by: liuqiqi <liuqiqi@...inos.cn>
