Message-ID: <2025080744-buckskin-triumph-2f79@gregkh>
Date: Thu, 7 Aug 2025 15:24:09 +0100
From: Greg KH <gregkh@...uxfoundation.org>
To: liuqiqi@...inos.cn
Cc: cve@...nel.org, linux-cve-announce@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: CVE-2024-57884  patch review feedback
 (https://lore.kernel.org/linux-cve-announce/2025011510-CVE-2024-57884-4cf8@...gkh/#R)

On Thu, Aug 07, 2025 at 09:05:15PM +0800, liuqiqi@...inos.cn wrote:
> The CVE-2024-57884 fix, "mm: vmscan: account for free pages to prevent infinite Loop in throttle_direct_reclaim()", modifies zone_reclaimable_pages() as follows:
> @@ -342,7 +342,14 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
>  	if (get_nr_swap_pages() > 0)
>  		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
>  			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
> -
> +	/*
> +	 * If there are no reclaimable file-backed or anonymous pages,
> +	 * ensure zones with sufficient free pages are not skipped.
> +	 * This prevents zones like DMA32 from being ignored in reclaim
> +	 * scenarios where they can still help alleviate memory pressure.
> +	 */
> +	if (nr == 0)
> +		nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);
>  	return nr;
>  }
> However, should_reclaim_retry() calls zone_reclaimable_pages() and then adds NR_FREE_PAGES on top of it. When nr is 0, the zone's free pages are therefore counted twice, which seems to make the availability estimate inaccurate, right?
> static inline bool
> should_reclaim_retry(gfp_t gfp_mask, unsigned order,
> 		     struct alloc_context *ac, int alloc_flags,
> 		     bool did_some_progress, int *no_progress_loops)
> {
> ......
> 
> 		available = reclaimable = zone_reclaimable_pages(zone);
> 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> 
> 		/*
> 		 * Would the allocation succeed if we reclaimed all
> 		 * reclaimable pages?
> 		 */
> 		wmark = __zone_watermark_ok(zone, order, min_wmark,
> 				ac->highest_zoneidx, alloc_flags, available);
> 
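> 
> To make the double count concrete, here is a minimal userspace sketch of
> the arithmetic; the fake_* counters and the value 4096 are hypothetical
> stand-ins for the kernel's per-zone state, not real kernel APIs:
> 
> #include <stdio.h>
> 
> /* Hypothetical counters for a zone with nothing left to reclaim. */
> static unsigned long fake_file_pages = 0;    /* NR_ZONE_{IN,}ACTIVE_FILE */
> static unsigned long fake_anon_pages = 0;    /* NR_ZONE_{IN,}ACTIVE_ANON */
> static unsigned long fake_free_pages = 4096; /* NR_FREE_PAGES snapshot   */
> 
> /* Mirrors zone_reclaimable_pages() after the CVE-2024-57884 fix. */
> static unsigned long fake_zone_reclaimable_pages(void)
> {
> 	unsigned long nr = fake_file_pages + fake_anon_pages;
> 
> 	if (nr == 0)
> 		nr = fake_free_pages;	/* fall back to free pages */
> 	return nr;
> }
> 
> int main(void)
> {
> 	/* Mirrors the availability estimate in should_reclaim_retry(). */
> 	unsigned long available = fake_zone_reclaimable_pages();
> 
> 	available += fake_free_pages;	/* free pages added a second time */
> 
> 	printf("free pages          : %lu\n", fake_free_pages);
> 	printf("estimated available : %lu\n", available);	/* prints 8192 */
> 	return 0;
> }
> 
> The watermark check is then asked whether the allocation could succeed
> with 8192 pages, even though the zone only has 4096 free pages and
> nothing reclaimable.
> 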
> The compaction_zonelist_suitable() function has the same problem:
> bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
> 		int alloc_flags)
> {
> ......
> 		available = zone_reclaimable_pages(zone) / order;
> 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> 		if (__compaction_suitable(zone, order, min_wmark_pages(zone),
> 					  ac->highest_zoneidx, available))
> 
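> 
> With the same hypothetical numbers (4096 free pages, order = 3), the
> compaction estimate becomes 4096/3 + 4096 = 5461 pages, so the zone's
> free pages again enter the calculation twice: once through
> zone_reclaimable_pages() and once through the explicit NR_FREE_PAGES
> term.
> 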
> If this is indeed a problem, could allow_direct_reclaim() be modified as follows:
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6417,7 +6417,7 @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
>                 return true;
>  
>         for_each_managed_zone_pgdat(zone, pgdat, i, ZONE_NORMAL) {
> -               if (!zone_reclaimable_pages(zone))
> +               if (!zone_reclaimable_pages(zone) || !(zone_page_state_snapshot(zone, NR_FREE_PAGES)))
>                         continue;
> 
> Signed-off-by: liuqiqi <liuqiqi@...inos.cn>

I have no idea what you are asking about or wishing to see change.
Please read the kernel documentation for how to send a proper patch.

thanks,

greg k-h
