Message-ID: <1a33fe3e-b0dd-4553-95b4-89619b9229d2@arm.com>
Date: Fri, 30 Jan 2026 13:40:54 +0530
From: Dev Jain <dev.jain@....com>
To: Shakeel Butt <shakeel.butt@...ux.dev>,
 Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Rik van Riel <riel@...riel.com>,
 Song Liu <songliubraving@...com>, Kiryl Shutsemau <kas@...nel.org>,
 Usama Arif <usamaarif642@...il.com>, David Hildenbrand <david@...nel.org>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Zi Yan <ziy@...dia.com>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>,
 "Liam R . Howlett" <Liam.Howlett@...cle.com>, Nico Pache
 <npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
 Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>,
 Matthew Wilcox <willy@...radead.org>, Meta kernel team
 <kernel-team@...a.com>, linux-mm@...ck.org, cgroups@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: khugepaged: fix NR_FILE_PAGES and NR_SHMEM in
 collapse_file()


On 30/01/26 9:59 am, Shakeel Butt wrote:
> In Meta's fleet, we observed high-level cgroups showing zero file memcg
> stats while their descendants had non-zero values. Investigation using
> drgn revealed that these parent cgroups actually had negative file stats,
> aggregated from their children.
>
> This issue became more frequent after deploying thp-always more widely,
> pointing to a correlation with THP file collapsing. The root cause is
> that collapse_file() assumes old folios and the new THP belong to the
> same node and memcg. When this assumption breaks, stats become skewed.
> The bug affects not just memcg stats but also per-numa stats, and not
> just NR_FILE_PAGES but also NR_SHMEM.
>
> The assumption breaks in scenarios such as:
>
> 1. Small folios allocated on one node while the THP gets allocated on a
>    different node.
>
> 2. A package downloader running in one cgroup populates the page cache,
>    while a job in a different cgroup executes the downloaded binary.
>
> 3. A file shared between processes in different cgroups, where one
>    process faults in the pages and khugepaged (or madvise(MADV_COLLAPSE))
>    collapses them on behalf of the other.
>
> Fix the accounting by explicitly incrementing stats for the new THP and
> decrementing stats for the old folios being replaced.
>
> Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
> ---

Thanks.

Reviewed-by: Dev Jain <dev.jain@....com>

>  mm/khugepaged.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1d994b6c58c6..fa1e57fd2c46 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2195,16 +2195,13 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>  		xas_lock_irq(&xas);
>  	}
>  
> -	if (is_shmem)
> +	if (is_shmem) {
> +		lruvec_stat_mod_folio(new_folio, NR_SHMEM, HPAGE_PMD_NR);
>  		lruvec_stat_mod_folio(new_folio, NR_SHMEM_THPS, HPAGE_PMD_NR);
> -	else
> +	} else {
>  		lruvec_stat_mod_folio(new_folio, NR_FILE_THPS, HPAGE_PMD_NR);
> -
> -	if (nr_none) {
> -		lruvec_stat_mod_folio(new_folio, NR_FILE_PAGES, nr_none);
> -		/* nr_none is always 0 for non-shmem. */
> -		lruvec_stat_mod_folio(new_folio, NR_SHMEM, nr_none);
>  	}
> +	lruvec_stat_mod_folio(new_folio, NR_FILE_PAGES, HPAGE_PMD_NR);
>  
>  	/*
>  	 * Mark new_folio as uptodate before inserting it into the
> @@ -2238,6 +2235,11 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>  	 */
>  	list_for_each_entry_safe(folio, tmp, &pagelist, lru) {
>  		list_del(&folio->lru);
> +		lruvec_stat_mod_folio(folio, NR_FILE_PAGES,
> +				      -folio_nr_pages(folio));
> +		if (is_shmem)
> +			lruvec_stat_mod_folio(folio, NR_SHMEM,
> +					      -folio_nr_pages(folio));

I notice that we don't need to adjust NR_SHMEM_THPS or NR_FILE_THPS here.
However, the following check in the khugepaged code:

if (folio_order(folio) == HPAGE_PMD_ORDER && folio->index == start)

seems to suggest that this stat accounting path can be reached with a PMD-order
old folio when folio->index != start. That case should not be possible: a folio
is always aligned to its own order within the file, so a PMD-order folio's index
is PMD-aligned, and collapse_file() asserts at entry that start is PMD-aligned
too (guaranteed by thp_vma_allowable_order() in khugepaged_scan_mm_slot()).
Since the old folio lies within the single PMD-sized range being collapsed,
start must equal folio->index.

Unless I am missing something here, I'll send a patch converting this check to
a VM_WARN_ON.
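
For illustration only, a rough sketch of what that conversion might look like
(hypothetical, not the actual patch; exact placement and surrounding code in
collapse_file() may differ):

	if (folio_order(folio) == HPAGE_PMD_ORDER) {
		/* A PMD-order old folio in this range must start at 'start'. */
		VM_WARN_ON(folio->index != start);
		/* Maybe PMD-mapped */
		result = SCAN_PTE_MAPPED_HUGEPAGE;
		...
	}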
 

>  		folio->mapping = NULL;
>  		folio_clear_active(folio);
>  		folio_clear_unevictable(folio);
