Message-ID: <65f4dd0b-2bc2-4345-86c2-630a91fcfa39@linux.alibaba.com>
Date: Wed, 22 Oct 2025 09:25:09 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Kairui Song <ryncsn@...il.com>, linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>, Hugh Dickins
 <hughd@...gle.com>, Dev Jain <dev.jain@....com>,
 David Hildenbrand <david@...hat.com>, Barry Song <baohua@...nel.org>,
 Liam Howlett <liam.howlett@...cle.com>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 Mariano Pache <npache@...hat.com>, Matthew Wilcox <willy@...radead.org>,
 Ryan Roberts <ryan.roberts@....com>, Zi Yan <ziy@...dia.com>,
 linux-kernel@...r.kernel.org, Kairui Song <kasong@...cent.com>,
 stable@...r.kernel.org
Subject: Re: [PATCH] mm/shmem: fix THP allocation size check and fallback



On 2025/10/22 03:04, Kairui Song wrote:
> From: Kairui Song <kasong@...cent.com>
> 
> There are some problems in the THP fallback code. suitable_orders could
> be zero, and calling highest_order() on a zero value returns an
> overflowed size. The order check loop also updates the index on every
> iteration, which may leave the index aligned to a larger value while
> the loop shrinks the order.

No, this is not true. Although ‘suitable_orders’ might be 0, the code
will never enter the ‘while (suitable_orders)’ loop in that case, so
‘order’ is never used (this is also how the highest_order() function is
used in other places).
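
To illustrate why the zero case is harmless, here is a small user-space
model (an assumption-laden sketch, not kernel code; highest_order() and
next_order() are modelled after the bit helpers in mm/huge_mm.h):

#include <stdio.h>

/* Model of the helpers; assumes fls_long()-style semantics. */
static int highest_order(unsigned long orders)
{
	/* index of the highest set bit; -1 (a bogus order) when orders == 0 */
	return orders ? (int)(8 * sizeof(orders)) - 1 - __builtin_clzl(orders) : -1;
}

static int next_order(unsigned long *orders, int prev)
{
	*orders &= ~(1UL << prev);	/* drop the order we just tried */
	return highest_order(*orders);
}

int main(void)
{
	unsigned long suitable_orders = 0;		/* the case in question */
	int order = highest_order(suitable_orders);	/* bogus (-1), but... */
	int iterations = 0;

	while (suitable_orders) {	/* ...never entered when orders == 0 */
		iterations++;
		order = next_order(&suitable_orders, order);
	}
	printf("order = %d, loop iterations = %d\n", order, iterations);
	return 0;
}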

> And it forgot to try order
> 0 after the final loop.

This is also not true. We will fall back to an order-0 allocation in
shmem_get_folio_gfp() if the large-order allocation fails.
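
For context, the caller looks roughly like this (a simplified sketch
from memory, not the exact upstream shmem_get_folio_gfp() code; gfp
handling and error paths are omitted):

	if (orders > 0) {
		folio = shmem_alloc_and_add_folio(vmf, huge_gfp, inode,
						  index, fault_mm, orders);
		if (!IS_ERR(folio))
			goto alloced;		/* large folio succeeded */
		if (PTR_ERR(folio) == -EEXIST)
			goto repeat;		/* lost a race, retry lookup */
	}
	/* large order failed (e.g. -ENOMEM): retry with a single page */
	folio = shmem_alloc_and_add_folio(vmf, gfp, inode, index, fault_mm, 0);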

> This is usually fine because shmem_add_to_page_cache ensures the shmem
> mapping is still sane, but it might cause many potential issues, like
> allocating random folios at random positions in the map or returning
> -ENOMEM by accident. This triggered some strange userspace errors [1],
> and shouldn't have happened in the first place.

I tested tmpfs with swap on ZRAM in the mm-new branch, and no issues
were encountered. However, it is worth mentioning that, after commit
69e0a3b49003 ("mm: shmem: fix the strategy for the tmpfs 'huge='
options"), tmpfs may consume more memory (because we no longer allocate
large folios based on the write size), which might cause your test case
to run out of memory (OOM) and trigger some errors. You may need to
adjust your swap size or memcg limit.

> Cc: stable@...r.kernel.org
> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Kairui Song <kasong@...cent.com>
> ---
>   mm/shmem.c | 26 +++++++++++++++-----------
>   1 file changed, 15 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index b50ce7dbc84a..25303711f123 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1824,6 +1824,9 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
>   	unsigned long pages;
>   	int order;
>   
> +	if (!orders)
> +		return 0;
> +
>   	if (vma) {
>   		orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>   		if (!orders)
> @@ -1888,27 +1891,28 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>   	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
>   		orders = 0;
>   
> -	if (orders > 0) {
> -		suitable_orders = shmem_suitable_orders(inode, vmf,
> -							mapping, index, orders);
> +	suitable_orders = shmem_suitable_orders(inode, vmf,
> +						mapping, index, orders);
>   
> +	if (suitable_orders) {
>   		order = highest_order(suitable_orders);
> -		while (suitable_orders) {
> +		do {
>   			pages = 1UL << order;
> -			index = round_down(index, pages);
> -			folio = shmem_alloc_folio(gfp, order, info, index);
> -			if (folio)
> +			folio = shmem_alloc_folio(gfp, order, info, round_down(index, pages));
> +			if (folio) {
> +				index = round_down(index, pages);
>   				goto allocated;
> +			}
>   
>   			if (pages == HPAGE_PMD_NR)
>   				count_vm_event(THP_FILE_FALLBACK);
>   			count_mthp_stat(order, MTHP_STAT_SHMEM_FALLBACK);
>   			order = next_order(&suitable_orders, order);
> -		}
> -	} else {
> -		pages = 1;
> -		folio = shmem_alloc_folio(gfp, 0, info, index);
> +		} while (suitable_orders);
>   	}
> +
> +	pages = 1;
> +	folio = shmem_alloc_folio(gfp, 0, info, index);
>   	if (!folio)
>   		return ERR_PTR(-ENOMEM);
>   

