Message-ID: <20251016073154.6vfydmo6lnvgyuzz@master>
Date: Thu, 16 Oct 2025 07:31:54 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Zi Yan <ziy@...dia.com>
Cc: linmiaohe@...wei.com, david@...hat.com, jane.chu@...cle.com,
	kernel@...kajraghav.com,
	syzbot+e6367ea2fdab6ed46056@...kaller.appspotmail.com,
	syzkaller-bugs@...glegroups.com, akpm@...ux-foundation.org,
	mcgrof@...nel.org, nao.horiguchi@...il.com,
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
	Baolin Wang <baolin.wang@...ux.alibaba.com>,
	"Liam R. Howlett" <Liam.Howlett@...cle.com>,
	Nico Pache <npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
	Dev Jain <dev.jain@....com>, Barry Song <baohua@...nel.org>,
	Lance Yang <lance.yang@...ux.dev>,
	"Matthew Wilcox (Oracle)" <willy@...radead.org>,
	Wei Yang <richard.weiyang@...il.com>, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Pankaj Raghav <p.raghav@...sung.com>
Subject: Re: [PATCH v2 1/3] mm/huge_memory: do not change split_huge_page*()
 target order silently.

On Wed, Oct 15, 2025 at 11:34:50PM -0400, Zi Yan wrote:
>Page cache folios from a file system that supports a large block size (LBS)
>can have a minimal folio order greater than 0, so a high-order folio might
>not be splittable down to order-0. Commit e220917fa507 ("mm: split a
>folio in minimum folio order chunks") bumps the target order of
>split_huge_page*() to the minimum allowed order when splitting an LBS folio.
>This confuses some split_huge_page*() callers like the memory failure
>handling code, since they expect all after-split folios to be order-0 when
>the split succeeds, but in reality get folios of min_order_for_split()
>order.
>
>Fix it by failing a split if the folio cannot be split to the target order.
>Rename try_folio_split() to try_folio_split_to_order() to reflect the added
>new_order parameter. Remove its unused list parameter.
>
>Fixes: e220917fa507 ("mm: split a folio in minimum folio order chunks")
>[The test poisons LBS folios, which cannot be split to order-0 folios, and
>also tries to poison all memory. The non-split LBS folios take more memory
>than the test anticipated, leading to OOM. The patch fixes the kernel
>warning, and the test needs some changes to avoid the OOM.]
>Reported-by: syzbot+e6367ea2fdab6ed46056@...kaller.appspotmail.com
>Closes: https://lore.kernel.org/all/68d2c943.a70a0220.1b52b.02b3.GAE@google.com/
>Signed-off-by: Zi Yan <ziy@...dia.com>
>Reviewed-by: Luis Chamberlain <mcgrof@...nel.org>
>Reviewed-by: Pankaj Raghav <p.raghav@...sung.com>

Do we want to cc stable?
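
Also, just to confirm I read the intent right: the caller expectation being
restored is roughly the below (untested sketch, the helper name is made up
for illustration):

	/* Hypothetical caller, modeled on the memory-failure expectation. */
	static int poison_one_page(struct page *page)
	{
		/*
		 * Before: split_huge_page() could return 0 after splitting an
		 * LBS folio only down to its minimum order, silently breaking
		 * the "page is now order-0" assumption below.
		 *
		 * After this patch: such a split fails instead, so the caller
		 * sees the error and can handle the whole large folio.
		 */
		if (split_huge_page(page))
			return -EBUSY;

		/* Safe again: a successful split guarantees order-0. */
		return 0;
	}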

>---
> include/linux/huge_mm.h | 55 +++++++++++++++++------------------------
> mm/huge_memory.c        |  9 +------
> mm/truncate.c           |  6 +++--
> 3 files changed, 28 insertions(+), 42 deletions(-)
>
>diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>index c4a811958cda..3d9587f40c0b 100644
>--- a/include/linux/huge_mm.h
>+++ b/include/linux/huge_mm.h
>@@ -383,45 +383,30 @@ static inline int split_huge_page_to_list_to_order(struct page *page, struct lis
> }
> 
> /*
>- * try_folio_split - try to split a @folio at @page using non uniform split.
>+ * try_folio_split_to_order - try to split a @folio at @page to @new_order using
>+ * non uniform split.
>  * @folio: folio to be split
>- * @page: split to order-0 at the given page
>- * @list: store the after-split folios
>+ * @page: split to @order at the given page

split to @new_order?

>+ * @new_order: the target split order
>  *
>- * Try to split a @folio at @page using non uniform split to order-0, if
>- * non uniform split is not supported, fall back to uniform split.
>+ * Try to split a @folio at @page using non uniform split to @new_order, if
>+ * non uniform split is not supported, fall back to uniform split. After-split
>+ * folios are put back to LRU list. Use min_order_for_split() to get the lower
>+ * bound of @new_order.

We removed the min_order_for_split() call from this helper, right?
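
If the hint stays, I guess the intended caller pattern is roughly the below
(untested sketch, just to check my understanding):

	int min_order = min_order_for_split(folio);

	if (min_order < 0)
		return min_order;

	/* No silent bumping any more: the caller decides what to do. */
	if (new_order < min_order)
		return -EINVAL;	/* or explicitly pass min_order instead */

	return try_folio_split_to_order(folio, page, new_order);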

>  *
>  * Return: 0: split is successful, otherwise split failed.
>  */
>-static inline int try_folio_split(struct folio *folio, struct page *page,
>-		struct list_head *list)
>+static inline int try_folio_split_to_order(struct folio *folio,
>+		struct page *page, unsigned int new_order)
> {
>-	int ret = min_order_for_split(folio);
>-
>-	if (ret < 0)
>-		return ret;
>-
>-	if (!non_uniform_split_supported(folio, 0, false))
>-		return split_huge_page_to_list_to_order(&folio->page, list,
>-				ret);
>-	return folio_split(folio, ret, page, list);
>+	if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>+		return split_huge_page_to_list_to_order(&folio->page, NULL,
>+				new_order);
>+	return folio_split(folio, new_order, page, NULL);
> }

-- 
Wei Yang
Help you, Help me
