Message-ID: <010f0f19-8b87-4537-8c0b-bc8f9263aab4@nvidia.com>
Date: Thu, 20 Nov 2025 15:28:33 +1100
From: Balbir Singh <balbirs@...dia.com>
To: Zi Yan <ziy@...dia.com>, David Hildenbrand <david@...nel.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, Lance Yang <lance.yang@...ux.dev>,
Miaohe Lin <linmiaohe@...wei.com>, Naoya Horiguchi
<nao.horiguchi@...il.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/3] mm/huge_memory: prevent NULL pointer dereference
in try_folio_split_to_order()
On 11/20/25 14:59, Zi Yan wrote:
> folio_split_supported() used in try_folio_split_to_order() requires
> folio->mapping to be non-NULL, but the current try_folio_split_to_order()
> does not check it. Add the check to prevent a NULL pointer dereference.
>
> There is no issue in the current code, since try_folio_split_to_order() is
> only used in truncate_inode_partial_folio(), where folio->mapping is not
> NULL.
>
Just reading through the description, one thing is not clear to me:
what is the race between a folio being truncated and an attempt to
split it? Is there a common lock that needs to be held, and is it the
subsequent call in truncate_inode_partial_folio() that causes the race?
IOW, if a folio is not anonymous and does not have a mapping, how does
it end up being passed to this function?
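
For my own understanding, here is a minimal sketch of what I think the
split side has to do (hypothetical code, the wrapper and its name are
mine, not from the patch):

	static int example_split(struct folio *folio, unsigned int new_order)
	{
		int ret;

		folio_lock(folio);
		/*
		 * After a concurrent truncation, folio->mapping is NULL
		 * for a pagecache folio; bail out instead of passing it
		 * further down the split path.
		 */
		if (!folio_test_anon(folio) && !folio->mapping)
			ret = -EBUSY;
		else
			ret = try_folio_split_to_order(folio, &folio->page,
						       new_order);
		folio_unlock(folio);
		return ret;
	}

i.e. is the folio lock the common lock here, or can folio->mapping go
NULL even while the lock is held?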
> Signed-off-by: Zi Yan <ziy@...dia.com>
> ---
> include/linux/huge_mm.h | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 1d439de1ca2c..0d55354e3a34 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -407,6 +407,13 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
> static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> + /*
> + * Folios that just got truncated cannot get split. Signal to the
> + * caller that there was a race.
> + */
> + if (!folio_test_anon(folio) && !folio->mapping)
> + return -EBUSY;
> +
> if (!folio_split_supported(folio, new_order, SPLIT_TYPE_NON_UNIFORM, /* warns= */ false))
> return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
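
The -EBUSY return itself looks fine to me; I assume the caller treats
it like any other split failure, along these lines (a sketch only, the
variable names are assumed, this is not the actual
truncate_inode_partial_folio() code):

	/* hypothetical caller; folio is locked at this point */
	if (try_folio_split_to_order(folio, split_at, new_order)) {
		/*
		 * Split failed (e.g. -EBUSY because the folio was
		 * truncated under us); leave the folio whole and let
		 * the normal truncation path handle it.
		 */
	}

Does that match the intent?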