Message-ID: <20250904095129.222316-1-kernel@pankajraghav.com>
Date: Thu, 4 Sep 2025 11:51:29 +0200
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: Zi Yan <ziy@...dia.com>,
Ryan Roberts <ryan.roberts@....com>,
David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Barry Song <baohua@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Nico Pache <npache@...hat.com>,
Dev Jain <dev.jain@....com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>
Cc: linux-kernel@...r.kernel.org,
kernel@...kajraghav.com,
willy@...radead.org,
linux-mm@...ck.org,
mcgrof@...nel.org,
gost.dev@...sung.com,
Pankaj Raghav <p.raghav@...sung.com>
Subject: [PATCH v2] huge_memory: return -EINVAL in folio split functions when THP is disabled
From: Pankaj Raghav <p.raghav@...sung.com>

split_huge_page_to_list_to_order(), split_huge_page(),
split_folio_to_list() and try_folio_split() return 0 on success and an
error code on failure. When THP is disabled, these stubs return 0,
indicating success, even though an error code should be returned: it is
not possible to split a folio when THP is disabled.

Make all of these functions return -EINVAL to indicate failure instead
of 0. As large folios depend on CONFIG_TRANSPARENT_HUGEPAGE, also issue
a warning, since these functions should never be called without a large
folio.

Signed-off-by: Pankaj Raghav <p.raghav@...sung.com>
---
This issue was discovered while experimenting with enabling large
folios without THP: returning 0 from these functions resulted in
undefined behavior in truncate operations. This change fixes the issue.
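
As an illustration (the caller below is hypothetical, not code taken
from mm/truncate.c), a stub that reports success can make a caller
proceed as if the folio had really been split:

	/* Hypothetical caller, for illustration only. */
	static bool shrink_large_folio(struct folio *folio)
	{
		/*
		 * With THP disabled, the old stub returned 0, so this
		 * branch was taken even though no split happened.
		 */
		if (!split_folio(folio))
			return true;	/* assumes order-0 pages from here on */
		return false;
	}
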
include/linux/huge_mm.h | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 29ef70022da1..23f124493c47 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -588,22 +588,30 @@ static inline int
split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
unsigned int new_order)
{
- return 0;
+ struct folio *folio = page_folio(page);
+
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
}
static inline int split_huge_page(struct page *page)
{
- return 0;
+ struct folio *folio = page_folio(page);
+
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
}
static inline int split_folio_to_list(struct folio *folio, struct list_head *list)
{
- return 0;
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
}
static inline int try_folio_split(struct folio *folio, struct page *page,
struct list_head *list)
{
- return 0;
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
}
static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
base-commit: 291634ccfd2820c09f6e8c4982c2dee8155d09ae
--
2.50.1