Message-ID: <20251120030709.2933665-1-balbirs@nvidia.com>
Date: Thu, 20 Nov 2025 14:07:09 +1100
From: Balbir Singh <balbirs@...dia.com>
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
dri-devel@...ts.freedesktop.org
Cc: Balbir Singh <balbirs@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Zi Yan <ziy@...dia.com>,
Joshua Hahn <joshua.hahnjy@...il.com>,
Rakie Kim <rakie.kim@...com>,
Byungchul Park <byungchul@...com>,
Gregory Price <gourry@...rry.net>,
Ying Huang <ying.huang@...ux.alibaba.com>,
Alistair Popple <apopple@...dia.com>,
Oscar Salvador <osalvador@...e.de>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>,
Lyude Paul <lyude@...hat.com>,
Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Ralph Campbell <rcampbell@...dia.com>,
Mika Penttilä <mpenttil@...hat.com>,
Matthew Brost <matthew.brost@...el.com>,
Francois Dugast <francois.dugast@...el.com>
Subject: [PATCH] fixup: mm/huge_memory.c: introduce folio_split_unmapped
Code refactoring of __folio_split() via the helper
__folio_freeze_and_split_unmapped() caused a regression with clang-20
and CONFIG_SHMEM=n: due to the changes around nr_shmem_dropped, the
compiler was no longer able to optimize away the call to
shmem_uncharge().

Fix this by checking shmem_mapping() prior to calling shmem_uncharge();
shmem_mapping() returns false when CONFIG_SHMEM=n, so the guarded call
becomes dead code that the compiler can eliminate.
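
As an aside, below is a minimal userspace sketch of the pattern the fix
relies on (fake_shmem_mapping() and fake_shmem_uncharge() are made-up
stand-ins, not the kernel API): because the predicate is a constant-false
inline when the feature is configured out, the guarded call is provably
dead code and the compiler is free to drop it.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the CONFIG_SHMEM=n case: a constant-false inline predicate. */
static inline bool fake_shmem_mapping(const void *mapping)
{
	(void)mapping;
	return false;
}

/* Stand-in for shmem_uncharge(); the guard in main() lets the compiler
 * prove this call is never reached. */
static void fake_shmem_uncharge(const void *inode, int nr_pages)
{
	printf("uncharge %d pages for inode %p\n", nr_pages, inode);
}

int main(void)
{
	const void *mapping = NULL;
	int nr_shmem_dropped = 4;

	/*
	 * Same shape as the fix in __folio_split(): with the predicate always
	 * false, the whole branch is dead and the uncharge call can be
	 * eliminated at compile time.
	 */
	if (mapping && fake_shmem_mapping(mapping) && nr_shmem_dropped)
		fake_shmem_uncharge(mapping, nr_shmem_dropped);

	return 0;
}
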
smatch also complained about the parameter end being used without
initialization. This is a false positive, but keep the tool happy by
passing in initialized parameters: end is initialized to 0.
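
For illustration, a small userspace sketch of that shape (hypothetical
fake_split_helper() and struct fake_mapping, not the kernel code): the
local gets a benign value at its declaration, and the helper asserts the
invariant that end is only non-zero when a mapping is present, much like
the VM_WARN_ON_ONCE() added in the diff below.

#include <assert.h>
#include <stddef.h>

struct fake_mapping;	/* stand-in for struct address_space */

/*
 * Hypothetical helper: 'end' is only meaningful when 'mapping' is non-NULL,
 * and the assert documents that invariant.
 */
static int fake_split_helper(struct fake_mapping *mapping, unsigned long end)
{
	assert(mapping || end == 0);
	/* ... split work would go here ... */
	return 0;
}

int main(void)
{
	struct fake_mapping *mapping = NULL;	/* anonymous case: no mapping */
	unsigned long end = 0;	/* defined value even on the anonymous path */

	if (mapping) {
		/* end would be computed from the mapping here */
	}

	return fake_split_helper(mapping, end);
}
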
Also add detailed documentation comments for folio_split_unmapped().

Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...hat.com>
Cc: Zi Yan <ziy@...dia.com>
Cc: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: Rakie Kim <rakie.kim@...com>
Cc: Byungchul Park <byungchul@...com>
Cc: Gregory Price <gourry@...rry.net>
Cc: Ying Huang <ying.huang@...ux.alibaba.com>
Cc: Alistair Popple <apopple@...dia.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
Cc: Nico Pache <npache@...hat.com>
Cc: Ryan Roberts <ryan.roberts@....com>
Cc: Dev Jain <dev.jain@....com>
Cc: Barry Song <baohua@...nel.org>
Cc: Lyude Paul <lyude@...hat.com>
Cc: Danilo Krummrich <dakr@...nel.org>
Cc: David Airlie <airlied@...il.com>
Cc: Simona Vetter <simona@...ll.ch>
Cc: Ralph Campbell <rcampbell@...dia.com>
Cc: Mika Penttilä <mpenttil@...hat.com>
Cc: Matthew Brost <matthew.brost@...el.com>
Cc: Francois Dugast <francois.dugast@...el.com>
Signed-off-by: Balbir Singh <balbirs@...dia.com>
---
mm/huge_memory.c | 32 ++++++++++++++++++++++----------
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78a31a476ad3..c4267a0f74df 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3751,6 +3751,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
int ret = 0;
struct deferred_split *ds_queue;
+ VM_WARN_ON_ONCE(!mapping && end != 0);
/* Prevent deferred_split_scan() touching ->_refcount */
ds_queue = folio_split_queue_lock(folio);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
@@ -3919,7 +3920,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
int nr_shmem_dropped = 0;
int remap_flags = 0;
int extra_pins, ret;
- pgoff_t end;
+ pgoff_t end = 0;
bool is_hzp;
VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
@@ -4049,7 +4050,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
local_irq_enable();
- if (nr_shmem_dropped)
+ if (mapping && shmem_mapping(mapping) && nr_shmem_dropped)
shmem_uncharge(mapping->host, nr_shmem_dropped);
if (!ret && is_anon && !folio_is_device_private(folio))
@@ -4092,16 +4093,27 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
return ret;
}
-/*
- * This function is a helper for splitting folios that have already been unmapped.
- * The use case is that the device or the CPU can refuse to migrate THP pages in
- * the middle of migration, due to allocation issues on either side
+/**
+ * folio_split_unmapped() - split a large anon folio that is already unmapped
+ * @folio: folio to split
+ * @new_order: the order of folios after split
+ *
+ * This function is a helper for splitting folios that have already been
+ * unmapped. The use case is that the device or the CPU can refuse to migrate
+ * THP pages in the middle of migration, due to allocation issues on either
+ * side.
+ *
+ * anon_vma_lock is not required to be held, but mmap_read_lock() or
+ * mmap_write_lock() should be held. @folio is expected to be locked by the
+ * caller. Device-private and non-device-private folios are supported, along
+ * with folios that are in the swapcache. @folio should also be unmapped and
+ * isolated from the LRU (if applicable).
*
- * The high level code is copied from __folio_split, since the pages are anonymous
- * and are already isolated from the LRU, the code has been simplified to not
- * burden __folio_split with unmapped sprinkled into the code.
+ * Upon return, the folio is not remapped, split folios are not added to LRU,
+ * free_folio_and_swap_cache() is not called, and new folios remain locked.
*
- * None of the split folios are unlocked
+ * Return: 0 on success, -EAGAIN if the folio cannot be split (e.g., due to
+ * insufficient reference count or extra pins).
*/
int folio_split_unmapped(struct folio *folio, unsigned int new_order)
{
--
2.51.1