Message-ID: <20251123034901.nqza7nlg57ivobzu@master>
Date: Sun, 23 Nov 2025 03:49:01 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Balbir Singh <balbirs@...dia.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
dri-devel@...ts.freedesktop.org,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, Zi Yan <ziy@...dia.com>,
Joshua Hahn <joshua.hahnjy@...il.com>, Rakie Kim <rakie.kim@...com>,
Byungchul Park <byungchul@...com>,
Gregory Price <gourry@...rry.net>,
Ying Huang <ying.huang@...ux.alibaba.com>,
Alistair Popple <apopple@...dia.com>,
Oscar Salvador <osalvador@...e.de>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Nico Pache <npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>, Barry Song <baohua@...nel.org>,
Lyude Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Ralph Campbell <rcampbell@...dia.com>,
Mika Penttilä <mpenttil@...hat.com>,
Matthew Brost <matthew.brost@...el.com>,
Francois Dugast <francois.dugast@...el.com>
Subject: Re: [PATCH v2] fixup: mm/huge_memory.c: introduce
folio_split_unmapped
On Fri, Nov 21, 2025 at 12:42:32AM +1100, Balbir Singh wrote:
>Refactoring __folio_split() into the helper
>__folio_freeze_and_split_unmapped() caused a regression with clang-20
>when CONFIG_SHMEM=n: because of the changes to nr_shmem_dropped, the
>compiler could no longer optimize away the call to shmem_uncharge().
>Fix this by adding a stub for shmem_uncharge() when CONFIG_SHMEM is
>not defined.
>
>smatch also complained about the parameter end being used without
>initialization. This is a false positive, but keep the tool happy
>by passing in initialized parameters; end is initialized to 0.
>smatch still complains that mapping may be NULL while
>nr_shmem_dropped is non-zero, but that cannot happen either before
>or after these changes.
>
>Add detailed documentation comments for folio_split_unmapped()
>
>Cc: Andrew Morton <akpm@...ux-foundation.org>
>Cc: David Hildenbrand <david@...hat.com>
>Cc: Zi Yan <ziy@...dia.com>
>Cc: Joshua Hahn <joshua.hahnjy@...il.com>
>Cc: Rakie Kim <rakie.kim@...com>
>Cc: Byungchul Park <byungchul@...com>
>Cc: Gregory Price <gourry@...rry.net>
>Cc: Ying Huang <ying.huang@...ux.alibaba.com>
>Cc: Alistair Popple <apopple@...dia.com>
>Cc: Oscar Salvador <osalvador@...e.de>
>Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
>Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
>Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
>Cc: Nico Pache <npache@...hat.com>
>Cc: Ryan Roberts <ryan.roberts@....com>
>Cc: Dev Jain <dev.jain@....com>
>Cc: Barry Song <baohua@...nel.org>
>Cc: Lyude Paul <lyude@...hat.com>
>Cc: Danilo Krummrich <dakr@...nel.org>
>Cc: David Airlie <airlied@...il.com>
>Cc: Simona Vetter <simona@...ll.ch>
>Cc: Ralph Campbell <rcampbell@...dia.com>
>Cc: Mika Penttilä <mpenttil@...hat.com>
>Cc: Matthew Brost <matthew.brost@...el.com>
>Cc: Francois Dugast <francois.dugast@...el.com>
>
>Suggested-by: David Hildenbrand <david@...hat.com>
>Signed-off-by: Balbir Singh <balbirs@...dia.com>
>---
>This fixup should be squashed into the patch "mm/huge_memory.c:
>introduce folio_split_unmapped" in mm/mm-unstable
>
> include/linux/shmem_fs.h | 6 +++++-
> mm/huge_memory.c | 30 +++++++++++++++++++++---------
> 2 files changed, 26 insertions(+), 10 deletions(-)
>
>diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
>index 5b368f9549d6..7a412dd6eb4f 100644
>--- a/include/linux/shmem_fs.h
>+++ b/include/linux/shmem_fs.h
>@@ -136,11 +136,16 @@ static inline bool shmem_hpage_pmd_enabled(void)
>
> #ifdef CONFIG_SHMEM
> extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
>+extern void shmem_uncharge(struct inode *inode, long pages);
> #else
> static inline unsigned long shmem_swap_usage(struct vm_area_struct *vma)
> {
> return 0;
> }
>+
>+static inline void shmem_uncharge(struct inode *inode, long pages)
>+{
>+}
> #endif
> extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
> pgoff_t start, pgoff_t end);
>@@ -194,7 +199,6 @@ static inline pgoff_t shmem_fallocend(struct inode *inode, pgoff_t eof)
> }
>
> extern bool shmem_charge(struct inode *inode, long pages);
>-extern void shmem_uncharge(struct inode *inode, long pages);
>
> #ifdef CONFIG_USERFAULTFD
> #ifdef CONFIG_SHMEM
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index 78a31a476ad3..18c12876f5e8 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -3751,6 +3751,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
> int ret = 0;
> struct deferred_split *ds_queue;
>
>+ VM_WARN_ON_ONCE(!mapping && end);
> /* Prevent deferred_split_scan() touching ->_refcount */
> ds_queue = folio_split_queue_lock(folio);
> if (folio_ref_freeze(folio, 1 + extra_pins)) {
>@@ -3919,7 +3920,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> int nr_shmem_dropped = 0;
> int remap_flags = 0;
> int extra_pins, ret;
>- pgoff_t end;
>+ pgoff_t end = 0;
> bool is_hzp;
>
> VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
>@@ -4092,16 +4093,27 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> return ret;
> }
>
>-/*
>- * This function is a helper for splitting folios that have already been unmapped.
>- * The use case is that the device or the CPU can refuse to migrate THP pages in
>- * the middle of migration, due to allocation issues on either side
>+/**
>+ * folio_split_unmapped() - split a large anon folio that is already unmapped
>+ * @folio: folio to split
>+ * @new_order: the order of folios after split
>+ *
>+ * This function is a helper for splitting folios that have already been
>+ * unmapped. The use case is that the device or the CPU can refuse to migrate
>+ * THP pages in the middle of migration, due to allocation issues on either
>+ * side.
>+ *
>+ * anon_vma_lock is not required to be held; mmap_read_lock() or
>+ * mmap_write_lock() should be held. @folio is expected to be locked by the
I took a look at its caller:

  __migrate_device_pages()
    migrate_vma_split_unmapped_folio()
      folio_split_unmapped()

I don't see where the folio lock is taken.

Would you mind giving me a hint where we take the lock? It seems I missed that.
>+ * caller. Device-private and non-device-private folios are supported, along
>+ * with folios that are in the swapcache. @folio should also be unmapped and
>+ * isolated from the LRU (if applicable).
> *
>- * The high level code is copied from __folio_split, since the pages are anonymous
>- * and are already isolated from the LRU, the code has been simplified to not
>- * burden __folio_split with unmapped sprinkled into the code.
>+ * Upon return, the folio is not remapped, split folios are not added to LRU,
>+ * free_folio_and_swap_cache() is not called, and new folios remain locked.
> *
>- * None of the split folios are unlocked
>+ * Return: 0 on success, -EAGAIN if the folio cannot be split (e.g., due to
>+ * insufficient reference count or extra pins).
> */
> int folio_split_unmapped(struct folio *folio, unsigned int new_order)
> {
>--
>2.51.1
>
--
Wei Yang
Help you, Help me