Message-Id: <20251120170515.46504-1-matthew.brost@intel.com>
Date: Thu, 20 Nov 2025 09:05:15 -0800
From: Matthew Brost <matthew.brost@...el.com>
To: linux-kernel@...r.kernel.org,
dri-devel@...ts.freedesktop.org,
linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Zi Yan <ziy@...dia.com>,
Joshua Hahn <joshua.hahnjy@...il.com>,
Rakie Kim <rakie.kim@...com>,
Byungchul Park <byungchul@...com>,
Gregory Price <gourry@...rry.net>,
Ying Huang <ying.huang@...ux.alibaba.com>,
Alistair Popple <apopple@...dia.com>,
Oscar Salvador <osalvador@...e.de>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>,
Lyude Paul <lyude@...hat.com>,
Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Ralph Campbell <rcampbell@...dia.com>,
Mika Penttilä <mpenttil@...hat.com>,
Francois Dugast <francois.dugast@...el.com>,
Balbir Singh <balbirs@...dia.com>
Subject: [PATCH] fixup: mm/migrate_device: handle partially mapped folios during
Splitting a partially mapped folio caused a regression in the mremap
section of the Intel Xe SVM test suite, resulting in the following
stack trace:
INFO: task kworker/u65:2:1642 blocked for more than 30 seconds.
[ 212.624286] Tainted: G S W 6.18.0-rc6-xe+ #1719
[ 212.638288] Workqueue: xe_page_fault_work_queue xe_pagefault_queue_work [xe]
[ 212.638323] Call Trace:
[ 212.638324] <TASK>
[ 212.638325] __schedule+0x4b0/0x990
[ 212.638330] schedule+0x22/0xd0
[ 212.638331] io_schedule+0x41/0x60
[ 212.638333] migration_entry_wait_on_locked+0x1d8/0x2d0
[ 212.638336] ? __pfx_wake_page_function+0x10/0x10
[ 212.638339] migration_entry_wait+0xd2/0xe0
[ 212.638341] hmm_vma_walk_pmd+0x7c9/0x8d0
[ 212.638343] walk_pgd_range+0x51d/0xa40
[ 212.638345] __walk_page_range+0x75/0x1e0
[ 212.638347] walk_page_range_mm+0x138/0x1f0
[ 212.638349] hmm_range_fault+0x59/0xa0
[ 212.638351] drm_gpusvm_get_pages+0x194/0x7b0 [drm_gpusvm_helper]
[ 212.638354] drm_gpusvm_range_get_pages+0x2d/0x40 [drm_gpusvm_helper]
[ 212.638355] __xe_svm_handle_pagefault+0x259/0x900 [xe]
[ 212.638375] ? update_load_avg+0x7f/0x6c0
[ 212.638377] ? update_curr+0x13d/0x170
[ 212.638379] xe_svm_handle_pagefault+0x37/0x90 [xe]
[ 212.638396] xe_pagefault_queue_work+0x2da/0x3c0 [xe]
[ 212.638420] process_one_work+0x16e/0x2e0
[ 212.638422] worker_thread+0x284/0x410
[ 212.638423] ? __pfx_worker_thread+0x10/0x10
[ 212.638425] kthread+0xec/0x210
[ 212.638427] ? __pfx_kthread+0x10/0x10
[ 212.638428] ? __pfx_kthread+0x10/0x10
[ 212.638430] ret_from_fork+0xbd/0x100
[ 212.638433] ? __pfx_kthread+0x10/0x10
[ 212.638434] ret_from_fork_asm+0x1a/0x30
[ 212.638436] </TASK>
The issue appears to be that migration PTEs are not properly removed
after a split, leaving hmm_range_fault() blocked in
migration_entry_wait() as seen in the trace above.

This change refactors the code to perform the split in a slightly
different manner while retaining the original patch's intent: the large
folio is only noted while the PTE lock is held, the split is performed
after the lock is dropped, and the collection loop then resumes at the
same address. With this update, the Intel Xe SVM test suite fully
passes.
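
For readers less familiar with this path, below is a minimal userspace
sketch of the reworked control flow (illustration only, not kernel
code; collect_range(), split_entry(), order[] and the pthread mutex are
made-up stand-ins for the PTE walk, split_folio() and the PTE lock):
the large entry is only recorded while the lock is held, the expensive
split runs after the lock is dropped, and the scan resumes at the same
position.

/* Illustrative userspace model of the reworked flow; not kernel code. */
#include <pthread.h>
#include <stdio.h>

#define NENTRIES 8

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the PTE lock */
static int order[NENTRIES] = { 0, 0, 2, 0, 0, 0, 0, 0 };  /* entry 2 is "large" */

static void split_entry(int i)	/* stand-in for split_folio(); may sleep, so runs unlocked */
{
	order[i] = 0;
}

static void collect_range(void)
{
	int i = 0, split_idx;

again:
	split_idx = -1;
	pthread_mutex_lock(&lock);
	for (; i < NENTRIES; i++) {
		if (order[i]) {
			/* Large entry found: remember it, finish up under the lock. */
			split_idx = i;
			goto split;
		}
		printf("collected entry %d\n", i);
	}
split:
	pthread_mutex_unlock(&lock);

	if (split_idx >= 0) {
		split_entry(split_idx);	/* expensive work outside the lock */
		goto again;		/* resume the scan at the same index */
	}
}

int main(void)
{
	collect_range();
	return 0;
}

In the real patch the retry additionally re-maps the PTE page at
'start' and advances ptep by (addr - start) / PAGE_SIZE, so the walk
resumes at the correct entry; unlike the original patch, addr is not
reset to start.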
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...hat.com>
Cc: Zi Yan <ziy@...dia.com>
Cc: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: Rakie Kim <rakie.kim@...com>
Cc: Byungchul Park <byungchul@...com>
Cc: Gregory Price <gourry@...rry.net>
Cc: Ying Huang <ying.huang@...ux.alibaba.com>
Cc: Alistair Popple <apopple@...dia.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: Liam R. Howlett <Liam.Howlett@...cle.com>
Cc: Nico Pache <npache@...hat.com>
Cc: Ryan Roberts <ryan.roberts@....com>
Cc: Dev Jain <dev.jain@....com>
Cc: Barry Song <baohua@...nel.org>
Cc: Lyude Paul <lyude@...hat.com>
Cc: Danilo Krummrich <dakr@...nel.org>
Cc: David Airlie <airlied@...il.com>
Cc: Simona Vetter <simona@...ll.ch>
Cc: Ralph Campbell <rcampbell@...dia.com>
Cc: Mika Penttilä <mpenttil@...hat.com>
Cc: Francois Dugast <francois.dugast@...el.com>
Cc: Balbir Singh <balbirs@...dia.com>
Signed-off-by: Matthew Brost <matthew.brost@...el.com>
---
This fixup should be squashed into the patch "mm/migrate_device: handle
partially mapped folios during" in mm/mm-unstable.

I replaced the original patch with a local patch I authored a while back
that solves the same problem but uses a different code structure. The
failing test case, which is only available with the Xe driver, passes
with this patch. I can attempt to fix up the original patch within its
structure if that's preferred.
---
mm/migrate_device.c | 42 ++++++++++++++++++++++++------------------
1 file changed, 24 insertions(+), 18 deletions(-)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index fa42d2ebd024..69e88f4a2563 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -254,6 +254,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 	spinlock_t *ptl;
 	struct folio *fault_folio = migrate->fault_page ?
 		page_folio(migrate->fault_page) : NULL;
+	struct folio *split_folio = NULL;
 	pte_t *ptep;
 
 again:
@@ -266,10 +267,11 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		return 0;
 	}
 
-	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+	ptep = pte_offset_map_lock(mm, pmdp, start, &ptl);
 	if (!ptep)
 		goto again;
 	arch_enter_lazy_mmu_mode();
+	ptep += (addr - start) / PAGE_SIZE;
 
 	for (; addr < end; addr += PAGE_SIZE, ptep++) {
 		struct dev_pagemap *pgmap;
@@ -347,22 +349,6 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			    pgmap->owner != migrate->pgmap_owner)
 					goto next;
 			}
-		folio = page ? page_folio(page) : NULL;
-		if (folio && folio_test_large(folio)) {
-			int ret;
-
-			pte_unmap_unlock(ptep, ptl);
-			ret = migrate_vma_split_folio(folio,
-						      migrate->fault_page);
-
-			if (ret) {
-				ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
-				goto next;
-			}
-
-			addr = start;
-			goto again;
-		}
 		mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
 		mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
 	}
@@ -400,6 +386,11 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			bool anon_exclusive;
 			pte_t swp_pte;
 
+			if (folio_order(folio)) {
+				split_folio = folio;
+				goto split;
+			}
+
 			flush_cache_page(vma, addr, pte_pfn(pte));
 			anon_exclusive = folio_test_anon(folio) &&
 					 PageAnonExclusive(page);
@@ -478,8 +469,23 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 	if (unmapped)
 		flush_tlb_range(walk->vma, start, end);
 
+split:
 	arch_leave_lazy_mmu_mode();
-	pte_unmap_unlock(ptep - 1, ptl);
+	pte_unmap_unlock(ptep - 1 + !!split_folio, ptl);
+
+	if (split_folio) {
+		int ret;
+
+		ret = split_folio(split_folio);
+		if (fault_folio != split_folio)
+			folio_unlock(split_folio);
+		folio_put(split_folio);
+		if (ret)
+			return migrate_vma_collect_skip(addr, end, walk);
+
+		split_folio = NULL;
+		goto again;
+	}
 
 	return 0;
 }
--
2.34.1