Message-ID: <20251115002835.3515194-1-balbirs@nvidia.com>
Date: Sat, 15 Nov 2025 11:28:35 +1100
From: Balbir Singh <balbirs@...dia.com>
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
dri-devel@...ts.freedesktop.org
Cc: Balbir Singh <balbirs@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Zi Yan <ziy@...dia.com>,
Joshua Hahn <joshua.hahnjy@...il.com>,
Rakie Kim <rakie.kim@...com>,
Byungchul Park <byungchul@...com>,
Gregory Price <gourry@...rry.net>,
Ying Huang <ying.huang@...ux.alibaba.com>,
Alistair Popple <apopple@...dia.com>,
Oscar Salvador <osalvador@...e.de>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>,
Lyude Paul <lyude@...hat.com>,
Danilo Krummrich <dakr@...nel.org>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Ralph Campbell <rcampbell@...dia.com>,
Mika Penttilä <mpenttil@...hat.com>,
Matthew Brost <matthew.brost@...el.com>,
Francois Dugast <francois.dugast@...el.com>
Subject: [PATCH] fixup: mm/rmap: extend rmap and migration support device-private entries
Follow the pattern used in remove_migration_pte() in
remove_migration_pmd(): process the migration entry and, if the entry
type is device private, override pmde with a device-private entry,
carrying the soft-dirty and uffd-wp bits over with
pmd_swp_mksoft_dirty() and pmd_swp_mkuffd_wp().
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...hat.com>
Cc: Zi Yan <ziy@...dia.com>
Cc: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: Rakie Kim <rakie.kim@...com>
Cc: Byungchul Park <byungchul@...com>
Cc: Gregory Price <gourry@...rry.net>
Cc: Ying Huang <ying.huang@...ux.alibaba.com>
Cc: Alistair Popple <apopple@...dia.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
Cc: Nico Pache <npache@...hat.com>
Cc: Ryan Roberts <ryan.roberts@....com>
Cc: Dev Jain <dev.jain@....com>
Cc: Barry Song <baohua@...nel.org>
Cc: Lyude Paul <lyude@...hat.com>
Cc: Danilo Krummrich <dakr@...nel.org>
Cc: David Airlie <airlied@...il.com>
Cc: Simona Vetter <simona@...ll.ch>
Cc: Ralph Campbell <rcampbell@...dia.com>
Cc: Mika Penttilä <mpenttil@...hat.com>
Cc: Matthew Brost <matthew.brost@...el.com>
Cc: Francois Dugast <francois.dugast@...el.com>
Signed-off-by: Balbir Singh <balbirs@...dia.com>
---
This fixup should be squashed into the patch "mm/rmap: extend rmap and
migration support" of mm/mm-unstable
mm/huge_memory.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9dda8c48daca..50ba458efcab 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4698,16 +4698,6 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
folio_get(folio);
pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
- if (folio_is_device_private(folio)) {
- if (pmd_write(pmde))
- entry = make_writable_device_private_entry(
- page_to_pfn(new));
- else
- entry = make_readable_device_private_entry(
- page_to_pfn(new));
- pmde = swp_entry_to_pmd(entry);
- }
-
if (pmd_swp_soft_dirty(*pvmw->pmd))
pmde = pmd_mksoft_dirty(pmde);
if (is_writable_migration_entry(entry))
@@ -4720,6 +4710,23 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
if (folio_test_dirty(folio) && is_migration_entry_dirty(entry))
pmde = pmd_mkdirty(pmde);
+ if (folio_is_device_private(folio)) {
+ swp_entry_t entry;
+
+ if (pmd_write(pmde))
+ entry = make_writable_device_private_entry(
+ page_to_pfn(new));
+ else
+ entry = make_readable_device_private_entry(
+ page_to_pfn(new));
+ pmde = swp_entry_to_pmd(entry);
+
+ if (pmd_swp_soft_dirty(*pvmw->pmd))
+ pmde = pmd_swp_mksoft_dirty(pmde);
+ if (pmd_swp_uffd_wp(*pvmw->pmd))
+ pmde = pmd_swp_mkuffd_wp(pmde);
+ }
+
if (folio_test_anon(folio)) {
rmap_t rmap_flags = RMAP_NONE;
--
2.51.1