Message-ID: <e1f3074b-c4d4-47e3-9303-18ba254e3662@nvidia.com>
Date: Sat, 15 Nov 2025 13:32:00 +1100
From: Balbir Singh <balbirs@...dia.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
dri-devel@...ts.freedesktop.org, David Hildenbrand <david@...hat.com>,
Zi Yan <ziy@...dia.com>, Joshua Hahn <joshua.hahnjy@...il.com>,
Rakie Kim <rakie.kim@...com>, Byungchul Park <byungchul@...com>,
Gregory Price <gourry@...rry.net>, Ying Huang
<ying.huang@...ux.alibaba.com>, Alistair Popple <apopple@...dia.com>,
Oscar Salvador <osalvador@...e.de>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>, Lyude Paul <lyude@...hat.com>,
Danilo Krummrich <dakr@...nel.org>, David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>, Ralph Campbell <rcampbell@...dia.com>,
Mika Penttilä <mpenttil@...hat.com>,
Matthew Brost <matthew.brost@...el.com>,
Francois Dugast <francois.dugast@...el.com>
Subject: Re: [PATCH] fixup: mm/rmap: extend rmap and migration support
device-private entries
On 11/15/25 11:51, Andrew Morton wrote:
> On Sat, 15 Nov 2025 11:28:35 +1100 Balbir Singh <balbirs@...dia.com> wrote:
>
>> Follow the pattern used in remove_migration_pte() in
>> remove_migration_pmd(). Process the migration entry and, if it is a
>> device-private entry, override pmde with a device-private entry and
>> set the soft-dirty and uffd_wp bits via pmd_swp_mksoft_dirty() and
>> pmd_swp_mkuffd_wp().
>>
>> ...
>>
>> This fixup should be squashed into the patch "mm/rmap: extend rmap and
>> migration support" of mm/mm-unstable
>>
>
> OK. After fixing up
> mm-replace-pmd_to_swp_entry-with-softleaf_from_pmd.patch, mm.git's
> mm/huge_memory.c has the below. Please double-check.
>
>
> void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> {
> 	struct folio *folio = page_folio(new);
> 	struct vm_area_struct *vma = pvmw->vma;
> 	struct mm_struct *mm = vma->vm_mm;
> 	unsigned long address = pvmw->address;
> 	unsigned long haddr = address & HPAGE_PMD_MASK;
> 	pmd_t pmde;
> 	softleaf_t entry;
>
> 	if (!(pvmw->pmd && !pvmw->pte))
> 		return;
>
> 	entry = softleaf_from_pmd(*pvmw->pmd);
> 	folio_get(folio);
> 	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
>
> 	if (pmd_swp_soft_dirty(*pvmw->pmd))
> 		pmde = pmd_mksoft_dirty(pmde);
> 	if (softleaf_is_migration_write(entry))
> 		pmde = pmd_mkwrite(pmde, vma);
> 	if (pmd_swp_uffd_wp(*pvmw->pmd))
> 		pmde = pmd_mkuffd_wp(pmde);
> 	if (!softleaf_is_migration_young(entry))
> 		pmde = pmd_mkold(pmde);
> 	/* NOTE: this may contain setting soft-dirty on some archs */
> 	if (folio_test_dirty(folio) && softleaf_is_migration_dirty(entry))
> 		pmde = pmd_mkdirty(pmde);
>
> 	if (folio_is_device_private(folio)) {
> 		swp_entry_t entry;
>
> 		if (pmd_write(pmde))
> 			entry = make_writable_device_private_entry(
> 						page_to_pfn(new));
> 		else
> 			entry = make_readable_device_private_entry(
> 						page_to_pfn(new));
> 		pmde = swp_entry_to_pmd(entry);
>
> 		if (pmd_swp_soft_dirty(*pvmw->pmd))
> 			pmde = pmd_swp_mksoft_dirty(pmde);
> 		if (pmd_swp_uffd_wp(*pvmw->pmd))
> 			pmde = pmd_swp_mkuffd_wp(pmde);
> 	}
>
> 	if (folio_test_anon(folio)) {
> 		rmap_t rmap_flags = RMAP_NONE;
>
> 		if (!softleaf_is_migration_read(entry))
> 			rmap_flags |= RMAP_EXCLUSIVE;
>
> 		folio_add_anon_rmap_pmd(folio, new, vma, haddr, rmap_flags);
> 	} else {
> 		folio_add_file_rmap_pmd(folio, new, vma);
> 	}
> 	VM_BUG_ON(pmd_write(pmde) && folio_test_anon(folio) && !PageAnonExclusive(new));
> 	set_pmd_at(mm, haddr, pvmw->pmd, pmde);
>
> 	/* No need to invalidate - it was non-present before */
> 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
> 	trace_remove_migration_pmd(address, pmd_val(pmde));
> }
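
For reference, the device-private branch above mirrors the PTE-level hunk in
remove_migration_pte() (mm/migrate.c) that the fixup description refers to,
which looks roughly like the following (paraphrased from memory, not a
verbatim quote of mm.git; variable names as used in remove_migration_pte()):

	/* PTE-level analogue of the device-private handling above */
	if (unlikely(is_device_private_page(new))) {
		if (pte_write(pte))
			entry = make_writable_device_private_entry(
						page_to_pfn(new));
		else
			entry = make_readable_device_private_entry(
						page_to_pfn(new));
		pte = swp_entry_to_pte(entry);
		if (pte_swp_soft_dirty(old_pte))
			pte = pte_swp_mksoft_dirty(pte);
		if (pte_swp_uffd_wp(old_pte))
			pte = pte_swp_mkuffd_wp(pte);
	}
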
Thanks, Andrew! Looks good!
Balbir