Message-ID: <2065263d-a2c0-437e-a096-695c6d17f97a@arm.com>
Date: Mon, 29 Sep 2025 10:14:10 +0530
From: Dev Jain <dev.jain@....com>
To: Lance Yang <lance.yang@...ux.dev>, akpm@...ux-foundation.org,
david@...hat.com, lorenzo.stoakes@...cle.com
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, baohua@...nel.org,
ryan.roberts@....com, npache@...hat.com, riel@...riel.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, harry.yoo@...cle.com,
jannh@...gle.com, matthew.brost@...el.com, joshua.hahnjy@...il.com,
rakie.kim@...com, byungchul@...com, gourry@...rry.net,
ying.huang@...ux.alibaba.com, apopple@...dia.com, usamaarif642@...il.com,
yuzhao@...gle.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
ioworker0@...il.com, stable@...r.kernel.org
Subject: Re: [PATCH 1/1] mm/rmap: fix soft-dirty bit loss when remapping
zero-filled mTHP subpage to shared zeropage
On 28/09/25 10:18 am, Lance Yang wrote:
> From: Lance Yang <lance.yang@...ux.dev>
>
> When splitting an mTHP and replacing a zero-filled subpage with the shared
> zeropage, try_to_map_unused_to_zeropage() currently drops the soft-dirty
> bit.
>
> For userspace tools like CRIU, which rely on the soft-dirty mechanism for
> incremental snapshots, losing this bit means modified pages are missed,
> leading to inconsistent memory state after restore.
>
> Preserve the soft-dirty bit from the old PTE when creating the zeropage
> mapping to ensure modified pages are correctly tracked.
>
> Cc: <stable@...r.kernel.org>
> Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
> ---
> mm/migrate.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index ce83c2c3c287..bf364ba07a3f 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -322,6 +322,10 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>
> newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
> pvmw->vma->vm_page_prot));
> +
> + if (pte_swp_soft_dirty(ptep_get(pvmw->pte)))
> + newpte = pte_mksoft_dirty(newpte);
> +
> set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
>
> dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
I think this should work.

You can pass old_pte = ptep_get(pvmw->pte) into this function to avoid
calling ptep_get() multiple times; see the sketch below.
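
Something like the below (completely untested, just to sketch the idea).
remove_migration_pte() already does a ptep_get() on the same entry later
in the loop, so that read should be able to move up before the call, as
both run under the PTL taken by page_vma_mapped_walk():

	/* in remove_migration_pte(), read the entry once up front: */
	old_pte = ptep_get(pvmw.pte);
	if (... && try_to_map_unused_to_zeropage(&pvmw, folio, idx, old_pte))
		continue;

	/* and take it as a parameter here: */
	static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
						  struct folio *folio,
						  unsigned long idx, pte_t old_pte)
	{
		...
		newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
					       pvmw->vma->vm_page_prot));

		/* The old entry is a migration swap PTE, hence the swp variant. */
		if (pte_swp_soft_dirty(old_pte))
			newpte = pte_mksoft_dirty(newpte);

		set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
		...
	}

IIRC there is also a pte_present() sanity check in that function which
reads the PTE; that could reuse old_pte as well.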