Message-ID: <e88cbc23-16af-458e-9f5f-6b06eff0d8f5@linux.dev>
Date: Mon, 29 Sep 2025 18:15:29 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: Dev Jain <dev.jain@....com>
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, baohua@...nel.org,
ryan.roberts@....com, npache@...hat.com, riel@...riel.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, harry.yoo@...cle.com,
jannh@...gle.com, matthew.brost@...el.com, joshua.hahnjy@...il.com,
rakie.kim@...com, byungchul@...com, gourry@...rry.net,
ying.huang@...ux.alibaba.com, apopple@...dia.com, usamaarif642@...il.com,
yuzhao@...gle.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
ioworker0@...il.com, stable@...r.kernel.org, akpm@...ux-foundation.org,
lorenzo.stoakes@...cle.com, david@...hat.com
Subject: Re: [PATCH 1/1] mm/rmap: fix soft-dirty bit loss when remapping
zero-filled mTHP subpage to shared zeropage
On 2025/9/29 12:44, Dev Jain wrote:
>
> On 28/09/25 10:18 am, Lance Yang wrote:
>> From: Lance Yang <lance.yang@...ux.dev>
>>
>> When splitting an mTHP and replacing a zero-filled subpage with the
>> shared zeropage, try_to_map_unused_to_zeropage() currently drops the
>> soft-dirty bit.
>>
>> For userspace tools like CRIU, which rely on the soft-dirty mechanism for
>> incremental snapshots, losing this bit means modified pages are missed,
>> leading to inconsistent memory state after restore.
>>
>> Preserve the soft-dirty bit from the old PTE when creating the zeropage
>> mapping to ensure modified pages are correctly tracked.
>>
>> Cc: <stable@...r.kernel.org>
>> Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
>> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
>> ---
>> mm/migrate.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index ce83c2c3c287..bf364ba07a3f 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -322,6 +322,10 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>>  	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
>>  					pvmw->vma->vm_page_prot));
>> +
>> +	if (pte_swp_soft_dirty(ptep_get(pvmw->pte)))
>> +		newpte = pte_mksoft_dirty(newpte);
>> +
>>  	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
>>  	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
>
> I think this should work.
>
> You can pass old_pte = ptep_get(pvmw->pte) to this function to avoid
> calling ptep_get() multiple times.
Good catch! Will do in v2, thanks.
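
For reference, a minimal sketch of the v2 direction (hedged: the final
signature and caller-side change may differ; old_pte here would be read
once by the caller, e.g. remove_migration_pte(), and passed down):

static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
					  struct folio *folio,
					  unsigned long idx, pte_t old_pte)
{
	pte_t newpte;

	/* ... unchanged checks and zero-fill detection ... */

	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
					pvmw->vma->vm_page_prot));

	/*
	 * Preserve soft-dirty from the old PTE so soft-dirty trackers
	 * (e.g. CRIU) still see this page as modified after the remap.
	 */
	if (pte_swp_soft_dirty(old_pte))
		newpte = pte_mksoft_dirty(newpte);

	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
	return true;
}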