Message-ID: <69b463e5-9854-496d-b461-4bf65e82bc0a@redhat.com>
Date: Mon, 29 Sep 2025 14:08:17 +0200
From: David Hildenbrand <david@...hat.com>
To: Lance Yang <lance.yang@...ux.dev>
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, baohua@...nel.org,
ryan.roberts@....com, dev.jain@....com, npache@...hat.com, riel@...riel.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, harry.yoo@...cle.com,
jannh@...gle.com, matthew.brost@...el.com, joshua.hahnjy@...il.com,
rakie.kim@...com, byungchul@...com, gourry@...rry.net,
ying.huang@...ux.alibaba.com, apopple@...dia.com, usamaarif642@...il.com,
yuzhao@...gle.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
ioworker0@...il.com, stable@...r.kernel.org, akpm@...ux-foundation.org,
lorenzo.stoakes@...cle.com
Subject: Re: [PATCH 1/1] mm/rmap: fix soft-dirty bit loss when remapping
zero-filled mTHP subpage to shared zeropage
On 29.09.25 13:29, Lance Yang wrote:
>
>
> On 2025/9/29 18:29, Lance Yang wrote:
>>
>>
>> On 2025/9/29 15:25, David Hildenbrand wrote:
>>> On 28.09.25 06:48, Lance Yang wrote:
>>>> From: Lance Yang <lance.yang@...ux.dev>
>>>>
>>>> When splitting an mTHP and replacing a zero-filled subpage with the
>>>> shared zeropage, try_to_map_unused_to_zeropage() currently drops the
>>>> soft-dirty bit.
>>>>
>>>> For userspace tools like CRIU, which rely on the soft-dirty mechanism
>>>> for incremental snapshots, losing this bit means modified pages are
>>>> missed, leading to inconsistent memory state after restore.
>>>>
>>>> Preserve the soft-dirty bit from the old PTE when creating the zeropage
>>>> mapping to ensure modified pages are correctly tracked.
>>>>
>>>> Cc: <stable@...r.kernel.org>
>>>> Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
>>>> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
>>>> ---
>>>> mm/migrate.c | 4 ++++
>>>> 1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index ce83c2c3c287..bf364ba07a3f 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -322,6 +322,10 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
>>>>  	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
>>>>  					pvmw->vma->vm_page_prot));
>>>> +
>>>> +	if (pte_swp_soft_dirty(ptep_get(pvmw->pte)))
>>>> +		newpte = pte_mksoft_dirty(newpte);
>>>> +
>>>>  	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
>>>>  	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
>>>
>>> It's interesting that there isn't a single occurrence of the soft-dirty
>>> flag in khugepaged code. I guess it all works because we do the
>>>
>>> _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
>>>
>>> and the pmd_mkdirty() will imply marking it soft-dirty.
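>>>
>>> (For reference, on x86 pte/pmd_mkdirty() set the soft-dirty bit
>>> alongside the dirty bit -- a simplified sketch of the arch helper:
>>>
>>> 	static inline pmd_t pmd_mkdirty(pmd_t pmd)
>>> 	{
>>> 		return pmd_set_flags(pmd, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
>>> 	}
>>>
>>> so marking the collapsed PMD dirty implicitly keeps it soft-dirty.)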
>>>
>>> Now to the problem at hand: I don't think this is particularly
>>> problematic in the common case: if the page is zero, it likely was
>>> never written to (that's what the underused shrinker targets), so the
>>> soft-dirty setting on the PMD is actually just an over-indication for
>>> this page.
>>
>> Cool. Thanks for the insight! Good to know that ;)
>>
>>>
>>> For example, when we just install the shared zeropage directly in
>>> do_anonymous_page(), we obviously also don't set it dirty/soft-dirty.
>>>
>>> Now, one could argue that if the content was changed from non-zero to
>>> zero, it would actually be soft-dirty.
>>
>> Exactly. A false negative could be a problem for userspace tools, IMO.
>>
>>>
>>> Long-story short: I don't think this matters much in practice, but
>>> it's an easy fix.
>>>
>>> As said by dev, please avoid double ptep_get() if possible.
>>
>> Sure, will do. I'll refactor it in the next version.
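>>
>> Something along these lines, perhaps (untested sketch, reading the old
>> swap PTE only once and reusing it):
>>
>> 	pte_t oldpte = ptep_get(pvmw->pte);
>>
>> 	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
>> 				       pvmw->vma->vm_page_prot));
>> 	if (pte_swp_soft_dirty(oldpte))
>> 		newpte = pte_mksoft_dirty(newpte);
>>
>> 	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);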
>>
>>>
>>> Acked-by: David Hildenbrand <david@...hat.com>
>>
>> Thanks!
>>
>>>
>>>
>>> @Lance, can you double-check that the uffd-wp bit is handled
>>> correctly? I strongly assume we lose that as well here.
>
> Yes, the uffd-wp bit was indeed being dropped, but ...
>
> The shared zeropage is read-only, so a write triggers a fault. IIUC,
> the kernel then falls back to checking the VM_UFFD_WP flag on the VMA
> and correctly generates a uffd-wp event, masking the fact that the
> uffd-wp bit on the PTE was lost.
That's not how VM_UFFD_WP works :)
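The per-PTE uffd-wp bit needs to be preserved explicitly, along the
same lines as the soft-dirty fix -- untested sketch, assuming the old
swap PTE has been read into oldpte once as above:

	if (pte_swp_uffd_wp(oldpte))
		newpte = pte_mkuffd_wp(newpte);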
--
Cheers
David / dhildenb