Message-ID: <0701c9d9-b9b3-4313-8783-8e6d1dbec94d@linux.dev>
Date: Mon, 29 Sep 2025 21:22:28 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: David Hildenbrand <david@...hat.com>
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, baohua@...nel.org,
 ryan.roberts@....com, dev.jain@....com, npache@...hat.com, riel@...riel.com,
 Liam.Howlett@...cle.com, vbabka@...e.cz, harry.yoo@...cle.com,
 jannh@...gle.com, matthew.brost@...el.com, joshua.hahnjy@...il.com,
 rakie.kim@...com, byungchul@...com, gourry@...rry.net,
 ying.huang@...ux.alibaba.com, apopple@...dia.com, usamaarif642@...il.com,
 yuzhao@...gle.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 ioworker0@...il.com, stable@...r.kernel.org, akpm@...ux-foundation.org,
 lorenzo.stoakes@...cle.com
Subject: Re: [PATCH 1/1] mm/rmap: fix soft-dirty bit loss when remapping
 zero-filled mTHP subpage to shared zeropage



On 2025/9/29 20:08, David Hildenbrand wrote:
> On 29.09.25 13:29, Lance Yang wrote:
>>
>>
>> On 2025/9/29 18:29, Lance Yang wrote:
>>>
>>>
>>> On 2025/9/29 15:25, David Hildenbrand wrote:
>>>> On 28.09.25 06:48, Lance Yang wrote:
>>>>> From: Lance Yang <lance.yang@...ux.dev>
>>>>>
>>>>> When splitting an mTHP and replacing a zero-filled subpage with the
>>>>> shared zeropage, try_to_map_unused_to_zeropage() currently drops the
>>>>> soft-dirty bit.
>>>>>
>>>>> For userspace tools like CRIU, which rely on the soft-dirty mechanism
>>>>> for incremental snapshots, losing this bit means modified pages are
>>>>> missed, leading to inconsistent memory state after restore.
>>>>>
>>>>> Preserve the soft-dirty bit from the old PTE when creating the
>>>>> zeropage mapping to ensure modified pages are correctly tracked.
>>>>>
>>>>> Cc: <stable@...r.kernel.org>
>>>>> Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage
>>>>> when splitting isolated thp")
>>>>> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
>>>>> ---
>>>>>    mm/migrate.c | 4 ++++
>>>>>    1 file changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>> index ce83c2c3c287..bf364ba07a3f 100644
>>>>> --- a/mm/migrate.c
>>>>> +++ b/mm/migrate.c
>>>>> @@ -322,6 +322,10 @@ static bool try_to_map_unused_to_zeropage(struct
>>>>> page_vma_mapped_walk *pvmw,
>>>>>        newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
>>>>>                        pvmw->vma->vm_page_prot));
>>>>> +
>>>>> +    if (pte_swp_soft_dirty(ptep_get(pvmw->pte)))
>>>>> +        newpte = pte_mksoft_dirty(newpte);
>>>>> +
>>>>>        set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
>>>>>        dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
>>>>
>>>> It's interesting that there isn't a single occurrence of the soft-
>>>> dirty flag in khugepaged code. I guess it all works because we do the
>>>>
>>>>       _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
>>>>
>>>> and the pmd_mkdirty() will imply marking it soft-dirty.
>>>>
>>>> Now to the problem at hand: I don't think this is particularly
>>>> problematic in the common case: if the page is zero, it likely was
>>>> never written to (that's what the unused shrinker is targeted at),
>>>> so the soft-dirty setting on the PMD is actually just an over-
>>>> indication for this page.
>>>
>>> Cool. Thanks for the insight! Good to know that ;)
>>>
>>>>
>>>> For example, when we just install the shared zeropage directly in
>>>> do_anonymous_page(), we obviously also don't set it dirty/soft-dirty.
>>>>
>>>> Now, one could argue that if the content was changed from non-zero to
>>>> zero, it would actually be soft-dirty.
>>>
>>> Exactly. A false negative could be a problem for the userspace tools, 
>>> IMO.
>>>
>>>>
>>>> Long-story short: I don't think this matters much in practice, but
>>>> it's an easy fix.
>>>>
>>>> As said by Dev, please avoid double ptep_get() if possible.
>>>
>>> Sure, will do. I'll refactor it in the next version.
>>>
>>>>
>>>> Acked-by: David Hildenbrand <david@...hat.com>
>>>
>>> Thanks!
>>>
>>>>
>>>>
>>>> @Lance, can you double-check that the uffd-wp bit is handled
>>>> correctly? I strongly assume we lose that as well here.
>>
>> Yes, the uffd-wp bit was indeed being dropped, but ...
>>
>> The shared zeropage is read-only, which triggers a fault. IIUC,
>> the kernel then falls back to checking the VM_UFFD_WP flag on
>> the VMA and correctly generates a uffd-wp event, masking the
>> fact that the uffd-wp bit on the PTE was lost.
> 
> That's not how VM_UFFD_WP works :)

My bad! Please accept my apologies for the earlier confusion :(

I messed up my test environment (forgot to enable mTHP), which
led me to a completely wrong conclusion...

You're spot on. With mTHP enabled, the WP fault was not caught
on the shared zeropage after it replaced a zero-filled subpage
during an mTHP split.

This is because do_wp_page() requires userfaultfd_pte_wp() to
be true, which in turn needs both userfaultfd_wp(vma) and
pte_uffd_wp(pte).

static inline bool userfaultfd_pte_wp(struct vm_area_struct *vma,
				      pte_t pte)
{
	return userfaultfd_wp(vma) && pte_uffd_wp(pte);
}

userfaultfd_pte_wp() fails as we lose the uffd-wp bit on the PTE ...
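
For the next version I'm thinking of something like the sketch below
(untested, and treating pte_swp_uffd_wp()/pte_mkuffd_wp() as the right
pair at this point is my assumption): carry the uffd-wp bit over the
same way as the soft-dirty bit, and read the old PTE only once to
avoid the double ptep_get():

	/* sketch for try_to_map_unused_to_zeropage(), mm/migrate.c */
	pte_t oldpte = ptep_get(pvmw->pte);	/* read the old PTE once */
	pte_t newpte;

	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
				       pvmw->vma->vm_page_prot));

	/* preserve soft-dirty from the old (migration) PTE */
	if (pte_swp_soft_dirty(oldpte))
		newpte = pte_mksoft_dirty(newpte);

	/* preserve uffd-wp so userfaultfd_pte_wp() still sees it */
	if (pte_swp_uffd_wp(oldpte))
		newpte = pte_mkuffd_wp(newpte);

	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);

With that, the WP fault on the shared zeropage should reach
handle_userfault() again after the split.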

Please correct me if I missed something important!
