Message-ID: <c6e5aa34-25ab-44f9-b6dc-127f3084adc8@kernel.org>
Date: Wed, 5 Nov 2025 15:21:18 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Xu Xin <xu.xin16@....com.cn>,
Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] Revert "mm/ksm: convert break_ksm() from
walk_page_range_vma() to folio_walk"
On 05.11.25 14:28, Pedro Demarchi Gomes wrote:
> On Mon, Nov 03, 2025 at 06:00:08PM +0100, David Hildenbrand (Red Hat) wrote:
>> On 31.10.25 18:46, Pedro Demarchi Gomes wrote:
>>> This reverts commit e317a8d8b4f600fc7ec9725e26417030ee594f52 and changes
>>> break_ksm_pmd_entry() to use folios.
>>>
>>> break_ksm() goes back to using walk_page_range_vma() instead of
>>> folio_walk_start(), which will make it easier to later modify break_ksm()
>>> to perform a proper range walk.
>>>
>>> Suggested-by: David Hildenbrand <david@...hat.com>
>>> Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
>>> ---
>>> mm/ksm.c | 63 ++++++++++++++++++++++++++++++++++++++++++--------------
>>> 1 file changed, 47 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/mm/ksm.c b/mm/ksm.c
>>> index 4f672f4f2140..922d2936e206 100644
>>> --- a/mm/ksm.c
>>> +++ b/mm/ksm.c
>>> @@ -607,6 +607,47 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
>>>  	return atomic_read(&mm->mm_users) == 0;
>>>  }
>>> +static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
>>> +			       struct mm_walk *walk)
>>> +{
>>> +	struct folio *folio = NULL;
>>> +	spinlock_t *ptl;
>>> +	pte_t *pte;
>>> +	pte_t ptent;
>>> +	int ret;
>>> +
>>> +	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>>> +	if (!pte)
>>> +		return 0;
>>> +	ptent = ptep_get(pte);
>>> +	if (pte_present(ptent)) {
>>> +		folio = vm_normal_folio(walk->vma, addr, ptent);
>>> +	} else if (!pte_none(ptent)) {
>>> +		swp_entry_t entry = pte_to_swp_entry(ptent);
>>> +
>>> +		/*
>>> +		 * As KSM pages remain KSM pages until freed, no need to wait
>>> +		 * here for migration to end.
>>> +		 */
>>> +		if (is_migration_entry(entry))
>>> +			folio = pfn_swap_entry_folio(entry);
>>> +	}
>>> +	/* return 1 if the page is a normal KSM page or a KSM-placed zero page */
>>> +	ret = (folio && folio_test_ksm(folio)) || is_ksm_zero_pte(ptent);
>>
>> Staring again, we should really call is_ksm_zero_pte() only if we know the
>> pte is present.
>>
>> It's not super dangerous in the old code (because we would only look at
>> present and migration entries), but now you are making it possible to call
>> it on even more kinds of non-present ptes.
>>
>
> IIUC vm_normal_folio() will return NULL in the case of a KSM-placed zero
> pte, so we cannot do
> found = folio && (folio_test_ksm(folio) || is_ksm_zero_pte(ptent));
> because it will always be false for a KSM-placed zero pte.
> So we should do
> found = (folio && folio_test_ksm(folio)) ||
>         (pte_present(ptent) && is_ksm_zero_pte(ptent));
> since a pte mapping the KSM-placed zero page is necessarily present, so
> guarding is_ksm_zero_pte() with pte_present() does not lose that case.
Yes, exactly.
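
For the archives, here is roughly how the check could look with that guard
folded in. This is only a sketch against the hunk quoted above, not the
final submission; the unlock/return tail is not part of the quoted hunk and
is assumed here:

static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
			       struct mm_walk *walk)
{
	struct folio *folio = NULL;
	spinlock_t *ptl;
	pte_t *pte;
	pte_t ptent;
	bool found;

	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (!pte)
		return 0;
	ptent = ptep_get(pte);
	if (pte_present(ptent)) {
		folio = vm_normal_folio(walk->vma, addr, ptent);
	} else if (!pte_none(ptent)) {
		swp_entry_t entry = pte_to_swp_entry(ptent);

		/* KSM pages remain KSM pages until freed. */
		if (is_migration_entry(entry))
			folio = pfn_swap_entry_folio(entry);
	}
	/* Only consult is_ksm_zero_pte() once we know the pte is present. */
	found = (folio && folio_test_ksm(folio)) ||
		(pte_present(ptent) && is_ksm_zero_pte(ptent));
	/* Assumed tail, not shown in the quoted hunk: */
	pte_unmap_unlock(pte, ptl);
	return found;	/* non-zero stops the page walk in break_ksm() */
}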
--
Cheers
David