Message-ID: <a6574561-02bc-4ba6-9fb4-418dcb07cd5f@kernel.org>
Date: Mon, 3 Nov 2025 18:00:08 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Pedro Demarchi Gomes <pedrodemargomes@...il.com>,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Xu Xin <xu.xin16@....com.cn>, Chengming Zhou <chengming.zhou@...ux.dev>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/3] Revert "mm/ksm: convert break_ksm() from
walk_page_range_vma() to folio_walk"
On 31.10.25 18:46, Pedro Demarchi Gomes wrote:
> This reverts commit e317a8d8b4f600fc7ec9725e26417030ee594f52 and changes
> function break_ksm_pmd_entry() to use folios.
>
> This reverts break_ksm() to use walk_page_range_vma() instead of
> folio_walk_start().
> This will make it easier to later modify break_ksm() to perform a proper
> range walk.
>
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
> ---
> mm/ksm.c | 63 ++++++++++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 47 insertions(+), 16 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 4f672f4f2140..922d2936e206 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -607,6 +607,47 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
> return atomic_read(&mm->mm_users) == 0;
> }
>
> +static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
> + struct mm_walk *walk)
> +{
> + struct folio *folio = NULL;
> + spinlock_t *ptl;
> + pte_t *pte;
> + pte_t ptent;
> + int ret;
> +
> + pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> + if (!pte)
> + return 0;
> + ptent = ptep_get(pte);
> + if (pte_present(ptent)) {
> + folio = vm_normal_folio(walk->vma, addr, ptent);
> + } else if (!pte_none(ptent)) {
> + swp_entry_t entry = pte_to_swp_entry(ptent);
> +
> + /*
> + * As KSM pages remain KSM pages until freed, no need to wait
> + * here for migration to end.
> + */
> + if (is_migration_entry(entry))
> + folio = pfn_swap_entry_folio(entry);
> + }
> +	/* return 1 if the page is a normal ksm page or KSM-placed zero page */
> + ret = (folio && folio_test_ksm(folio)) || is_ksm_zero_pte(ptent);

Staring at this again: we should really call is_ksm_zero_pte() only if we
know the pte is present.
It's not super dangerous in the old code (because we would only look at
present and migration entries), but now you are making it possible to
call it on even more kinds of non-present ptes.
With that handled

Acked-by: David Hildenbrand (Red Hat) <david@...nel.org>
--
Cheers
David