Message-ID: <63653e44-3a30-46e6-8a3e-f62d73f3f6a8@redhat.com>
Date: Wed, 5 Nov 2025 18:52:09 +0100
From: David Hildenbrand <dhildenb@...hat.com>
To: Pedro Demarchi Gomes <pedrodemargomes@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Xu Xin <xu.xin16@....com.cn>, Chengming Zhou <chengming.zhou@...ux.dev>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/3] ksm: perform a range-walk in break_ksm
> + folio = vm_normal_folio(walk->vma, addr, pte);
> + } else if (!pte_none(pte)) {
> + swp_entry_t entry = pte_to_swp_entry(pte);
> +
> + /*
> + * As KSM pages remain KSM pages until freed, no need to wait
> + * here for migration to end.
> + */
> + if (is_migration_entry(entry))
> + folio = pfn_swap_entry_folio(entry);
> + }
> +	/* return 1 if the page is a normal KSM page or KSM-placed zero page */
> + found = (folio && folio_test_ksm(folio)) || (pte_present(pte)
> + && is_ksm_zero_pte(pte));
Same NIT as for previous patch.
Apart from that LGTM, thanks!
Acked-by: David Hildenbrand (Red Hat) <david@...nel.org>
--
Cheers
David