Message-ID: <bf832253-0052-4ab2-b664-33bec3837c23@redhat.com>
Date: Wed, 29 Oct 2025 15:46:36 +0100
From: David Hildenbrand <david@...hat.com>
To: Pedro Demarchi Gomes <pedrodemargomes@...il.com>,
 Andrew Morton <akpm@...ux-foundation.org>
Cc: Xu Xin <xu.xin16@....com.cn>, Chengming Zhou <chengming.zhou@...ux.dev>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] ksm: replace function unmerge_ksm_pages with
 break_ksm

On 28.10.25 14:19, Pedro Demarchi Gomes wrote:
> Function unmerge_ksm_pages() is unnecessary now that break_ksm() walks
> an address range, so replace it with break_ksm().
> 
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@...il.com>
> ---
>   mm/ksm.c | 39 ++++++++++++++++-----------------------
>   1 file changed, 16 insertions(+), 23 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 1d1ef0554c7c..18c9e3bda285 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -669,6 +669,18 @@ static const struct mm_walk_ops break_ksm_lock_vma_ops = {
>   };
>   
>   /*
> + * Though it's very tempting to unmerge rmap_items from stable tree rather
> + * than check every pte of a given vma, the locking doesn't quite work for
> + * that - an rmap_item is assigned to the stable tree after inserting ksm
> + * page and upping mmap_lock.  Nor does it fit with the way we skip dup'ing
> + * rmap_items from parent to child at fork time (so as not to waste time
> + * if exit comes before the next scan reaches it).
> + *
> + * Similarly, although we'd like to remove rmap_items (so updating counts
> + * and freeing memory) when unmerging an area, it's easier to leave that
> + * to the next pass of ksmd - consider, for example, how ksmd might be
> + * in cmp_and_merge_page on one of the rmap_items we would be removing.
> + *
>    * We use break_ksm to break COW on a ksm page by triggering unsharing,
>    * such that the ksm page will get replaced by an exclusive anonymous page.
>    *
> @@ -1077,25 +1089,6 @@ static void remove_trailing_rmap_items(struct ksm_rmap_item **rmap_list)
>   	}
>   }
>   
> -/*
> - * Though it's very tempting to unmerge rmap_items from stable tree rather
> - * than check every pte of a given vma, the locking doesn't quite work for
> - * that - an rmap_item is assigned to the stable tree after inserting ksm
> - * page and upping mmap_lock.  Nor does it fit with the way we skip dup'ing
> - * rmap_items from parent to child at fork time (so as not to waste time
> - * if exit comes before the next scan reaches it).
> - *
> - * Similarly, although we'd like to remove rmap_items (so updating counts
> - * and freeing memory) when unmerging an area, it's easier to leave that
> - * to the next pass of ksmd - consider, for example, how ksmd might be
> - * in cmp_and_merge_page on one of the rmap_items we would be removing.
> - */
> -static int unmerge_ksm_pages(struct vm_area_struct *vma,
> -			     unsigned long start, unsigned long end, bool lock_vma)
> -{
> -	return break_ksm(vma, start, end, lock_vma);
> -}
> -
>   static inline
>   struct ksm_stable_node *folio_stable_node(const struct folio *folio)
>   {
> @@ -1233,7 +1226,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>   		for_each_vma(vmi, vma) {
>   			if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
>   				continue;
> -			err = unmerge_ksm_pages(vma,
> +			err = break_ksm(vma,
>   						vma->vm_start, vma->vm_end, false);

Move that all into a single line.
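
For example, something like this (just a sketch of the suggested
formatting; the call itself stays exactly as in your patch):

	err = break_ksm(vma, vma->vm_start, vma->vm_end, false);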


With that

Acked-by: David Hildenbrand <david@...hat.com>

Thanks for tackling this!

-- 
Cheers

David / dhildenb

