Date: Wed, 10 Apr 2024 10:47:18 +0200
From: David Hildenbrand <david@...hat.com>
To: Alex Shi <seakeel@...il.com>, alexs@...nel.org,
 Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Cc: Izik Eidus <izik.eidus@...ellosystems.com>,
 Matthew Wilcox <willy@...radead.org>, Andrea Arcangeli
 <aarcange@...hat.com>, Hugh Dickins <hughd@...gle.com>,
 Chris Wright <chrisw@...s-sol.org>
Subject: Re: [PATCH v4 8/9] mm/ksm: Convert chain series funcs and replace
 get_ksm_page

On 10.04.24 05:47, Alex Shi wrote:
> 
> 
> On 4/9/24 7:02 PM, David Hildenbrand wrote:
>> On 09.04.24 11:28, alexs@...nel.org wrote:
>>> From: "Alex Shi (tencent)" <alexs@...nel.org>
>>>
>>> In the ksm stable tree, all pages are single, so let's convert them to use
>>> folios, as well as the stable_tree_insert/stable_tree_search funcs.
>>> And replace get_ksm_page() with ksm_get_folio(), since the former is no
>>> longer needed.
>>>
>>> It could save a few compound_head calls.
>>>
>>> Signed-off-by: Alex Shi (tencent) <alexs@...nel.org>
>>> Cc: Izik Eidus <izik.eidus@...ellosystems.com>
>>> Cc: Matthew Wilcox <willy@...radead.org>
>>> Cc: Andrea Arcangeli <aarcange@...hat.com>
>>> Cc: Hugh Dickins <hughd@...gle.com>
>>> Cc: Chris Wright <chrisw@...s-sol.org>
>>> Reviewed-by: David Hildenbrand <david@...hat.com>
>>
>> I don't recall giving that yet :)
> 
> Oops...
> Sorry for the misunderstanding!

No worries :)

>>
>> You could have kept some of the get_ksm_page()->ksm_get_folio() conversion in a separate patch.
>>
>> i.e., "[PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info" from your old series could have mostly stayed separate.
>>
> 
> Yes, but the 11th and 12th patches kind of depend on each other; e.g., after merging the 8th, 9th, 10th and 12th with get_ksm_page replaced:
> 
> ../mm/ksm.c:993:21: error: ‘get_ksm_page’ defined but not used [-Werror=unused-function]
>    993 | static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
>        |                     ^~~~~~~~~~~~
> 
> so we have to squash the 11th and 12th if we want to merge the 12th with 8, 9, 10...
> Or should we just merge 8, 9, 10 and keep the 11th and 12th separate, per your first suggestion?
> 

I see what you mean. Including the removal is certainly required there, as you
remove the last user.

It might make sense to move some cleanups+comment adjustments from
"[PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info" into relevant patches.

After Patch #1 in this series, I would do

 From 38a6f6017bf91d9a8869316b711b594909caa5ed Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@...hat.com>
Date: Wed, 10 Apr 2024 10:32:24 +0200
Subject: [PATCH] mm/ksm: rename get_ksm_page_flags() to ksm_get_folio_flags

As we are removing get_ksm_page_flags(), make the flags match the new
function name.

Signed-off-by: David Hildenbrand <david@...hat.com>
---
  mm/ksm.c | 34 +++++++++++++++++-----------------
  1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ac080235b002..fd2666e6bda1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -890,10 +890,10 @@ static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node)
  	free_stable_node(stable_node);
  }
  
-enum get_ksm_page_flags {
-	GET_KSM_PAGE_NOLOCK,
-	GET_KSM_PAGE_LOCK,
-	GET_KSM_PAGE_TRYLOCK
+enum ksm_get_folio_flags {
+	KSM_GET_FOLIO_NOLOCK,
+	KSM_GET_FOLIO_LOCK,
+	KSM_GET_FOLIO_TRYLOCK
  };
  
  /*
@@ -916,7 +916,7 @@ enum get_ksm_page_flags {
   * is on its way to being freed; but it is an anomaly to bear in mind.
   */
  static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
+				 enum ksm_get_folio_flags flags)
  {
  	struct folio *folio;
  	void *expected_mapping;
@@ -959,15 +959,15 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
  		goto stale;
  	}
  
-	if (flags == GET_KSM_PAGE_TRYLOCK) {
+	if (flags == KSM_GET_FOLIO_TRYLOCK) {
  		if (!folio_trylock(folio)) {
  			folio_put(folio);
  			return ERR_PTR(-EBUSY);
  		}
-	} else if (flags == GET_KSM_PAGE_LOCK)
+	} else if (flags == KSM_GET_FOLIO_LOCK)
  		folio_lock(folio);
  
-	if (flags != GET_KSM_PAGE_NOLOCK) {
+	if (flags != KSM_GET_FOLIO_NOLOCK) {
  		if (READ_ONCE(folio->mapping) != expected_mapping) {
  			folio_unlock(folio);
  			folio_put(folio);
@@ -991,7 +991,7 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
  }
  
  static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
+				 enum ksm_get_folio_flags flags)
  {
  	struct folio *folio = ksm_get_folio(stable_node, flags);
  
@@ -1009,7 +1009,7 @@ static void remove_rmap_item_from_tree(struct ksm_rmap_item *rmap_item)
  		struct page *page;
  
  		stable_node = rmap_item->head;
-		page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+		page = get_ksm_page(stable_node, KSM_GET_FOLIO_LOCK);
  		if (!page)
  			goto out;
  
@@ -1118,7 +1118,7 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
  	struct page *page;
  	int err;
  
-	page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+	page = get_ksm_page(stable_node, KSM_GET_FOLIO_LOCK);
  	if (!page) {
  		/*
  		 * get_ksm_page did remove_node_from_stable_tree itself.
@@ -1657,7 +1657,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
  		 * stable_node parameter itself will be freed from
  		 * under us if it returns NULL.
  		 */
-		_tree_page = get_ksm_page(dup, GET_KSM_PAGE_NOLOCK);
+		_tree_page = get_ksm_page(dup, KSM_GET_FOLIO_NOLOCK);
  		if (!_tree_page)
  			continue;
  		nr += 1;
@@ -1780,7 +1780,7 @@ static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_du
  	if (!is_stable_node_chain(stable_node)) {
  		if (is_page_sharing_candidate(stable_node)) {
  			*_stable_node_dup = stable_node;
-			return get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK);
+			return get_ksm_page(stable_node, KSM_GET_FOLIO_NOLOCK);
  		}
  		/*
  		 * _stable_node_dup set to NULL means the stable_node
@@ -1886,7 +1886,7 @@ static struct page *stable_tree_search(struct page *page)
  			 * fine to continue the walk.
  			 */
  			tree_page = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+						 KSM_GET_FOLIO_NOLOCK);
  		}
  		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
  		if (!tree_page) {
@@ -1947,7 +1947,7 @@ static struct page *stable_tree_search(struct page *page)
  			 * than kpage, but that involves more changes.
  			 */
  			tree_page = get_ksm_page(stable_node_dup,
-						 GET_KSM_PAGE_TRYLOCK);
+						 KSM_GET_FOLIO_TRYLOCK);
  
  			if (PTR_ERR(tree_page) == -EBUSY)
  				return ERR_PTR(-EBUSY);
@@ -2119,7 +2119,7 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
  			 * fine to continue the walk.
  			 */
  			tree_page = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+						 KSM_GET_FOLIO_NOLOCK);
  		}
  		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
  		if (!tree_page) {
@@ -2610,7 +2610,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
  			list_for_each_entry_safe(stable_node, next,
  						 &migrate_nodes, list) {
  				page = get_ksm_page(stable_node,
-						    GET_KSM_PAGE_NOLOCK);
+						    KSM_GET_FOLIO_NOLOCK);
  				if (page)
  					put_page(page);
  				cond_resched();
-- 
2.44.0


Then, adjust the get_ksm_page() comments as you change the code:

In "[PATCH v4 4/9] mm/ksm: use folio in remove_stable_node", adjust the two comments
in remove_stable_node() to state "ksm_get_folio".
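
For example, the first of those two is visible in the patch context above; the
adjustment would just be something like this (sketch only, comment wording
otherwise unchanged):

-		 * get_ksm_page did remove_node_from_stable_tree itself.
+		 * ksm_get_folio did remove_node_from_stable_tree itself.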

In "[PATCH v4 5/9] mm/ksm: use folio in stable_node_dup", adjust the comment in
stable_node_dup().

In "[PATCH v4 8/9] mm/ksm: Convert chain series funcs and replace get_ksm_page",
adjust the comments for __stable_node_chain(), stable_tree_insert() and stable_tree_search().


Then, the remaining comments are in folio_migrate_ksm(), stable_node_dup_remove_range(),
ksm_memory_callback() and folio_migrate_flags(); I'd just fix those up in a separate
patch after this one, called something like "mm/ksm: update remaining comments now
that get_ksm_page() is gone".
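
FWIW, once the whole series (including such a comment patch) is applied, a quick

	git grep -n get_ksm_page mm/

should come back empty; just a suggestion to catch any comments we missed.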


But that's just one way of removing some "noise" from this squashed patch :)

>> [...]
>>
>>>    /*
>>> @@ -1829,7 +1821,7 @@ static __always_inline struct page *chain(struct ksm_stable_node **s_n_d,
>>>     * This function returns the stable tree node of identical content if found,
>>>     * NULL otherwise.
>>>     */
>>> -static struct page *stable_tree_search(struct page *page)
>>> +static void *stable_tree_search(struct page *page)
>>
>> There is one caller of stable_tree_search() in cmp_and_merge_page().
>>
>> Why the change from page* to void*?
> 
> Uh, a few more changes are needed if we want to get rid of the void*:

You could keep it page* and return &folio->page, right? Then you could convert stable_tree_search() in a separate patch to return a folio* instead.

Sorry if I am missing something.
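
Something like this at the relevant return sites inside stable_tree_search() is
what I have in mind (just a rough, untested sketch; the existing ERR_PTR(-EBUSY)
return would stay as it is):

	/* work on the folio internally, but keep the struct page * return type */
	return folio ? &folio->page : NULL;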

> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 0d703c3da9d8..cd414a9c33ad 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1815,7 +1815,7 @@ static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
>    * This function returns the stable tree node of identical content if found,
>    * NULL otherwise.
>    */
> -static void *stable_tree_search(struct page *page)
> +static struct folio *stable_tree_search(struct page *page)
>   {
>          int nid;
>          struct rb_root *root;
> @@ -2308,6 +2308,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
>          struct page *tree_page = NULL;
>          struct ksm_stable_node *stable_node;
>          struct page *kpage;
> +       struct folio *folio;
>          unsigned int checksum;
>          int err;
>          bool max_page_sharing_bypass = false;
> @@ -2333,7 +2334,8 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
>          }
>   
>          /* We first start with searching the page inside the stable tree */
> -       kpage = stable_tree_search(page);
> +       folio = stable_tree_search(page);
> +       kpage = &folio->page;
>          if (kpage == page && rmap_item->head == stable_node) {
>                  put_page(kpage);
>                  return;
> 
>> I suspect cmp_and_merge_page() could similarly be converted to some degree to let kpage be a folio (separate patch).
>>
> 
> Yes, there are a couple of changes needed for cmp_and_merge_page() and the try_to_merge_xx_pages() series; I am going to change them in the next patch series. I guess two rounds of patches are better for a big/huge change, is that right?

Smaller patches are preferable. But if we have to add temporary workarounds (like
using void*), it might be better to have the relevant cleanups as part of a single
patch.

-- 
Cheers,

David / dhildenb

