Date: Wed, 08 May 2024 17:55:41 +0800
From: Chengming Zhou <chengming.zhou@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>, 
 David Hildenbrand <david@...hat.com>, Stefan Roesch <shr@...kernel.io>, 
 xu xin <xu.xin16@....com.cn>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
 zhouchengming@...edance.com, Chengming Zhou <chengming.zhou@...ux.dev>
Subject: [PATCH 4/4] mm/ksm: calculate general_profit more accurately

The memory overhead of KSM comes mainly from ksm_rmap_items, one of which
has to be allocated for every anon page that an mm has mapped. The other
memory cost is the ksm_stable_node, of which there are far fewer than
ksm_rmap_items.

We can account for it easily to make the general_profit calculation more
accurate. This is especially important when max_page_sharing is limited,
since we then end up with more chained stable nodes.
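
For illustration only (not part of this patch), the adjusted calculation
can be sketched as a small userspace C program. The PAGE_SIZE and struct
sizes below are made-up example values, not the kernel's real sizeof()
results:

#include <stdio.h>

/*
 * Minimal sketch of the adjusted general_profit calculation, for
 * illustration only.  The PAGE_SIZE and struct sizes are example
 * assumptions, not the values the kernel actually uses.
 */
int main(void)
{
	const long page_size = 4096;        /* assumed PAGE_SIZE */
	const long rmap_item_size = 64;     /* assumed sizeof(struct ksm_rmap_item) */
	const long stable_node_size = 64;   /* assumed sizeof(struct ksm_stable_node) */

	/* example counter values, as if read from KSM's internal state */
	long pages_sharing = 10000;
	long zero_pages = 500;
	long rmap_items = 30000;
	long stable_nodes = 2000;

	long general_profit = (pages_sharing + zero_pages) * page_size
				- rmap_items * rmap_item_size
				- stable_nodes * stable_node_size;

	printf("general_profit: %ld bytes\n", general_profit);
	return 0;
}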

Signed-off-by: Chengming Zhou <chengming.zhou@...ux.dev>
---
 mm/ksm.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 87ffd228944c..a9ce17e6814d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -267,6 +267,9 @@ static unsigned long ksm_pages_unshared;
 /* The number of rmap_items in use: to calculate pages_volatile */
 static unsigned long ksm_rmap_items;
 
+/* The number of stable_nodes */
+static unsigned long ksm_stable_nodes;
+
 /* The number of stable_node chains */
 static unsigned long ksm_stable_node_chains;
 
@@ -584,12 +587,17 @@ static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
 
 static inline struct ksm_stable_node *alloc_stable_node(void)
 {
+	struct ksm_stable_node *node;
+
 	/*
 	 * The allocation can take too long with GFP_KERNEL when memory is under
 	 * pressure, which may lead to hung task warnings.  Adding __GFP_HIGH
 	 * grants access to memory reserves, helping to avoid this problem.
 	 */
-	return kmem_cache_alloc(stable_node_cache, GFP_KERNEL | __GFP_HIGH);
+	node = kmem_cache_alloc(stable_node_cache, GFP_KERNEL | __GFP_HIGH);
+	if (likely(node))
+		ksm_stable_nodes++;
+	return node;
 }
 
 static inline void free_stable_node(struct ksm_stable_node *stable_node)
@@ -597,6 +605,7 @@ static inline void free_stable_node(struct ksm_stable_node *stable_node)
 	VM_BUG_ON(stable_node->rmap_hlist_len &&
 		  !is_stable_node_chain(stable_node));
 	kmem_cache_free(stable_node_cache, stable_node);
+	ksm_stable_nodes--;
 }
 
 /*
@@ -3671,7 +3680,8 @@ static ssize_t general_profit_show(struct kobject *kobj,
 	long general_profit;
 
 	general_profit = (ksm_pages_sharing + get_ksm_zero_pages()) * PAGE_SIZE -
-				ksm_rmap_items * sizeof(struct ksm_rmap_item);
+				ksm_rmap_items * sizeof(struct ksm_rmap_item) -
+				ksm_stable_nodes * sizeof(struct ksm_stable_node);
 
 	return sysfs_emit(buf, "%ld\n", general_profit);
 }

-- 
2.45.0
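
As a usage note (not part of the patch): the value computed by
general_profit_show() is what a reader sees in
/sys/kernel/mm/ksm/general_profit. A minimal userspace C sketch for
reading it back, assuming a kernel with CONFIG_KSM that exposes this
file:

#include <stdio.h>
#include <stdlib.h>

/*
 * Minimal sketch: read the profit value that general_profit_show()
 * exposes through sysfs.  Assumes a kernel with CONFIG_KSM that
 * provides /sys/kernel/mm/ksm/general_profit.
 */
int main(void)
{
	FILE *f = fopen("/sys/kernel/mm/ksm/general_profit", "r");
	long profit;

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	if (fscanf(f, "%ld", &profit) != 1) {
		fprintf(stderr, "unexpected file contents\n");
		fclose(f);
		return EXIT_FAILURE;
	}
	fclose(f);

	printf("KSM general profit: %ld bytes\n", profit);
	return 0;
}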

