Date:   Tue, 25 Oct 2022 01:12:13 +0900
From:   Sergey Senozhatsky <senozhatsky@...omium.org>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Minchan Kim <minchan@...nel.org>
Cc:     Nitin Gupta <ngupta@...are.org>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: [PATCH 6/6] zsmalloc: make sure we select best zspage size

We currently decide the best zspage size by looking at the used
percentage value alone. This is not sufficient on its own, because
the zspage usage-percentage calculation is not precise enough.

For example, let's look at size class 208:

pages per zspage       wasted bytes         used%
       1                   144               96
       2                    80               99
       3                    16               99
       4                   160               99

We will select the 2 pages per zspage configuration, as it is the
first one to reach 99%. However, 3 pages per zspage wastes less
memory. Hence we also need to consider the wasted-space metric
when deciding the zspage size.
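
For reference, a standalone userspace sketch (not part of the patch,
and assuming 4K pages) that reproduces the numbers in the table above
using the same waste/usedpc arithmetic as get_pages_per_zspage():

#include <stdio.h>

#define PAGE_SIZE	4096	/* assumption: 4K pages */

int main(void)
{
	int class_size = 208;

	for (int i = 1; i <= 4; i++) {
		int zspage_size = i * PAGE_SIZE;
		/* bytes at the end of the zspage no object can use */
		int waste = zspage_size % class_size;
		int usedpc = (zspage_size - waste) * 100 / zspage_size;

		printf("%d\t%d\t%d\n", i, waste, usedpc);
	}
	return 0;
}

With waste taken into account, the selection below ends up at the
3 pages per zspage configuration instead of stopping at 2.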

Additionally, rename max_usedpc_order: it does not hold a zspage
order, it holds the maximum pages-per-zspage value.

Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
---
 mm/zsmalloc.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 40a09b1f63b5..5de56f4cd16a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -775,8 +775,9 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 static int get_pages_per_zspage(struct zs_pool *pool, int class_size)
 {
 	int i, max_usedpc = 0;
-	/* zspage order which gives maximum used size per KB */
-	int max_usedpc_order = 1;
+	/* zspage size which gives maximum used size per KB */
+	int pages_per_zspage = 1;
+	int min_waste = INT_MAX;
 
 	for (i = 1; i <= pool->max_pages_per_zspage; i++) {
 		int zspage_size;
@@ -788,14 +789,19 @@ static int get_pages_per_zspage(struct zs_pool *pool, int class_size)
 
 		if (usedpc > max_usedpc) {
 			max_usedpc = usedpc;
-			max_usedpc_order = i;
+			pages_per_zspage = i;
 		}
 
 		if (usedpc == 100)
 			break;
+
+		if (waste < min_waste) {
+			min_waste = waste;
+			pages_per_zspage = i;
+		}
 	}
 
-	return max_usedpc_order;
+	return pages_per_zspage;
 }
 
 static struct zspage *get_zspage(struct page *page)
-- 
2.38.0.135.g90850a2211-goog
