Message-Id: <20230304034835.2082479-2-senozhatsky@chromium.org>
Date:   Sat,  4 Mar 2023 12:48:32 +0900
From:   Sergey Senozhatsky <senozhatsky@...omium.org>
To:     Minchan Kim <minchan@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Yosry Ahmed <yosryahmed@...gle.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: [PATCHv4 1/4] zsmalloc: remove insert_zspage() ->inuse optimization

This optimization has no effect. All it does is order
a newly inserted zspage relative to the zspage at the
head of its fullness list, based on their "inuse"
counters at the time of insertion. The intention was
to keep busy zspages at the head, so they could be
filled up and moved to the ZS_FULL fullness group more
quickly. However, this doesn't work: obj_free() can
lower a zspage's "inuse" counter while the zspage
still belongs to the same fullness list, in which case
fix_fullness_group() never repositions it relative to
the head. The result is a largely random order of
zspages within the fullness list.
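
To make the failure mode concrete, here is a minimal
standalone sketch of the comparison being removed. It is
simplified and hypothetical: "zspage_model" and "insert"
are made-up names for this illustration, not zsmalloc
code. The insert only compares against the current head,
and a later obj_free()-style decrement never repositions
the node:

  /*
   * Standalone model of the removed ordering logic;
   * simplified, hypothetical names, not kernel code.
   */
  #include <stdio.h>

  struct zspage_model {
  	int inuse;
  	struct zspage_model *next;
  };

  /* Like insert_zspage(): compare against the head only. */
  static struct zspage_model *insert(struct zspage_model *head,
  				   struct zspage_model *z)
  {
  	if (head && z->inuse < head->inuse) {
  		/* less busy than the head: place it right after */
  		z->next = head->next;
  		head->next = z;
  		return head;
  	}
  	/* at least as busy as the head: becomes the new head */
  	z->next = head;
  	return z;
  }

  int main(void)
  {
  	struct zspage_model a = { .inuse = 50 };
  	struct zspage_model b = { .inuse = 60 };
  	struct zspage_model c = { .inuse = 40 };
  	struct zspage_model *head = NULL, *p;

  	head = insert(head, &a);	/* list: 50 */
  	head = insert(head, &b);	/* list: 60 50 */

  	/*
  	 * An obj_free()-style decrement lowers b's "inuse",
  	 * but b stays on the same fullness list, so nothing
  	 * ever repositions it.
  	 */
  	b.inuse = 30;			/* list is now: 30 50 */

  	head = insert(head, &c);	/* 40 >= 30: c becomes head */

  	for (p = head; p; p = p->next)
  		printf("%d ", p->inuse);
  	printf("\n");			/* "40 30 50": not ordered */

  	return 0;
  }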

For instance, consider a printout of the "inuse"
counters of the first 10 zspages in a class that holds
93 objects per zspage:

 ZS_ALMOST_EMPTY:  36  67  68  64  35  54  63  52

As we can see, the zspage at the head of the list has
one of the lowest "inuse" counters, the opposite of
what the optimization intended.

Remove this pointless "optimization".

Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
---
 mm/zsmalloc.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3aed46ab7e6c..abe0c4d7942d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -762,19 +762,8 @@ static void insert_zspage(struct size_class *class,
 				struct zspage *zspage,
 				enum fullness_group fullness)
 {
-	struct zspage *head;
-
 	class_stat_inc(class, fullness, 1);
-	head = list_first_entry_or_null(&class->fullness_list[fullness],
-					struct zspage, list);
-	/*
-	 * We want to see more ZS_FULL pages and less almost empty/full.
-	 * Put pages with higher ->inuse first.
-	 */
-	if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
-		list_add(&zspage->list, &head->list);
-	else
-		list_add(&zspage->list, &class->fullness_list[fullness]);
+	list_add(&zspage->list, &class->fullness_list[fullness]);
 }
 
 /*
-- 
2.40.0.rc0.216.gc4246ad0f0-goog
