Message-Id: <20230223030451.543162-2-senozhatsky@chromium.org>
Date:   Thu, 23 Feb 2023 12:04:46 +0900
From:   Sergey Senozhatsky <senozhatsky@...omium.org>
To:     Minchan Kim <minchan@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Yosry Ahmed <yosryahmed@...gle.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: [PATCHv2 1/6] zsmalloc: remove insert_zspage() ->inuse optimization

This optimization has no effect. All it ensures is
that, at the moment a page is added to its fullness
list, the page is placed before or after the current
head depending on whether its "inuse" counter is
higher or lower than the head's. The intention was to
keep busy pages at the head, so they could be filled
up and moved to the ZS_FULL fullness group more
quickly. However, this doesn't work: obj_free() can
decrement a page's "inuse" counter while the page
still belongs to the same fullness list, in which case
fix_fullness_group() does not move it, so the page's
position relative to the head's "inuse" counter is not
maintained. The end result is a largely random order
of pages within the fullness list.
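
To illustrate, below is a minimal userspace sketch; it
is not kernel code, and struct zspage, insert_zspage()
and the list are simplified stand-ins for the kernel's,
but it reproduces the head-only comparison and the
in-place counter updates described above:

/*
 * Userspace sketch, not kernel code: struct zspage and
 * insert_zspage() here are simplified stand-ins for the
 * kernel's. One singly linked list models one fullness list.
 */
#include <stdio.h>
#include <stdlib.h>

struct zspage {
	int inuse;		/* number of live objects */
	struct zspage *next;
};

static struct zspage *head;	/* e.g. the ZS_ALMOST_EMPTY list */

/*
 * The old heuristic: a new page becomes the head only if it
 * is busier than the current head, otherwise it goes right
 * behind the head. No other position is ever considered.
 */
static void insert_zspage(struct zspage *zspage)
{
	if (head && zspage->inuse < head->inuse) {
		zspage->next = head->next;
		head->next = zspage;
	} else {
		zspage->next = head;
		head = zspage;
	}
}

int main(void)
{
	struct zspage *zs;
	int i;

	srand(1);
	for (i = 0; i < 8; i++) {
		zs = malloc(sizeof(*zs));
		zs->inuse = 50 + rand() % 40;
		insert_zspage(zs);
	}
	/*
	 * obj_free() analogue: counters drop in place, but the
	 * pages keep their list positions, so whatever ordering
	 * insertion established decays.
	 */
	for (zs = head; zs; zs = zs->next)
		zs->inuse -= rand() % 30;
	for (zs = head; zs; zs = zs->next)
		printf("%d ", zs->inuse);
	printf("\n");
	return 0;
}

Compiled with a stock C compiler, this prints the
counters in list order; they come out essentially
unsorted, much like the real printout below.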

For instance, consider a printout of the "inuse"
counters of the first eight pages on a fullness list
in a class that holds 93 objects per zspage:

 ZS_ALMOST_EMPTY:  36  67  68  64  35  54  63  52

As we can see, the head of the list is one of the
least busy pages, the opposite of the intended order.

Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
---
 mm/zsmalloc.c | 29 ++++++++---------------------
 1 file changed, 8 insertions(+), 21 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3aed46ab7e6c..b57a89ed6f30 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -753,37 +753,24 @@ static enum fullness_group get_fullness_group(struct size_class *class,
 }
 
 /*
- * Each size class maintains various freelists and zspages are assigned
- * to one of these freelists based on the number of live objects they
- * have. This functions inserts the given zspage into the freelist
- * identified by <class, fullness_group>.
+ * This function adds the given zspage to the fullness list identified
+ * by <class, fullness_group>.
  */
 static void insert_zspage(struct size_class *class,
-				struct zspage *zspage,
-				enum fullness_group fullness)
+			  struct zspage *zspage,
+			  enum fullness_group fullness)
 {
-	struct zspage *head;
-
 	class_stat_inc(class, fullness, 1);
-	head = list_first_entry_or_null(&class->fullness_list[fullness],
-					struct zspage, list);
-	/*
-	 * We want to see more ZS_FULL pages and less almost empty/full.
-	 * Put pages with higher ->inuse first.
-	 */
-	if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
-		list_add(&zspage->list, &head->list);
-	else
-		list_add(&zspage->list, &class->fullness_list[fullness]);
+	list_add(&zspage->list, &class->fullness_list[fullness]);
 }
 
 /*
- * This function removes the given zspage from the freelist identified
+ * This function removes the given zspage from the fullness list identified
  * by <class, fullness_group>.
  */
 static void remove_zspage(struct size_class *class,
-				struct zspage *zspage,
-				enum fullness_group fullness)
+			  struct zspage *zspage,
+			  enum fullness_group fullness)
 {
 	VM_BUG_ON(list_empty(&class->fullness_list[fullness]));
 
-- 
2.39.2.637.g21b0678d19-goog
