Date: Fri,  5 Apr 2024 01:35:47 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	Yosry Ahmed <yosryahmed@...gle.com>
Subject: [PATCH v1 5/5] mm: zswap: do not check the global limit for
 same-filled pages

When storing same-filled pages, there is no point in checking the global
zswap limit, as storing them does not contribute toward it. Move the
limit checking to after same-filled pages are handled.

This avoids having same-filled pages skip zswap and go to disk swap if
the limit is hit. It also avoids queueing the shrink worker, which may
end up being unnecessary if the zswap usage goes down on its own before
another store is attempted.
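
As a rough illustration of the reordered flow (a minimal user-space
sketch only; the helpers below are stand-ins, not the actual zswap
code):

  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-in: pretend the global zswap pool limit has been reached. */
  static bool zswap_check_full(void) { return true; }

  /* Stand-in: treat a zero "page" as same-filled. */
  static bool is_same_filled(int page) { return page == 0; }

  static bool store(int page)
  {
          bool same_filled = is_same_filled(page);

          /* Only pages that consume pool space are gated by the limit. */
          if (!same_filled && zswap_check_full())
                  return false;   /* rejected: falls back to disk swap */

          return true;            /* same-filled pages are stored regardless */
  }

  int main(void)
  {
          printf("same-filled page stored: %d\n", store(0));  /* 1 */
          printf("regular page stored:     %d\n", store(7));  /* 0, limit hit */
          return 0;
  }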

Ignoring the memcg limits as well for same-filled pages is more
controversial, as those limits are more a matter of per-workload policy.
Some workloads disable zswap completely by setting memory.zswap.max = 0,
and they could start observing zswap activity even after disabling it.
Although harmless, this could confuse userspace. Remain conservative and
keep respecting those limits.

Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
---
 mm/zswap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a85c9235d19d3..8763a1e938441 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1404,6 +1404,7 @@ bool zswap_store(struct folio *folio)
 	struct zswap_entry *entry, *old;
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
+	bool same_filled = false;
 	unsigned long value;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
@@ -1427,7 +1428,8 @@ bool zswap_store(struct folio *folio)
 		mem_cgroup_put(memcg);
 	}
 
-	if (zswap_check_full())
+	same_filled = zswap_is_folio_same_filled(folio, &value);
+	if (!same_filled && zswap_check_full())
 		goto reject;
 
 	/* allocate entry */
@@ -1437,7 +1439,7 @@ bool zswap_store(struct folio *folio)
 		goto reject;
 	}
 
-	if (zswap_is_folio_same_filled(folio, &value)) {
+	if (same_filled) {
 		entry->length = 0;
 		entry->value = value;
 		atomic_inc(&zswap_same_filled_pages);
-- 
2.44.0.478.gd926399ef9-goog

