Message-ID: <20190128143535.7767c397@imladris.surriel.com>
Date:   Mon, 28 Jan 2019 14:35:35 -0500
From:   Rik van Riel <riel@...riel.com>
To:     linux-kernel@...r.kernel.org
Cc:     linux-mm@...ck.org, kernel-team@...com,
        Johannes Weiner <hannes@...xchg.org>, Chris Mason <clm@...com>,
        Roman Gushchin <guro@...com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...e.com>
Subject: [PATCH] mm,slab,vmscan: accumulate gradual pressure on small slabs

There are a few issues with the way the number of slab objects to
scan is calculated in do_shrink_slab.  First, for zero-seek slabs,
we could leave the last object around forever. That could result
in pinning a dying cgroup into memory, instead of reclaiming it.
The fix for that is trivial.
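
For illustration only (this snippet is not part of the patch): with a
single freeable object left, the old calculation never generates any
scan pressure, while the rounding-up division still does:

    unsigned long freeable = 1;
    unsigned long long delta;

    delta = freeable / 2;           /* old: 0, the last object is never scanned */
    delta = (freeable + 1) / 2;     /* new: 1, the cache can still be emptied */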

Second, small slabs receive much more pressure, relative to their
size, than larger slabs, due to "rounding up" the minimum number of
scanned objects to batch_size.
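
As a rough worked example (assuming the defaults: priority 12,
DEFAULT_SEEKS of 2, and a batch_size of SHRINK_BATCH, i.e. 128):

    /* a large cache with 1,000,000 freeable objects */
    delta = (1000000 >> 12) * 4 / 2;    /* ~488 objects, ~0.05% per pass */

    /* a small cache with 100 freeable objects */
    delta = (100 >> 12) * 4 / 2;        /* 0, rounded up by the old code to
                                           min(freeable, batch_size) = 100,
                                           i.e. the entire cache every pass */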

We can keep the pressure on all slabs equal relative to their size
by accumulating the scan pressure on small slabs over time, resulting
in sometimes scanning an object, instead of always scanning several.
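
A sketch of how the accumulation plays out for the 100-object cache
from the example above, again with priority 12 and DEFAULT_SEEKS:

    /* passes 1-40: small_scan grows by 100 per pass, delta stays 0   */
    /* pass 41:     small_scan = 4100, delta = 4100 >> 12 = 1,        */
    /*              small_scan -= 1 << 12 carries 4 to the next pass, */
    /*              delta = 1 * 4 / 2 = 2 objects scanned             */

so the small cache sees a couple of objects scanned roughly every 40
passes, instead of having the entire cache scanned on every pass.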

This results in lower system CPU use, and a lower major fault rate,
as actively used entries from smaller caches get reclaimed less
aggressively, and need to be reloaded/recreated less often.

Fixes: 4b85afbdacd2 ("mm: zero-seek shrinkers")
Fixes: 172b06c32b94 ("mm: slowly shrink slabs with a relatively small number of objects")
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Chris Mason <clm@...com>
Cc: Roman Gushchin <guro@...com>
Cc: kernel-team@...com
Tested-by: Chris Mason <clm@...com>
---
 include/linux/shrinker.h |  1 +
 mm/vmscan.c              | 16 +++++++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 9443cafd1969..7a9a1a0f935c 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -65,6 +65,7 @@ struct shrinker {
 
 	long batch;	/* reclaim batch size, 0 = default */
 	int seeks;	/* seeks to recreate an obj */
+	int small_scan;	/* accumulate pressure on slabs with few objects */
 	unsigned flags;
 
 	/* These are for internal use */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a714c4f800e9..0e375bd7a8b6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -488,18 +488,28 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		 * them aggressively under memory pressure to keep
 		 * them from causing refetches in the IO caches.
 		 */
-		delta = freeable / 2;
+		delta = (freeable + 1) / 2;
 	}
 
 	/*
 	 * Make sure we apply some minimal pressure on default priority
-	 * even on small cgroups. Stale objects are not only consuming memory
+	 * even on small cgroups, by accumulating pressure across multiple
+	 * slab shrinker runs. Stale objects are not only consuming memory
 	 * by themselves, but can also hold a reference to a dying cgroup,
 	 * preventing it from being reclaimed. A dying cgroup with all
 	 * corresponding structures like per-cpu stats and kmem caches
 	 * can be really big, so it may lead to a significant waste of memory.
 	 */
-	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
+	if (!delta) {
+		shrinker->small_scan += freeable;
+
+		delta = shrinker->small_scan >> priority;
+		shrinker->small_scan -= delta << priority;
+
+		delta *= 4;
+		do_div(delta, shrinker->seeks);
+
+	}
 
 	total_scan += delta;
 	if (total_scan < 0) {
