Message-ID: <20180831203450.2536-1-guro@fb.com>
Date:   Fri, 31 Aug 2018 13:34:50 -0700
From:   Roman Gushchin <guro@...com>
To:     <linux-mm@...ck.org>
CC:     <linux-kernel@...r.kernel.org>, <kernel-team@...com>,
        Roman Gushchin <guro@...com>, Josef Bacik <jbacik@...com>,
        Johannes Weiner <hannes@...xchg.org>,
        Rik van Riel <riel@...riel.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH] mm: slowly shrink slabs with a relatively small number of objects

Commit 9092c71bb724 ("mm: use sc->priority for slab shrink targets")
changed the way the target slab pressure is calculated and made it
priority-based:

    delta = freeable >> priority;
    delta *= 4;
    do_div(delta, shrinker->seeks);

The problem is that at the default priority (which is 12), no pressure
is applied at all if the number of potentially reclaimable objects is
less than 4096.
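
For illustration, a minimal userspace sketch of the calculation above
(not part of the patch; it assumes the default priority of 12 and a
shrinker with DEFAULT_SEEKS == 2):

    /* Standalone sketch: shows how the priority-based formula zeroes
     * out delta for a small shrinker list. Values are illustrative.
     */
    #include <stdio.h>

    int main(void)
    {
            unsigned long freeable = 4095;  /* just below 4096 */
            int priority = 12;              /* default priority */
            unsigned long delta;

            delta = freeable >> priority;   /* 4095 >> 12 == 0 */
            delta *= 4;
            delta /= 2;                     /* seeks == DEFAULT_SEEKS */

            printf("delta = %lu\n", delta); /* prints 0 */
            return 0;
    }

So a shrinker with anything up to 4095 freeable objects gets zero scan
pressure at the default priority, no matter how long those objects
stay around.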

This wouldn't be a big deal if these objects were not pinning the
corresponding dying memory cgroups. 4096 dentries/inodes/radix tree
nodes/... is a reasonable number, but 4096 dying cgroups is not.

If there are no big spikes in memory pressure, and new memory cgroups
are created and destroyed periodically, this causes the number of
dying cgroups to grow steadily, producing a slow and hard-to-detect
memory "leak". It's not a real leak, since the memory can eventually
be reclaimed, but in practice that may never happen. I've seen hosts
with a steadily climbing number of dying cgroups that showed no signs
of decline for months, even though the hosts were loaded with a
production workload.

This is an obvious waste of memory, and to prevent it, let's apply
minimal pressure even on small shrinker lists: if there are freeable
objects, scan at least min(freeable, batch_size) objects.

This fix significantly improves the chances of a dying cgroup being
reclaimed, and together with some previous patches stops the steady
growth of the number of dying cgroups on some of our hosts.

Signed-off-by: Roman Gushchin <guro@...com>
Cc: Josef Bacik <jbacik@...com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Rik van Riel <riel@...riel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
---
 mm/vmscan.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index fa2c150ab7b9..c910cf6bf606 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -476,6 +476,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	delta = freeable >> priority;
 	delta *= 4;
 	do_div(delta, shrinker->seeks);
+
+	if (delta == 0 && freeable > 0)
+		delta = min(freeable, batch_size);
+
 	total_scan += delta;
 	if (total_scan < 0) {
 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
-- 
2.17.1
