Message-ID: <20190128194502.GA30061@castle.DHCP.thefacebook.com>
Date: Mon, 28 Jan 2019 19:45:09 +0000
From: Roman Gushchin <guro@...com>
To: Rik van Riel <riel@...riel.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Kernel Team <Kernel-team@...com>,
Johannes Weiner <hannes@...xchg.org>, Chris Mason <clm@...com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH] mm,slab,vmscan: accumulate gradual pressure on small
slabs
On Mon, Jan 28, 2019 at 02:35:35PM -0500, Rik van Riel wrote:
> There are a few issues with the way the number of slab objects to
> scan is calculated in do_shrink_slab. First, for zero-seek slabs,
> we could leave the last object around forever. That could result
> in pinning a dying cgroup into memory, instead of reclaiming it.
> The fix for that is trivial.
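(Indeed: with a single freeable object, freeable / 2 == 0, so that last
object would never be scanned and could pin its dying cgroup forever;
rounding up to (freeable + 1) / 2 makes it eligible again.)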
>
> Secondly, small slabs receive much more pressure, relative to their
> size, than larger slabs, due to "rounding up" the minimum number of
> scanned objects to batch_size.
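(A concrete illustration, assuming the defaults in mm/vmscan.c
(DEF_PRIORITY == 12, DEFAULT_SEEKS == 2, SHRINK_BATCH == 128): a cache
with 100 freeable objects gets freeable >> priority == 0, which the old
code rounds up to min(freeable, batch_size) == 100, i.e. the whole cache
is scanned on every run, while a cache with a million objects is only
asked for roughly 500, about 0.05% of its size.)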
>
> We can keep the pressure on all slabs equal relative to their size
> by accumulating the scan pressure on small slabs over time, resulting
> in sometimes scanning an object, instead of always scanning several.
>
> This results in lower system CPU use, and a lower major fault rate,
> as actively used entries from smaller caches get reclaimed less
> aggressively, and need to be reloaded/recreated less often.
>
> Fixes: 4b85afbdacd2 ("mm: zero-seek shrinkers")
> Fixes: 172b06c32b94 ("mm: slowly shrink slabs with a relatively small number of objects")
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Chris Mason <clm@...com>
> Cc: Roman Gushchin <guro@...com>
> Cc: kernel-team@...com
> Tested-by: Chris Mason <clm@...com>
Hi, Rik!
There are a couple of formatting issues (see below), but other than that
the patch looks very good to me. Thanks!
Acked-by: Roman Gushchin <guro@...com>
> ---
> include/linux/shrinker.h | 1 +
> mm/vmscan.c | 16 +++++++++++++---
> 2 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index 9443cafd1969..7a9a1a0f935c 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -65,6 +65,7 @@ struct shrinker {
>
> long batch; /* reclaim batch size, 0 = default */
> int seeks; /* seeks to recreate an obj */
> + int small_scan; /* accumulate pressure on slabs with few objects */
> unsigned flags;
>
> /* These are for internal use */
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a714c4f800e9..0e375bd7a8b6 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -488,18 +488,28 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> * them aggressively under memory pressure to keep
> * them from causing refetches in the IO caches.
> */
> - delta = freeable / 2;
> + delta = (freeable + 1)/ 2;
^
A space is missing here.
> }
>
> /*
> * Make sure we apply some minimal pressure on default priority
> - * even on small cgroups. Stale objects are not only consuming memory
> + * even on small cgroups, by accumulating pressure across multiple
> + * slab shrinker runs. Stale objects are not only consuming memory
> * by themselves, but can also hold a reference to a dying cgroup,
> * preventing it from being reclaimed. A dying cgroup with all
> * corresponding structures like per-cpu stats and kmem caches
> * can be really big, so it may lead to a significant waste of memory.
> */
> - delta = max_t(unsigned long long, delta, min(freeable, batch_size));
> + if (!delta) {
> + shrinker->small_scan += freeable;
> +
> + delta = shrinker->small_scan >> priority;
> + shrinker->small_scan -= delta << priority;
> +
> + delta *= 4;
> + do_div(delta, shrinker->seeks);
> +
This empty line can be removed, I believe.
> + }
>
> total_scan += delta;
> if (total_scan < 0) {
>
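Just as an illustration, here is a minimal user-space sketch of the
accumulation logic (my own model, not the kernel code; DEF_PRIORITY == 12
and DEFAULT_SEEKS == 2 are assumed, and plain division stands in for
do_div()):

	#include <stdio.h>

	static unsigned long small_scan;	/* per-shrinker accumulator */

	static unsigned long small_slab_delta(unsigned long freeable,
					      int priority, int seeks)
	{
		unsigned long delta;

		/* carry the sub-threshold pressure across runs */
		small_scan += freeable;
		delta = small_scan >> priority;
		small_scan -= delta << priority;	/* keep the remainder */

		delta *= 4;
		delta /= seeks;		/* do_div() in the kernel */
		return delta;
	}

	int main(void)
	{
		unsigned long scanned = 0;
		int i;

		/* 100 freeable objects, default priority and seeks */
		for (i = 0; i < 100; i++)
			scanned += small_slab_delta(100, 12, 2);

		/* totals 4 objects over 100 runs, not 100 per run */
		printf("scanned %lu objects in 100 runs\n", scanned);
		return 0;
	}

Over the 100 runs the cache is asked for about 4 objects in total, in
line with its share of the global pressure, instead of being fully
scanned on every single run.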