Message-Id: <20180703135813.ed4eef6a4a2df32fa1085e4c@linux-foundation.org>
Date: Tue, 3 Jul 2018 13:58:13 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Kirill Tkhai <ktkhai@...tuozzo.com>
Cc: vdavydov.dev@...il.com, shakeelb@...gle.com,
viro@...iv.linux.org.uk, hannes@...xchg.org, mhocko@...nel.org,
tglx@...utronix.de, pombredanne@...b.com, stummala@...eaurora.org,
gregkh@...uxfoundation.org, sfr@...b.auug.org.au, guro@...com,
mka@...omium.org, penguin-kernel@...ove.SAKURA.ne.jp,
chris@...is-wilson.co.uk, longman@...hat.com, minchan@...nel.org,
ying.huang@...el.com, mgorman@...hsingularity.net, jbacik@...com,
linux@...ck-us.net, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, willy@...radead.org, lirongqing@...du.com,
aryabinin@...tuozzo.com
Subject: Re: [PATCH v8 14/17] mm: Iterate only over charged shrinkers during
memcg shrink_slab()
On Tue, 03 Jul 2018 18:11:06 +0300 Kirill Tkhai <ktkhai@...tuozzo.com> wrote:
> Using the preparations made in the previous patches, we can now skip
> shrinkers whose bits are not set in the memcg's shrinker bitmap when
> shrinking on behalf of a memcg. To do that, we separate the iteration
> over memcg-aware and !memcg-aware shrinkers, and the memcg-aware ones
> are selected via for_each_set_bit() from the bitmap. On big nodes
> hosting many isolated environments, this gives a significant
> performance improvement. See the following patches for details.
>
> Note that the patch does not handle shrinkers that have become empty
> for a memcg, since we never clear a bitmap bit once it has been set.
> Such shrinkers will be called again and reclaim no objects. That
> functionality is provided by later patches.
>
> ...
>
> @@ -541,6 +555,67 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> return freed;
> }
>
> +#ifdef CONFIG_MEMCG_KMEM
> +static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
> + struct mem_cgroup *memcg, int priority)
> +{
> + struct memcg_shrinker_map *map;
> + unsigned long freed = 0;
> + int ret, i;
> +
> + if (!memcg_kmem_enabled() || !mem_cgroup_online(memcg))
> + return 0;
> +
> + if (!down_read_trylock(&shrinker_rwsem))
> + return 0;
Why trylock? Presumably some other code path is known to hold the lock
for long periods? Dunno. Comment it, please.
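
Something along these lines, perhaps, if the reason is simply not to
stall reclaim behind a writer (wording is mine, guessing at the intent):

	/*
	 * A shrinker register/unregister, or a shrinker-map resize, holds
	 * shrinker_rwsem for write and may block for a while.  Don't stall
	 * reclaim waiting for it -- just report nothing reclaimed.
	 */
	if (!down_read_trylock(&shrinker_rwsem))
		return 0;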
> + /*
> + * 1) Caller passes only alive memcg, so map can't be NULL.
> + * 2) shrinker_rwsem protects from maps expanding.
> + */
> + map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
> + true);
> + BUG_ON(!map);
> +
> + for_each_set_bit(i, map->map, shrinker_nr_max) {
> + struct shrink_control sc = {
> + .gfp_mask = gfp_mask,
> + .nid = nid,
> + .memcg = memcg,
> + };
> + struct shrinker *shrinker;
> +
> + shrinker = idr_find(&shrinker_idr, i);
> + if (unlikely(!shrinker)) {
> + clear_bit(i, map->map);
> + continue;
> + }
> + BUG_ON(!(shrinker->flags & SHRINKER_MEMCG_AWARE));
Fair enough as a development-time sanity check but we shouldn't need
this in production code. Or make it VM_BUG_ON(), at least.
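
ie, a sketch of that suggestion:

	VM_BUG_ON(!(shrinker->flags & SHRINKER_MEMCG_AWARE));

which only generates a runtime check when CONFIG_DEBUG_VM is enabled.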
> + /* See comment in prealloc_shrinker() */
> + if (unlikely(list_empty(&shrinker->list)))
> + continue;
> +
> + ret = do_shrink_slab(&sc, shrinker, priority);
> + freed += ret;
> +
> + if (rwsem_is_contended(&shrinker_rwsem)) {
> + freed = freed ? : 1;
> + break;
> + }
> + }
> +
> + up_read(&shrinker_rwsem);
> + return freed;
> +}