Message-ID: <20180324201109.r4udxibbg4t23apg@esperanza>
Date: Sat, 24 Mar 2018 23:11:10 +0300
From: Vladimir Davydov <vdavydov.dev@...il.com>
To: Kirill Tkhai <ktkhai@...tuozzo.com>
Cc: viro@...iv.linux.org.uk, hannes@...xchg.org, mhocko@...nel.org,
akpm@...ux-foundation.org, tglx@...utronix.de,
pombredanne@...b.com, stummala@...eaurora.org,
gregkh@...uxfoundation.org, sfr@...b.auug.org.au, guro@...com,
mka@...omium.org, penguin-kernel@...ove.SAKURA.ne.jp,
chris@...is-wilson.co.uk, longman@...hat.com, minchan@...nel.org,
hillf.zj@...baba-inc.com, ying.huang@...el.com,
mgorman@...hsingularity.net, shakeelb@...gle.com, jbacik@...com,
linux@...ck-us.net, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, willy@...radead.org
Subject: Re: [PATCH 09/10] mm: Iterate only over charged shrinkers during
memcg shrink_slab()
On Wed, Mar 21, 2018 at 04:22:51PM +0300, Kirill Tkhai wrote:
> Using the preparations made in the previous patches, we can now
> skip, during memcg shrink, the shrinkers whose bits are not set in
> the memcg's shrinkers bitmap. To do that, we separate the iteration
> over memcg-aware and !memcg-aware shrinkers; the memcg-aware ones
> are selected via for_each_set_bit() over the bitmap. On big nodes
> hosting many isolated environments, this gives a significant
> performance gain. See the next patch for details.
>
> Note that this patch does not handle shrinkers that have become
> empty: once a bit is set in the bitmap, it is never cleared, so
> such shrinkers will be called again even though they have no
> objects left to shrink. That functionality is added by the next
> patch.
>
> Signed-off-by: Kirill Tkhai <ktkhai@...tuozzo.com>
> ---
> mm/vmscan.c | 54 +++++++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 41 insertions(+), 13 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 265cf069b470..e1fd16bc7a9b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -327,6 +327,8 @@ static int alloc_shrinker_id(struct shrinker *shrinker)
>
> if (!(shrinker->flags & SHRINKER_MEMCG_AWARE))
> return 0;
> + BUG_ON(!(shrinker->flags & SHRINKER_NUMA_AWARE));
> +
> retry:
> ida_pre_get(&bitmap_id_ida, GFP_KERNEL);
> down_write(&bitmap_rwsem);
> @@ -366,7 +368,8 @@ static void add_shrinker(struct shrinker *shrinker)
> down_write(&shrinker_rwsem);
> if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> mcg_shrinkers[shrinker->id] = shrinker;
> - list_add_tail(&shrinker->list, &shrinker_list);
> + else
> + list_add_tail(&shrinker->list, &shrinker_list);
I don't think we should remove per-memcg shrinkers from the global
shrinker list - that would be confusing. It won't be critical if we
iterate over all shrinkers on global reclaim, will it?
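
For illustration, a sketch of what I mean (based on the add_shrinker()
from this series; the only change is making the list_add_tail()
unconditional again):

	static void add_shrinker(struct shrinker *shrinker)
	{
		down_write(&shrinker_rwsem);
		if (shrinker->flags & SHRINKER_MEMCG_AWARE)
			mcg_shrinkers[shrinker->id] = shrinker;
		/*
		 * Keep every shrinker on the global list so that
		 * global reclaim can still iterate over all of them.
		 */
		list_add_tail(&shrinker->list, &shrinker_list);
		up_write(&shrinker_rwsem);
	}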
> up_write(&shrinker_rwsem);
> }
>
> @@ -701,6 +705,39 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> if (!down_read_trylock(&shrinker_rwsem))
> goto out;
>
> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
> + if (!memcg_kmem_enabled() || memcg) {
> + struct shrinkers_map *map;
> + int i;
> +
> + map = rcu_dereference_protected(SHRINKERS_MAP(memcg), true);
> + if (map) {
> + for_each_set_bit(i, map->map[nid], bitmap_nr_ids) {
> + struct shrink_control sc = {
> + .gfp_mask = gfp_mask,
> + .nid = nid,
> + .memcg = memcg,
> + };
> +
> + shrinker = mcg_shrinkers[i];
> + if (!shrinker) {
> + clear_bit(i, map->map[nid]);
> + continue;
> + }
> + freed += do_shrink_slab(&sc, shrinker, priority);
> +
> + if (rwsem_is_contended(&shrinker_rwsem)) {
> + freed = freed ? : 1;
> + goto unlock;
> + }
> + }
> + }
> +
> + if (memcg_kmem_enabled() && memcg)
> + goto unlock;
Maybe factor this out into a separate function, say shrink_slab_memcg?
Just for the sake of code legibility.
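
Something like the below, perhaps (an untested sketch built from the
hunk above, reusing the SHRINKERS_MAP, mcg_shrinkers and bitmap_nr_ids
introduced earlier in the series; how to propagate the rwsem
contention bail-out back to shrink_slab() is up to you):

	static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
					       struct mem_cgroup *memcg,
					       int priority)
	{
		unsigned long freed = 0;
		struct shrinkers_map *map;
		int i;

		map = rcu_dereference_protected(SHRINKERS_MAP(memcg), true);
		if (!map)
			return 0;

		for_each_set_bit(i, map->map[nid], bitmap_nr_ids) {
			struct shrink_control sc = {
				.gfp_mask = gfp_mask,
				.nid = nid,
				.memcg = memcg,
			};
			struct shrinker *shrinker = mcg_shrinkers[i];

			/* Unregistered concurrently? Clear the stale bit. */
			if (!shrinker) {
				clear_bit(i, map->map[nid]);
				continue;
			}
			freed += do_shrink_slab(&sc, shrinker, priority);

			/* Don't stall a pending (un)registration. */
			if (rwsem_is_contended(&shrinker_rwsem)) {
				freed = freed ? : 1;
				break;
			}
		}
		return freed;
	}

Then shrink_slab() would just do freed += shrink_slab_memcg(...) and
jump to unlock when memcg_kmem_enabled() && memcg.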