Message-Id: <cb35065d-b100-533b-04c1-1188a75220a2@linux.vnet.ibm.com>
Date: Mon, 27 Nov 2017 11:16:46 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: Minchan Kim <minchan@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team <kernel-team@....com>,
Michal Hocko <mhocko@...e.com>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH] mm: Do not stall register_shrinker
On 11/24/2017 05:34 AM, Minchan Kim wrote:
> Shakeel Butt reported that he has observed, on a production system,
> the job loader getting stuck for tens of seconds while doing a mount
> operation. It turned out that it was stuck in register_shrinker()
> while some unrelated job was under memory pressure and spending time
> in shrink_slab(). Our machines have a lot of shrinkers registered,
> and a job under memory pressure has to traverse all of those
> memcg-aware shrinkers, which delays unrelated jobs that want to
> register their own shrinkers.
>
> To solve the issue, this patch simply bails out of slab shrinking
> once it finds that someone wants to register a shrinker in parallel.
> A downside is that it could cause unfair shrinking between shrinkers.
> However, that should be rare, and we can add more complicated logic
> once we find this is not enough.
>
> Link: http://lkml.kernel.org/r/20171115005602.GB23810@bbox
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> Acked-by: Johannes Weiner <hannes@...xchg.org>
> Reported-and-tested-by: Shakeel Butt <shakeelb@...gle.com>
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> Signed-off-by: Minchan Kim <minchan@...nel.org>
> ---
> mm/vmscan.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6a5a72baccd5..6698001787bd 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -486,6 +486,14 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> sc.nid = 0;
>
> freed += do_shrink_slab(&sc, shrinker, priority);
> +	/*
> +	 * Bail out if someone wants to register a new shrinker, to
> +	 * prevent a long stall caused by parallel ongoing shrinking.
> +	 */
> + if (rwsem_is_contended(&shrinker_rwsem)) {
> + freed = freed ? : 1;
> + break;
> + }
This is similar to the abort at the beginning of shrink_slab(), when it
cannot grab shrinker_rwsem:
	if (!down_read_trylock(&shrinker_rwsem)) {
		/*
		 * If we would return 0, our callers would understand that we
		 * have nothing else to shrink and give up trying. By returning
		 * 1 we keep it going and assume we'll be able to shrink next
		 * time.
		 */
		freed = 1;
		goto out;
	}
Right now, shrink_slab() is called from three places: twice in
shrink_node() and once in drop_slab_node(). But the return value from
shrink_slab() is checked only inside drop_slab_node(), which uses a
heuristic to decide whether to keep scanning over the registered
memcg-aware shrinkers.
The question is: will aborting here still guarantee forward progress for
all the contexts that might be attempting to allocate memory and have
eventually invoked shrink_slab()? Because maybe the memory allocation
request has higher priority than a context getting a bit delayed while
stuck waiting on shrinker_rwsem.