Message-ID: <002b01d21fea$fb0bab60$f1230220$@net>
Date: Thu, 6 Oct 2016 09:02:00 -0700
From: "Doug Smythies" <dsmythies@...us.net>
To: <js1304@...il.com>, "'Andrew Morton'" <akpm@...ux-foundation.org>
Cc: "'Christoph Lameter'" <cl@...ux.com>,
"'Pekka Enberg'" <penberg@...nel.org>,
"'David Rientjes'" <rientjes@...gle.com>,
"'Johannes Weiner'" <hannes@...xchg.org>,
"'Vladimir Davydov'" <vdavydov.dev@...il.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
"'Joonsoo Kim'" <iamjoonsoo.kim@....com>, <stable@...r.kernel.org>
Subject: RE: [PATCH] mm/slab: fix kmemcg cache creation delayed issue
It was my (limited) understanding that the subsequent two-patch set
superseded this patch. Indeed, the two-patch set seems to solve
both the SLAB and SLUB bug reports.
References:
https://bugzilla.kernel.org/show_bug.cgi?id=172981
https://bugzilla.kernel.org/show_bug.cgi?id=172991
https://patchwork.kernel.org/patch/9361853
https://patchwork.kernel.org/patch/9359271
On 2016.10.05 23:21 Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@....com>
>
> There is a bug report that SLAB causes extreme load average due to
> over 2000 kworker threads.
>
> https://bugzilla.kernel.org/show_bug.cgi?id=172981
>
> This issue is caused by the kmemcg feature, which tries to create a new
> set of kmem_caches for each memcg. Recently, kmem_cache creation was
> slowed by synchronize_sched(), and further kmem_cache creation is also
> delayed since kmem_cache creation is serialized by the global slab_mutex
> lock. So, the number of kworkers trying to create kmem_caches increases
> quickly. synchronize_sched() is needed for lockless access to the node's
> shared array, but it's not needed when a new kmem_cache is being created.
> So, this patch rules out that case.
>
> Fixes: 801faf0db894 ("mm/slab: lockless decision to grow cache")
> Cc: stable@...r.kernel.org
> Reported-by: Doug Smythies <dsmythies@...us.net>
> Tested-by: Doug Smythies <dsmythies@...us.net>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
> ---
> mm/slab.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 6508b4d..3c83c29 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -961,7 +961,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> * guaranteed to be valid until irq is re-enabled, because it will be
> * freed after synchronize_sched().
> */
> - if (force_change)
> + if (old_shared && force_change)
> synchronize_sched();
>
> fail:
> --
> 1.9.1