Message-ID: <20170118075448.GA1255@js1304-P5Q-DELUXE>
Date: Wed, 18 Jan 2017 16:54:48 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Tejun Heo <tj@...nel.org>
Cc: vdavydov.dev@...il.com, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, akpm@...ux-foundation.org, jsvana@...com,
hannes@...xchg.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, cgroups@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCHSET v2] slab: make memcg slab destruction scalable
On Tue, Jan 17, 2017 at 08:49:13AM -0800, Tejun Heo wrote:
> Hello,
>
> On Tue, Jan 17, 2017 at 09:12:57AM +0900, Joonsoo Kim wrote:
> > Could you confirm that your series solves the problem reported
> > by Doug? It would be great if the result were mentioned in the
> > patch description.
> >
> > https://bugzilla.kernel.org/show_bug.cgi?id=172991
>
> So, that's an issue in the creation path which is already resolved by
> switching to an ordered workqueue (it'd probably be better to use
> per-cpu wq w/ @max_active == 1 tho). This patchset is about the
> release path. slab_mutex contention would definitely go down with
> this but I don't think there's more connection to it than that.
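(Just to spell out the two setups you mention, since they differ only
in how the workqueue is allocated; the wq name below is illustrative,
not from the actual patch:)

    #include <linux/init.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *memcg_create_wq;

    static int __init wq_setup(void)
    {
            /* ordered wq: at most one work item runs at a time,
             * system-wide, strictly in queueing order */
            memcg_create_wq = alloc_ordered_workqueue("memcg_create", 0);

            /* alternative, per-cpu wq with @max_active == 1: at most
             * one item runs at a time *per CPU*, so different CPUs
             * can still make progress in parallel */
            /* memcg_create_wq = alloc_workqueue("memcg_create", 0, 1); */

            return memcg_create_wq ? 0 : -ENOMEM;
    }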
That problem is caused by the slow release path and the resulting
contention on slab_mutex. With an ordered workqueue, kworkers would no
longer be created in large numbers, but it is still possible for many
work items that create a new cache for a memcg to stay pending for a
long time because of the slow release path. Your patchset replaces the
optimization for the release path, so it would be better to check that
these work items aren't pending for a long time under the above
workload.
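Roughly, the scenario I worry about looks like this (all names here
are illustrative, not the actual memcg code):

    #include <linux/workqueue.h>

    /* assume the ordered wq allocated as above */
    static struct workqueue_struct *memcg_create_wq;

    static void cache_create_func(struct work_struct *work)
    {
            /* would take slab_mutex and create the per-memcg cache */
    }
    static DECLARE_WORK(cache_create_work, cache_create_func);

    static void charge_path(void)
    {
            /*
             * If a slow cache-release item is already running on the
             * ordered wq, this create request stays pending until it
             * and every earlier item have finished. That pending time
             * is what should be checked in the above workload.
             */
            queue_work(memcg_create_wq, &cache_create_work);
    }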
Thanks.