Message-ID: <20190528173959.h4hq55b3ajlfpjrk@esperanza>
Date: Tue, 28 May 2019 20:39:59 +0300
From: Vladimir Davydov <vdavydov.dev@...il.com>
To: Waiman Long <longman@...hat.com>
Cc: Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...com,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Rik van Riel <riel@...riel.com>,
Shakeel Butt <shakeelb@...gle.com>,
Christoph Lameter <cl@...ux.com>, cgroups@...r.kernel.org
Subject: Re: [PATCH v5 5/7] mm: rework non-root kmem_cache lifecycle
management
On Tue, May 28, 2019 at 01:37:50PM -0400, Waiman Long wrote:
> On 5/28/19 1:08 PM, Vladimir Davydov wrote:
> >> static void flush_memcg_workqueue(struct kmem_cache *s)
> >> {
> >> + /*
> >> + * memcg_params.dying is synchronized using slab_mutex AND
> >> + * memcg_kmem_wq_lock spinlock, because it's not always
> >> + * possible to grab slab_mutex.
> >> + */
> >> mutex_lock(&slab_mutex);
> >> + spin_lock(&memcg_kmem_wq_lock);
> >> s->memcg_params.dying = true;
> >> + spin_unlock(&memcg_kmem_wq_lock);
> > I would completely switch from the mutex to the new spin lock -
> > acquiring them both looks weird.
> >
> >> mutex_unlock(&slab_mutex);
> >>
> >> /*
>
> There are places where the slab_mutex is held and sleeping functions
> like kvzalloc() are called. I understand that taking both the mutex and
> the spinlock looks ugly, but converting all the slab_mutex critical
> sections to spinlock critical sections would be a major undertaking by
> itself. So I would suggest leaving that for now.
I didn't mean that. I meant taking the spin lock wherever we need to
access the 'dying' flag, even if slab_mutex is held, so that we don't
need to take mutex_lock in flush_memcg_workqueue, where it's acquired
solely for 'dying' synchronization.
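
For illustration, a rough sketch of what I have in mind (assuming
memcg_kmem_wq_lock alone is enough to publish the flag and that all
readers of 'dying' take the same spinlock; the rest of the function
stays as it is in your patch):

/*
 * Hypothetical sketch, not the posted patch: set 'dying' under
 * memcg_kmem_wq_lock only, so flush_memcg_workqueue() no longer needs
 * slab_mutex just for this flag.
 */
static void flush_memcg_workqueue(struct kmem_cache *s)
{
	spin_lock(&memcg_kmem_wq_lock);
	s->memcg_params.dying = true;
	spin_unlock(&memcg_kmem_wq_lock);

	/* ... existing flushing of the memcg workqueue follows ... */
}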