Message-ID: <20190610204639.GA5838@tower.DHCP.thefacebook.com>
Date: Mon, 10 Jun 2019 20:46:47 +0000
From: Roman Gushchin <guro@...com>
To: Vladimir Davydov <vdavydov.dev@...il.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
"Johannes Weiner" <hannes@...xchg.org>,
Shakeel Butt <shakeelb@...gle.com>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH v6 07/10] mm: synchronize access to kmem_cache dying flag
using a spinlock
On Sun, Jun 09, 2019 at 05:31:32PM +0300, Vladimir Davydov wrote:
> On Tue, Jun 04, 2019 at 07:44:51PM -0700, Roman Gushchin wrote:
> > Currently the memcg_params.dying flag and the corresponding
> > workqueue used for the asynchronous deactivation of kmem_caches
> > are synchronized using the slab_mutex.
> >
> > This makes it impossible to check the flag from irq context,
> > which will be required in order to implement asynchronous release
> > of kmem_caches.
> >
> > So let's switch over to an irq-safe spinlock for this
> > synchronization.
> >
> > Signed-off-by: Roman Gushchin <guro@...com>
> > ---
> > mm/slab_common.c | 19 +++++++++++++++----
> > 1 file changed, 15 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index 09b26673b63f..2914a8f0aa85 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -130,6 +130,7 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr,
> > #ifdef CONFIG_MEMCG_KMEM
> >
> > LIST_HEAD(slab_root_caches);
> > +static DEFINE_SPINLOCK(memcg_kmem_wq_lock);
> >
> > void slab_init_memcg_params(struct kmem_cache *s)
> > {
> > @@ -629,6 +630,7 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
> > struct memcg_cache_array *arr;
> > struct kmem_cache *s = NULL;
> > char *cache_name;
> > + bool dying;
> > int idx;
> >
> > get_online_cpus();
> > @@ -640,7 +642,13 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
> > * The memory cgroup could have been offlined while the cache
> > * creation work was pending.
> > */
> > - if (memcg->kmem_state != KMEM_ONLINE || root_cache->memcg_params.dying)
> > + if (memcg->kmem_state != KMEM_ONLINE)
> > + goto out_unlock;
> > +
> > + spin_lock_irq(&memcg_kmem_wq_lock);
> > + dying = root_cache->memcg_params.dying;
> > + spin_unlock_irq(&memcg_kmem_wq_lock);
> > + if (dying)
> > goto out_unlock;
>
> I do understand why we need to sync setting the dying flag for a kmem
> cache about to be destroyed in flush_memcg_workqueue vs checking the
> flag in kmemcg_cache_deactivate: this is needed so that we don't
> schedule a new deactivation work item after we flush the RCU/workqueue.
> However, I don't think it's necessary to check the dying flag here, in
> memcg_create_kmem_cache: we can't schedule a new cache creation work
> item after kmem_cache_destroy has started, because one mustn't allocate
> from a dead kmem cache; since we flush the queue before getting to the
> actual destruction, no cache creation work can be pending. Yeah, it
> might happen that a cache creation work item starts executing while
> flush_memcg_workqueue is in progress, but I don't see any point in
> optimizing this case - after all, cache destruction is a very cold
> path. Since checking the flag in memcg_create_kmem_cache raises
> questions, I suggest simply dropping this check.
Yeah, I came to the same conclusion (in a thread with Johannes):
this check is not required. I'll drop it in a separate patch.
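
For reference, with the dying check dropped, the hunk above would
reduce to something like this (just a sketch against this patch, not
the final version):

	/*
	 * The memory cgroup could have been offlined while the cache
	 * creation work was pending.
	 */
	if (memcg->kmem_state != KMEM_ONLINE)
		goto out_unlock;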
>
> Anyway, it would be nice to see a comment in the code explaining why
> we check the dying flag under a spinlock in kmemcg_cache_deactivate.
Sure, will add one.
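
Something along these lines (a sketch only; the exact wording and
placement may change in the final version):

	/*
	 * memcg_kmem_wq_lock is used to synchronize the memcg_params.dying
	 * flag and to make sure that no new deactivation work item can be
	 * scheduled after flush_memcg_workqueue() has been called.
	 */
	spin_lock_irq(&memcg_kmem_wq_lock);
	if (s->memcg_params.root_cache->memcg_params.dying)
		goto unlock;

	/* schedule the asynchronous deactivation */
	...
unlock:
	spin_unlock_irq(&memcg_kmem_wq_lock);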
Btw, thank you very much for reviewing the series!