Message-ID: <20191125180744.GA9800@localhost.localdomain>
Date: Mon, 25 Nov 2019 18:07:50 +0000
From: Roman Gushchin <guro@...com>
To: Christian Borntraeger <borntraeger@...ibm.com>
CC: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"hannes@...xchg.org" <hannes@...xchg.org>,
Kernel Team <Kernel-team@...com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"longman@...hat.com" <longman@...hat.com>,
"shakeelb@...gle.com" <shakeelb@...gle.com>,
"vdavydov.dev@...il.com" <vdavydov.dev@...il.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>
Subject: Re: WARNING bisected (was Re: [PATCH v7 08/10] mm: rework non-root
kmem_cache lifecycle management)
On Mon, Nov 25, 2019 at 09:00:56AM +0100, Christian Borntraeger wrote:
>
>
> On 24.11.19 01:39, Roman Gushchin wrote:
> > On Fri, Nov 22, 2019 at 05:28:46PM +0100, Christian Borntraeger wrote:
> >> On 21.11.19 19:45, Roman Gushchin wrote:
> >>> I see. Do you know, which kmem_cache it is? If not, can you, please,
> >>> figure it out?
> >>
> >> The release function for that ref is kmemcg_cache_shutdown.
> >>
> >
> > Hi Christian!
> >
> > Can you, please, test if the following patch resolves the problem?
>
> Yes, it does.
Thanks for testing it!
I'll send the patch shortly.
>
>
> >
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index 8afa188f6e20..628e5f0ee19e 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -888,6 +888,8 @@ static int shutdown_memcg_caches(struct kmem_cache *s)
> >  
> >  static void flush_memcg_workqueue(struct kmem_cache *s)
> >  {
> > +	bool wait_for_children;
> > +
> >  	spin_lock_irq(&memcg_kmem_wq_lock);
> >  	s->memcg_params.dying = true;
> >  	spin_unlock_irq(&memcg_kmem_wq_lock);
> > @@ -904,6 +906,13 @@ static void flush_memcg_workqueue(struct kmem_cache *s)
> >  	 * previous workitems on workqueue are processed.
> >  	 */
> >  	flush_workqueue(memcg_kmem_cache_wq);
> > +
> > +	mutex_lock(&slab_mutex);
> > +	wait_for_children = !list_empty(&s->memcg_params.children);
> > +	mutex_unlock(&slab_mutex);
>
> Not sure if (for reading) we really need the mutex.
Good point!
At this moment the list of children caches can't grow, only shrink.
So if we're reading it without the slab mutex, the worst thing that can
happen is that we'll make an extra rcu_barrier() call.
Which is fine, given that the resulting code looks much simpler.
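
Something like this at the end of flush_memcg_workqueue(), instead of
the mutex-protected flag (untested sketch, assuming the trimmed part of
the hunk above used wait_for_children to decide on an extra
rcu_barrier(); the patch I'll send may differ slightly):

	flush_workqueue(memcg_kmem_cache_wq);

	/*
	 * The children list can only shrink at this point (dying is
	 * already set), so checking it without the slab_mutex is safe:
	 * a stale non-empty result only costs one extra rcu_barrier().
	 */
	if (!list_empty(&s->memcg_params.children))
		rcu_barrier();
}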
Thanks!