Message-ID: <20140523195728.GA21344@esperanza>
Date: Fri, 23 May 2014 23:57:30 +0400
From: Vladimir Davydov <vdavydov@...allels.com>
To: Christoph Lameter <cl@...two.org>
CC: <hannes@...xchg.org>, <mhocko@...e.cz>,
<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
Subject: Re: [PATCH RFC 3/3] slub: reparent memcg caches' slabs on memcg
offline
On Fri, May 23, 2014 at 12:45:48PM -0500, Christoph Lameter wrote:
> On Fri, 23 May 2014, Vladimir Davydov wrote:
>
> > On Thu, May 22, 2014 at 02:25:30PM -0500, Christoph Lameter wrote:
> > > slab_free calls __slab_free which can release slabs via
> > > put_cpu_partial()/unfreeze_partials()/discard_slab() to the page
> > > allocator. I'd rather have preemption enabled there.
> >
> > Hmm, why? IMO, calling __free_pages with preempt disabled won't hurt
> > latency, because it proceeds really fast. BTW, we already call it for a
> > bunch of pages from __slab_free() -> put_cpu_partial() ->
> > unfreeze_partials() with irqs disabled, which is harder. FWIW, SLAB has
> > the whole obj free path executed under local_irq_save/restore, and it
> > doesn't bother enabling irqs for freeing pages.
> >
> > IMO, the latency improvement we can achieve by enabling preemption while
> > calling __free_pages is rather minor, and it isn't worth complicating
> > the code.
>
> If you look at the end of unfreeze_partials() you see that we release
> locks and therefore enable preempt before calling into the page allocator.
Yes, we release the node's list_lock before calling discard_slab(), but
we don't re-enable irqs: they are disabled in put_cpu_partial() just
before it calls unfreeze_partials(), so we end up calling the page
allocator with irqs off and therefore with preemption disabled.
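
To spell out the path I'm talking about, here is a stripped-down sketch
(not the actual mm/slub.c code - details, node iteration and error
handling are elided):

static void unfreeze_partials_sketch(struct kmem_cache *s,
				     struct kmem_cache_cpu *c)
{
	struct kmem_cache_node *n = get_node(s, 0);	/* assume one node */
	struct page *page, *discard_page = NULL;

	/* called from put_cpu_partial() with irqs already disabled */
	spin_lock(&n->list_lock);
	while ((page = c->partial)) {
		c->partial = page->next;
		/*
		 * put the slab back on the node's partial list, or, if it
		 * is now empty, queue it on a local list for discarding:
		 */
		page->next = discard_page;
		discard_page = page;
	}
	spin_unlock(&n->list_lock);		/* list_lock released here... */

	/* ...but irqs are still off below this point */
	while (discard_page) {
		page = discard_page;
		discard_page = discard_page->next;
		discard_slab(s, page);		/* ends up in __free_pages() */
	}
}

static void put_cpu_partial_sketch(struct kmem_cache *s, struct page *page)
{
	unsigned long flags;

	local_irq_save(flags);			/* preemption off from here... */
	unfreeze_partials_sketch(s, this_cpu_ptr(s->cpu_slab));
	local_irq_restore(flags);		/* ...until here */
}

That is, the __free_pages() calls issued from discard_slab() already run
with irqs off and hence preemption disabled - just not under the
list_lock.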
> You never know what other new features they are going to be adding to the
> page allocator. I'd rather be safe than sorry on this one. We have had
> some trouble in the past with some debugging logic triggering.
I guess by "some trouble in the past with some debugging logic
triggering" you mean the issue that was fixed by commit 9ada19342b244?
From: Shaohua Li <shaohua.li@...el.com>

    slub: move discard_slab out of node lock

    Lockdep reports there is potential deadlock for slub node list_lock.
    discard_slab() is called with the lock hold in unfreeze_partials(),
    which could trigger a slab allocation, which could hold the lock again.
    discard_slab() doesn't need hold the lock actually, if the slab is
    already removed from partial list.
If so, there is nothing to worry about, because I'm not going to make
calls to the page allocator under an internal slab lock. What I propose
is calling __free_pages with preempt disabled, which already happens
here and there and can't result in deadlocks or lockdep warnings.
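
Schematically, the difference between the two situations is the
following (made-up helper names, just a sketch, not actual mm/slub.c
functions):

static void discard_under_list_lock(struct kmem_cache *s,
				    struct kmem_cache_node *n,
				    struct page *page)
{
	/*
	 * The pattern commit 9ada19342b244 removed: entering the page
	 * allocator while still holding the node's list_lock - this is
	 * what lockdep complained about.
	 */
	spin_lock(&n->list_lock);
	remove_partial(n, page);
	discard_slab(s, page);		/* -> __free_pages() under list_lock */
	spin_unlock(&n->list_lock);
}

static void discard_with_preempt_off(struct kmem_cache *s, struct page *page)
{
	unsigned long flags;

	/*
	 * What I'm proposing: the slab is already off all lists, no slab
	 * locks are held, only irqs (and hence preemption) are disabled.
	 */
	local_irq_save(flags);
	discard_slab(s, page);		/* -> __free_pages(), no locks held */
	local_irq_restore(flags);
}

The second pattern is what already happens at the end of
unfreeze_partials(), so it can't produce any new deadlock or lockdep
report.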
Thanks.