Message-ID: <alpine.DEB.2.00.1011241204440.20545@router.home>
Date: Wed, 24 Nov 2010 12:08:59 -0600 (CST)
From: Christoph Lameter <cl@...ux.com>
To: Peter Zijlstra <peterz@...radead.org>
cc: akpm@...ux-foundation.org, Pekka Enberg <penberg@...helsinki.fi>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Eric Dumazet <eric.dumazet@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Tejun Heo <tj@...nel.org>
Subject: Re: [thiscpuops upgrade 10/10] Lockless (and preemptless) fastpaths
for slub
On Wed, 24 Nov 2010, Peter Zijlstra wrote:
> On Wed, 2010-11-24 at 10:14 -0600, Christoph Lameter wrote:
> > On Wed, 24 Nov 2010, Peter Zijlstra wrote:
> >
> > > This thing still relies on disabling IRQs in the slow path, which means
> > > it's still going to be a lot of work to make it work on -rt.
> >
> > The disabling of irqs is because slab operations are used from interrupt
> > context. If we can avoid slab operations from interrupt contexts then we
> > can drop the interrupt disable in the slab allocators.
>
> That's not so much the point, there's per-cpu assumptions due to that.
Sure, the exclusive access to the per cpu area is exploited during
irq off sections. That would have to change.
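
To make that concrete (illustrative only, not code from this patch;
"counter" and the function names below are made up): with interrupts off a
plain read-modify-write of per cpu data cannot be interleaved with an
allocation from interrupt context on the same cpu. Drop the irq off section
and the same update has to become a single operation that an interrupt on
this cpu cannot split, which is roughly what the this_cpu ops are for.

#include <linux/percpu.h>
#include <linux/irqflags.h>

static DEFINE_PER_CPU(unsigned long, counter);  /* made-up per cpu state */

/* Current style: irqs off gives exclusive access, a plain RMW is fine. */
static void bump_irqoff(void)
{
        unsigned long flags;

        local_irq_save(flags);
        __this_cpu_write(counter, __this_cpu_read(counter) + 1);
        local_irq_restore(flags);
}

/*
 * Without the irq off section the RMW must be one interrupt-safe op
 * (assuming an irq-safe this_cpu_inc(), as in current kernels).
 */
static void bump_irqless(void)
{
        this_cpu_inc(counter);
}

The "lockless (and preemptless) fastpath" in the subject is the second
style taken further: the allocation step itself becomes a single per cpu
cmpxchg, so the fast path should need neither irq nor preempt disabling.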
> Not everything is under a proper lock, see for example this bit:
>
>         new = new_slab(s, gfpflags, node);
>
>         if (gfpflags & __GFP_WAIT)
>                 local_irq_disable();
>
>         if (new) {
>                 c = __this_cpu_ptr(s->cpu_slab);
>                 stat(s, ALLOC_SLAB);
>                 if (c->page)
>                         flush_slab(s, c);
>                 slab_lock(new);
>                 __SetPageSlubFrozen(new);
>                 c->page = new;
>                 goto load_freelist;
>         }
>
> There we have the __this_cpu_ptr, c->page deref and flush_slab()->stat()
> call all before we take a lock.
All of that is per cpu data and would therefore have to get different
treatment if we wanted to drop the processing in interrupt-off mode.
Disabling preemption may initially be sufficient there.
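
For the hunk quoted above, "disabling preempt" could look roughly like the
sketch below, assuming allocations no longer come in from interrupt
context. The struct and function names are simplified stand-ins for
illustration, not the real slub ones:

#include <linux/percpu.h>
#include <linux/preempt.h>

/* Simplified stand-ins for the slub structures. */
struct my_cpu_state {
        void *page;                     /* this cpu's active slab page */
};

struct my_cache {
        struct my_cpu_state __percpu *cpu_state;
};

static void attach_new_page(struct my_cache *s, void *new_page)
{
        struct my_cpu_state *c;

        /*
         * Preemption off pins us to this cpu and keeps other tasks off
         * its per cpu state; interrupts stay enabled, which is only
         * safe if nothing allocates from interrupt context.
         */
        preempt_disable();
        c = this_cpu_ptr(s->cpu_state);
        if (c->page) {
                /* would hand the old page back here, like flush_slab() */
        }
        c->page = new_page;
        preempt_enable();
}

Only a first step, as said: anything still reachable from interrupt
context would need more than this.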