Message-ID: <AANLkTikVFWYS9UeCP1DJxhzYn9_rMcrovSfbQpNmXsvk@mail.gmail.com>
Date: Wed, 24 Nov 2010 09:16:18 +0200
From: Pekka Enberg <penberg@...nel.org>
To: Christoph Lameter <cl@...ux.com>
Cc: akpm@...ux-foundation.org, Pekka Enberg <penberg@...helsinki.fi>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
Eric Dumazet <eric.dumazet@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Tejun Heo <tj@...nel.org>
Subject: Re: [thiscpuops upgrade 10/10] Lockless (and preemptless) fastpaths
for slub
On Wed, Nov 24, 2010 at 1:51 AM, Christoph Lameter <cl@...ux.com> wrote:
> @@ -1737,23 +1770,53 @@ static __always_inline void *slab_alloc(
> {
> void **object;
> struct kmem_cache_cpu *c;
> - unsigned long flags;
> + unsigned long tid;
>
> if (slab_pre_alloc_hook(s, gfpflags))
> return NULL;
>
> - local_irq_save(flags);
> +redo:
> + /*
> + * Must read kmem_cache cpu data via this cpu ptr. Preemption is
> + * enabled. We may switch back and forth between cpus while
> + * reading from one cpu area. That does not matter as long
> + * as we end up on the original cpu again when doing the cmpxchg.
> + */
> c = __this_cpu_ptr(s->cpu_slab);
> +
> + /*
> + * The transaction ids are globally unique per cpu and per operation on
> + * a per cpu queue. Thus they can be used to guarantee that the
> + * cmpxchg_double occurs on the right processor and that there was no
> + * operation on the linked list in between.
> + */
> + tid = c->tid;
> + barrier();
You're using a compiler barrier after every load from c->tid. Why?
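
For context, here is a rough user-space sketch of the pattern in that
hunk. It is not the kernel code: the per-cpu area collapses to a single
global 16-byte slot, a C11 compare-exchange on an __int128 stands in for
this_cpu_cmpxchg_double (assumes an LP64 target; GCC may need -mcx16 or
-latomic), and names like struct slot and alloc_fastpath are invented
for illustration:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct object {
	struct object *next;		/* free pointer lives in the object */
};

struct slot {				/* made-up stand-in for kmem_cache_cpu */
	struct object *freelist;
	uintptr_t tid;
};

static _Atomic __int128 cpu_slab;	/* one "per-cpu" area, 16 bytes */

static struct object *alloc_fastpath(void)
{
	__int128 old, new;
	struct slot c;
	struct object *object;

	do {
		/*
		 * Snapshot tid and freelist. In the patch these are two
		 * plain loads separated by barrier(); any operation that
		 * runs in between bumps tid, so the exchange below fails
		 * and we retry instead of popping a stale freelist.
		 */
		old = atomic_load(&cpu_slab);
		memcpy(&c, &old, sizeof(c));
		object = c.freelist;
		if (!object)
			return NULL;		/* kernel falls back to the slow path */
		c.freelist = object->next;	/* get_freepointer() */
		c.tid++;			/* next_tid() */
		memcpy(&new, &c, sizeof(c));
	} while (!atomic_compare_exchange_weak(&cpu_slab, &old, new));

	return object;
}

int main(void)
{
	static struct object objs[2] = { { &objs[1] }, { NULL } };
	struct slot init = { .freelist = &objs[0], .tid = 0 };
	__int128 v = 0;

	memcpy(&v, &init, sizeof(init));
	atomic_store(&cpu_slab, v);

	/* Pops objs[0], then objs[1], then hits the empty-list path. */
	printf("%p\n%p\n%p\n", (void *)alloc_fastpath(),
	       (void *)alloc_fastpath(), (void *)alloc_fastpath());
	return 0;
}

The tid snapshot taken before reading the freelist is what makes the
final compare-exchange fail whenever another allocation or free touched
the slot in between, which is why the fastpath can run without
disabling interrupts or preemption.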