Message-ID: <20101124195606.GA18766@Krystal>
Date: Wed, 24 Nov 2010 14:56:06 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, akpm@...ux-foundation.org,
Pekka Enberg <penberg@...helsinki.fi>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
Eric Dumazet <eric.dumazet@...il.com>,
Tejun Heo <tj@...nel.org>
Subject: Re: [thiscpuops upgrade 10/10] Lockless (and preemptless) fastpaths for slub

* Jeremy Fitzhardinge (jeremy@...p.org) wrote:
> On 11/24/2010 08:17 AM, Christoph Lameter wrote:
> > On Wed, 24 Nov 2010, Pekka Enberg wrote:
> >
> >>> + /*
> >>> + * The transaction ids are globally unique per cpu and per operation on
> >>> + * a per cpu queue. Thus they guarantee that the cmpxchg_double
> >>> + * occurs on the right processor and that there was no operation on the
> >>> + * linked list in between.
> >>> + */
> >>> + tid = c->tid;
> >>> + barrier();
> >> You're using a compiler barrier after every load from c->tid. Why?
> > To make sure that the compiler does not defer the tid load. The tid must
> > be obtained before the rest of the information in the per cpu slab data
> > is retrieved, in order to ensure that we have a consistent set of data
> > to operate on.
>
> Isn't that best expressed with ACCESS_ONCE()?
ACCESS_ONCE()'s use of volatile only ensures that volatile accesses are not
reordered with respect to each other. It guarantees nothing about the ordering
of other, non-volatile memory accesses, which are exactly what we need to keep
the compiler from moving before the c->tid read. A compiler barrier() really
seems to be what is needed here.
Thanks,
Mathieu
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com