Message-ID: <20091218001357.GA30450@Krystal>
Date: Thu, 17 Dec 2009 19:13:58 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org,
Mel Gorman <mel@....ul.ie>,
Pekka Enberg <penberg@...helsinki.fi>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [this_cpu_xx V7 0/8] Per cpu atomics in core allocators and
cleanup
* Christoph Lameter (cl@...ux-foundation.org) wrote:
> On Thu, 17 Dec 2009, Mathieu Desnoyers wrote:
>
> > Some quick test on my Intel Xeon E5405:
> >
> > local cmpxchg: 14 cycles
> > xchg: 18 cycles
> >
> > So yes, indeed, the non-LOCK prefixed local cmpxchg seems a bit faster
> > than the xchg, given the latter has an implied LOCK prefix.
> >
> > Code used for local cmpxchg:
> > old = var;
> > do {
> > 	ret = cmpxchg_local(&var, old, 4);
> > 	if (likely(ret == old))
> > 		break;
> > 	old = ret;
> > } while (1);
> >
>
> Great. Could you also put that into "patch-format"?
>
Sure, can you point me to a git tree I should work on top of that includes
the per-cpu infrastructure to extend?
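
For reference, a minimal user-space sketch of the comparison above (the
rdtsc-based timing, the iteration count, and the inline-asm stand-ins for
the kernel's cmpxchg_local()/xchg() are illustrative assumptions; the
in-kernel test used the real primitives, so numbers will differ):

/*
 * Sketch only: user-space approximation of the cycle comparison above.
 * Assumptions (not from the original test): rdtsc timing, 1M iterations,
 * raw inline asm standing in for cmpxchg_local()/xchg() on x86-64.
 */
#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* CMPXCHG without the LOCK prefix: only atomic w.r.t. the local CPU. */
static inline long cmpxchg_local_sketch(long *ptr, long old, long new)
{
	long ret;

	asm volatile("cmpxchgq %2, %1"
		     : "=a" (ret), "+m" (*ptr)
		     : "r" (new), "0" (old)
		     : "memory");
	return ret;
}

/* XCHG with a memory operand carries an implicit LOCK prefix. */
static inline long xchg_sketch(long *ptr, long val)
{
	asm volatile("xchgq %0, %1"
		     : "+r" (val), "+m" (*ptr)
		     : : "memory");
	return val;
}

int main(void)
{
	long var = 0, old, ret;
	uint64_t t0, t1;
	int i;

	t0 = rdtsc();
	for (i = 0; i < 1000000; i++) {
		/* The quoted loop: an exchange built from a local cmpxchg. */
		old = var;
		do {
			ret = cmpxchg_local_sketch(&var, old, 4);
			if (ret == old)
				break;
			old = ret;
		} while (1);
	}
	t1 = rdtsc();
	printf("cmpxchg_local loop: ~%llu cycles/iteration\n",
	       (unsigned long long)((t1 - t0) / 1000000));

	t0 = rdtsc();
	for (i = 0; i < 1000000; i++)
		(void)xchg_sketch(&var, 4);
	t1 = rdtsc();
	printf("xchg:               ~%llu cycles/iteration\n",
	       (unsigned long long)((t1 - t0) / 1000000));

	return 0;
}
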
Mathieu
--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68