Message-ID: <4D0B79E2.6040108@kernel.org>
Date: Fri, 17 Dec 2010 15:55:30 +0100
From: Tejun Heo <tj@...nel.org>
To: Christoph Lameter <cl@...ux.com>
CC: akpm@...ux-foundation.org, Pekka Enberg <penberg@...helsinki.fi>,
linux-kernel@...r.kernel.org,
Eric Dumazet <eric.dumazet@...il.com>,
"H. Peter Anvin" <hpa@...or.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [cpuops cmpxchg V2 1/5] percpu: Generic this_cpu_cmpxchg() and
this_cpu_xchg support
Hello, Christoph.
On 12/14/2010 05:28 PM, Christoph Lameter wrote:
> Index: linux-2.6/include/linux/percpu.h
> ===================================================================
> --- linux-2.6.orig/include/linux/percpu.h 2010-12-08 13:16:22.000000000 -0600
> +++ linux-2.6/include/linux/percpu.h 2010-12-08 14:43:46.000000000 -0600
> @@ -242,21 +242,21 @@ extern void __bad_size_call_parameter(vo
>
> #define __pcpu_size_call_return2(stem, pcp, ...) \
> ({ \
> - typeof(pcp) ret__; \
> + typeof(pcp) pscr2_ret__; \
> __verify_pcpu_ptr(&(pcp)); \
> switch(sizeof(pcp)) { \
> - case 1: ret__ = stem##1(pcp, __VA_ARGS__); \
> + case 1: pscr2_ret__ = stem##1(pcp, __VA_ARGS__); \
> break; \
> - case 2: ret__ = stem##2(pcp, __VA_ARGS__); \
> + case 2: pscr2_ret__ = stem##2(pcp, __VA_ARGS__); \
> break; \
> - case 4: ret__ = stem##4(pcp, __VA_ARGS__); \
> + case 4: pscr2_ret__ = stem##4(pcp, __VA_ARGS__); \
> break; \
> - case 8: ret__ = stem##8(pcp, __VA_ARGS__); \
> + case 8: pscr2_ret__ = stem##8(pcp, __VA_ARGS__); \
> break; \
> default: \
> __bad_size_call_parameter();break; \
> } \
> - ret__; \
> + pscr2_ret__; \
> })
This chunk doesn't belong here. It's a change I made while
applying your earlier patch, so I'm dropping this part.
I relocated the xchg and cmpxchg ops so that they're grouped first by
preemption safety, and put them after this_cpu_add_return() and
friends.
> * IRQ safe versions of the per cpu RMW operations. Note that these operations
> * are *not* safe against modification of the same variable from another
> * processors (which one gets when using regular atomic operations)
> - . They are guaranteed to be atomic vs. local interrupts and
> + * They are guaranteed to be atomic vs. local interrupts and
Noted this in the patch description.
Thanks.
--
tejun