Message-ID: <52308C52.9080003@zytor.com>
Date: Wed, 11 Sep 2013 08:29:22 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Andi Kleen <ak@...ux.intel.com>,
Mike Galbraith <bitbucket@...ine.de>,
Thomas Gleixner <tglx@...utronix.de>,
Arjan van de Ven <arjan@...ux.intel.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>
Subject: Re: [PATCH 0/7] preempt_count rework -v2
On 09/11/2013 06:13 AM, Peter Zijlstra wrote:
> On Tue, Sep 10, 2013 at 02:43:06PM -0700, Linus Torvalds wrote:
>> That said, looking at your patch, I get the *very* strong feeling that
>> we could make a macro that does all the repetitions for us, and then
>> have a
>>
>> GENERATE_RMW(atomic_sub_and_test, LOCK_PREFIX "subl", "e", "")
>
> The below seems to compile..
>
> +
> +#define GENERATE_ADDcc(var, val, lock, cc) \
> +do { \
> + const int add_ID__ = (__builtin_constant_p(val) && \
> + ((val) == 1 || (val) == -1)) ? (val) : 0; \
> + \
> + switch (sizeof(var)) { \
> + case 4: \
> + if (add_ID__ == 1) { \
> + asm volatile goto(lock "incl %0;" \
> + "j" cc " %l[cc_label]" \
> + : : "m" (var) \
> + : "memory" : cc_label); \
> + } else if (add_ID__ == -1) { \
> + asm volatile goto(lock "decl %0;" \
> + "j" cc " %l[cc_label]" \
> + : : "m" (var) \
> + : "memory" : cc_label); \
> + } else { \
> + asm volatile goto(lock "addl %1, %0;" \
> + "j" cc " %l[cc_label]" \
> + : : "m" (var), "er" (val) \
> + : "memory" : cc_label); \
> + } \
> + break; \
> + \
> + case 8: \
> + if (add_ID__ == 1) { \
> + asm volatile goto(lock "incq %0;" \
> + "j" cc " %l[cc_label]" \
> + : : "m" (var) \
> + : "memory" : cc_label); \
> + } else if (add_ID__ == -1) { \
> + asm volatile goto(lock "decq %0;" \
> + "j" cc " %l[cc_label]" \
> + : : "m" (var) \
> + : "memory" : cc_label); \
> + } else { \
> + asm volatile goto(lock "addq %1, %0;" \
> + "j" cc " %l[cc_label]" \
> + : : "m" (var), "er" (val) \
> + : "memory" : cc_label); \
> + } \
> + break; \
> + \
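For context, a caller of the quoted macro would presumably look something
like the sketch below (illustrative only, not from the patch; the function
name follows Linus's GENERATE_RMW example above). Since the asm goto
branches to a local cc_label when the condition code "cc" is set, the
caller supplies that label and turns the branch into a return value:

	static __always_inline int atomic_sub_and_test(int i, atomic_t *v)
	{
		GENERATE_ADDcc(v->counter, -i, LOCK_PREFIX, "e");
		return 0;
	cc_label:
		return 1;
	}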
At least in the "asm goto" case you can use:
lock "add%z0 %1,%0;"
... and skip the switch statement.
There was a bug in some old (gcc 3.x?) early x86-64 versions which would
treat %z0 as if it were %Z0, which means it would emit "ll" instead of "q",
but that doesn't apply to any gcc that has "asm goto"...
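Concretely, a hedged sketch of the collapsed macro (untested; it simply
substitutes the "%z0" modifier into the quoted code and drops the switch,
with "inc"/"dec" assumed to take the same suffix treatment as "add"):

	#define GENERATE_ADDcc(var, val, lock, cc)			\
	do {								\
		const int add_ID__ = (__builtin_constant_p(val) &&	\
			((val) == 1 || (val) == -1)) ? (val) : 0;	\
									\
		/* %z0 makes gas derive the l/q suffix from the		\
		   operand size, so no switch on sizeof(var). */	\
		if (add_ID__ == 1) {					\
			asm volatile goto(lock "inc%z0 %0;"		\
					  "j" cc " %l[cc_label]"	\
					  : : "m" (var)			\
					  : "memory" : cc_label);	\
		} else if (add_ID__ == -1) {				\
			asm volatile goto(lock "dec%z0 %0;"		\
					  "j" cc " %l[cc_label]"	\
					  : : "m" (var)			\
					  : "memory" : cc_label);	\
		} else {						\
			asm volatile goto(lock "add%z0 %1, %0;"		\
					  "j" cc " %l[cc_label]"	\
					  : : "m" (var), "er" (val)	\
					  : "memory" : cc_label);	\
		}							\
	} while (0)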
-hpa