Message-ID: <CALCETrXqLWkJhoUnD+ERrYabvZu1=DbQ1CidYpAn1Ewwrg1FcA@mail.gmail.com>
Date: Fri, 24 Mar 2017 11:45:46 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dmitry Vyukov <dvyukov@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Brian Gerst <brgerst@...il.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: locking/atomic: Introduce atomic_try_cmpxchg()
On Fri, Mar 24, 2017 at 10:23 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Fri, Mar 24, 2017 at 09:54:46AM -0700, Andy Lutomirski wrote:
>> > So the first snipped I tested regressed like so:
>> >
>> >
>> > 0000000000000000 <T_refcount_inc>: 0000000000000000 <T_refcount_inc>:
>> > 0: 8b 07 mov (%rdi),%eax 0: 8b 17 mov (%rdi),%edx
>> > 2: 83 f8 ff cmp $0xffffffff,%eax 2: 83 fa ff cmp $0xffffffff,%edx
>> > 5: 74 13 je 1a <T_refcount_inc+0x1a> 5: 74 1a je 21 <T_refcount_inc+0x21>
>> > 7: 85 c0 test %eax,%eax 7: 85 d2 test %edx,%edx
>> > 9: 74 0d je 18 <T_refcount_inc+0x18> 9: 74 13 je 1e <T_refcount_inc+0x1e>
>> > b: 8d 50 01 lea 0x1(%rax),%edx b: 8d 4a 01 lea 0x1(%rdx),%ecx
>> > e: f0 0f b1 17 lock cmpxchg %edx,(%rdi) e: 89 d0 mov %edx,%eax
>> > 12: 75 ee jne 2 <T_refcount_inc+0x2> 10: f0 0f b1 0f lock cmpxchg %ecx,(%rdi)
>> > 14: ff c2 inc %edx 14: 74 04 je 1a <T_refcount_inc+0x1a>
>> > 16: 75 02 jne 1a <T_refcount_inc+0x1a> 16: 89 c2 mov %eax,%edx
>> > 18: 0f 0b ud2 18: eb e8 jmp 2 <T_refcount_inc+0x2>
>> > 1a: c3 retq 1a: ff c1 inc %ecx
>> > 1c: 75 03 jne 21 <T_refcount_inc+0x21>
>> > 1e: 0f 0b ud2
>> > 20: c3 retq
>> > 21: c3 retq
>>
>> Can you re-send the better asm you got earlier?
>
> On the left?
Apparently I'm just blind this morning.
After playing with it a bit, I found some of the problem: you're
passing val into EXCEPTION_VALUE, which keeps it live. If I get rid
of that, the generated code is great.
I haven't found a way to convince GCC that, in the success case, eax
isn't clobbered. I wrote this:
static inline bool try_cmpxchg(unsigned int *ptr, unsigned int *val,
			       unsigned int new)
{
	unsigned int old = *val;
	bool success;

	asm volatile("lock cmpxchgl %[new], %[ptr]"
		     : "=@ccz" (success),
		       [ptr] "+m" (*ptr),
		       [old] "+a" (old)
		     : [new] "r" (new)
		     : "memory");

	if (!success) {
		*val = old;
	} else {
		if (*val != old) {
			*val = old;
			__builtin_unreachable();
		} else {
			/*
			 * Damnit, GCC, I want you to realize that this
			 * is happening but to avoid emitting the store.
			 */
			*val = old; /* <-- here */
		}
	}
	return success;
}
The "here" line is the problematic code that breaks certain use cases,
and it obviously needn't have any effect in the generated code, but
I'm having trouble getting GCC to generate good code without it.
Is there some hack like if __builtin_is_unescaped(*val) *val = old;
that would work?
--Andy