Message-ID: <655cab1a213440f682eddc9cc1ad2d44@AcuMS.aculab.com>
Date: Wed, 10 May 2023 11:35:59 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Uros Bizjak' <ubizjak@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: Thomas Gleixner <tglx@...utronix.de>
Subject: RE: [PATCH] atomics: Use atomic_try_cmpxchg_release in
rcuref_put_slowpath()
From: Uros Bizjak
> Sent: 09 May 2023 16:03
>
> Use atomic_try_cmpxchg instead of atomic_cmpxchg(*ptr, old, new) == old
> in rcuref_put_slowpath(). The x86 CMPXCHG instruction reports success
> in the ZF flag, so this change saves a compare after cmpxchg.
> Additionally, the compiler reorders some code blocks to follow the
> likely/unlikely annotations in the atomic_try_cmpxchg macro, improving
> the code from
>
> 9a: f0 0f b1 0b lock cmpxchg %ecx,(%rbx)
> 9e: 83 f8 ff cmp $0xffffffff,%eax
> a1: 74 04 je a7 <rcuref_put_slowpath+0x27>
> a3: 31 c0 xor %eax,%eax
>
> to
>
> 9a: f0 0f b1 0b lock cmpxchg %ecx,(%rbx)
> 9e: 75 4c jne ec <rcuref_put_slowpath+0x6c>
> a0: b0 01 mov $0x1,%al
>
> No functional change intended.
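
For context, this is the usual cmpxchg -> try_cmpxchg conversion. A
minimal user-space sketch with C11 atomics (the helper and function
names here are illustrative, not the kernel's, and the release
ordering of the real patch is elided):

#include <stdatomic.h>
#include <stdbool.h>

/* Kernel-style atomic_cmpxchg(): returns the value that was observed
 * in *v, whether or not the exchange happened. */
static int cmpxchg_old_style(atomic_int *v, int old, int new)
{
        atomic_compare_exchange_strong(v, &old, new);
        return old;     /* updated to the observed value on failure */
}

/* Old idiom: the caller compares the returned value itself, so the
 * compiler emits a cmp after the lock cmpxchg. */
static bool put_cmpxchg(atomic_int *v, int cnt, int dead)
{
        return cmpxchg_old_style(v, cnt, dead) == cnt;
}

/* New idiom, as with atomic_try_cmpxchg(): the boolean result maps
 * directly onto the ZF that cmpxchg already set, so no extra cmp is
 * needed, and 'cnt' holds the observed value on failure. */
static bool put_try_cmpxchg(atomic_int *v, int cnt, int dead)
{
        return atomic_compare_exchange_strong(v, &cnt, dead);
}

The kernel's generic try_cmpxchg fallback also marks the failure path
unlikely(), which is what lets the compiler move the failure block out
of line, as in the second disassembly above.
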
While I'm not against the change, I bet you can't detect
any actual difference. IIRC:
- The 'cmp+je' pair gets merged into a single u-op.
- The 'lock cmpxchg' will take long enough that the instruction
decoder won't be a bottleneck.
- Whether the je/jne is predicted taken is pretty much random,
  so you'll speculatively execute somewhere (could be anywhere)
  while the locked cycle completes.
So the only change is three fewer bytes of object code
(the cmp+je+xor tail is seven bytes, the jne+mov tail four).
That will change the cache line alignment of later code.
David