Message-ID: <20150227201526.GH24818@arm.com>
Date: Fri, 27 Feb 2015 20:15:30 +0000
From: Will Deacon <will.deacon@....com>
To: Pranith Kumar <bobby.prani@...il.com>
Cc: Catalin Marinas <Catalin.Marinas@....com>,
Steve Capper <steve.capper@...aro.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] arm64: cmpxchg.h: Bring ldxr and stxr closer

On Fri, Feb 27, 2015 at 08:09:17PM +0000, Pranith Kumar wrote:
> The ARM64 documentation recommends keeping exclusive loads and stores as close
> together as possible; any instruction which does not depend on the loaded
> value should be moved outside the loop.
>
> In the current implementation of cmpxchg(), there is a mov instruction which
> can be pulled above the load-exclusive instruction without any change in
> functionality. This patch makes that change.
>
> Signed-off-by: Pranith Kumar <bobby.prani@...il.com>
> ---
> arch/arm64/include/asm/cmpxchg.h | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
[...]
> @@ -166,11 +166,11 @@ static inline int __cmpxchg_double(volatile void *ptr1, volatile void *ptr2,
> VM_BUG_ON((unsigned long *)ptr2 - (unsigned long *)ptr1 != 1);
> do {
> asm volatile("// __cmpxchg_double8\n"
> + " mov %w0, #0\n"
> " ldxp %0, %1, %2\n"

Seriously, you might want to test this before you mindlessly make changes to
low-level synchronisation code. Not only is the change completely unnecessary,
but it is actively harmful.
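
To spell out the failure mode, here is a rough userspace sketch of the
__cmpxchg_double8 sequence (simplified from the mainline code of the time;
the kernel's exact constraints and return convention may differ, and the
function name here is made up). %0 doubles as the first ldxp destination
*and* the loop/status flag, which is exactly why the mov cannot move:

#include <stdint.h>

/*
 * Returns 1 if both words matched and were replaced, 0 otherwise.
 * AArch64 only; for brevity the constraints only name ptr[0], so a
 * real version would also cover ptr[1] (e.g. with a "memory" clobber).
 */
static inline int cmpxchg_double_sketch(uint64_t *ptr,
					uint64_t old1, uint64_t old2,
					uint64_t new1, uint64_t new2)
{
	uint64_t loop, lost;

	do {
		asm volatile(
		"	ldxp	%0, %1, %2\n"	   /* overwrites %0 and %1 with the loaded pair */
		"	eor	%0, %0, %3\n"	   /* %0 = mismatch bits of word 0 */
		"	eor	%1, %1, %4\n"	   /* %1 = mismatch bits of word 1 */
		"	orr	%1, %0, %1\n"	   /* %1 != 0 iff the comparison failed */
		"	mov	%w0, #0\n"	   /* clear status; only safe AFTER ldxp/eor */
		"	cbnz	%1, 1f\n"	   /* mismatch: exit the loop with %w0 == 0 */
		"	stxp	%w0, %5, %6, %2\n" /* %w0 = 0 on success, 1 = retry */
		"1:\n"
		: "=&r" (loop), "=&r" (lost), "+Q" (*ptr)
		: "r" (old1), "r" (old2), "r" (new1), "r" (new2));
	} while (loop);

	return lost == 0;
}

Hoist the mov above the ldxp and two things happen: the ldxp immediately
overwrites %0, so the mov becomes dead code (hence "completely unnecessary"),
and on the mismatch path %w0 now holds the low bits of the eor result rather
than zero, so the do/while can spin forever on a value that will never match
(hence "actively harmful"). The documentation's advice is about minimising
the work between the exclusive pair, not about hoisting instructions whose
destination the ldxp itself writes.
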
Have a good weekend,
Will