Message-ID: <20150820103810.GC19328@arm.com>
Date: Thu, 20 Aug 2015 11:38:11 +0100
From: Will Deacon <will.deacon@....com>
To: Sarbojit Ganguly <ganguly.s@...sung.com>
Cc: Catalin Marinas <Catalin.Marinas@....com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
SHARAN ALLUR <sharan.allur@...sung.com>,
VIKRAM MUPPARTHI <vikram.m@...sung.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"Waiman.Long@...com" <Waiman.Long@...com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>
Subject: Re: Re: [PATCH] arm: Adding support for atomic half word exchange

On Thu, Aug 20, 2015 at 07:40:44AM +0100, Sarbojit Ganguly wrote:
> My apologies, the e-mail editor was not configured properly.
> CC'ed to relevant maintainers and reposting once again with proper formatting.
>
> Since a 16-bit halfword exchange was not implemented and the MCS-based
> qspinlock (Waiman's xchg_tail()) requires an atomic exchange on a halfword,
> here is a small modification to the __xchg() code to support it.
> ARMv6 and lower do not support LDREXH, so we need to make sure things
> do not break when compiling for ARMv6.
>
> Signed-off-by: Sarbojit Ganguly <ganguly.s@...sung.com>
> ---
> arch/arm/include/asm/cmpxchg.h | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
> index 1692a05..547101d 100644
> --- a/arch/arm/include/asm/cmpxchg.h
> +++ b/arch/arm/include/asm/cmpxchg.h
> @@ -50,6 +50,24 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
> : "r" (x), "r" (ptr)
> : "memory", "cc");
> break;
> +#if !defined (CONFIG_CPU_V6)
> + /*
> + * Halfword exclusive exchange
> + * This is new implementation as qspinlock
> + * wants 16 bit atomic CAS.
> + * This is not supported on ARMv6.
> + */

I don't think you need this comment. We don't use qspinlock on arch/arm/.

> + case 2:
> + asm volatile("@ __xchg2\n"
> + "1: ldrexh %0, [%3]\n"
> + " strexh %1, %2, [%3]\n"
> + " teq %1, #0\n"
> + " bne 1b"
> + : "=&r" (ret), "=&r" (tmp)
> + : "r" (x), "r" (ptr)
> + : "memory", "cc");
> + break;
> +#endif
> case 4:
> asm volatile("@ __xchg4\n"
> "1: ldrex %0, [%3]\n"

We have the same issue with the byte exclusives, so I think you need to
extend the guard you're adding to cover that case too (which is a bug
in current mainline).
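
Roughly along the lines of the (untested) sketch below, i.e. one guard
covering both the byte and the halfword exclusives, since LDREXB/LDREXH
are only available from ARMv6K onwards. The operands (ret, tmp, x, ptr)
are those already used by __xchg() in your patch; the exact interaction
with any existing fallback paths for the byte case in mainline would
need checking:

#if !defined(CONFIG_CPU_V6)	/* LDREXB/LDREXH need ARMv6K or later */
	case 1:
		asm volatile("@	__xchg1\n"
		"1:	ldrexb	%0, [%3]\n"
		"	strexb	%1, %2, [%3]\n"
		"	teq	%1, #0\n"
		"	bne	1b"
			: "=&r" (ret), "=&r" (tmp)
			: "r" (x), "r" (ptr)
			: "memory", "cc");
		break;
	case 2:
		asm volatile("@	__xchg2\n"
		"1:	ldrexh	%0, [%3]\n"
		"	strexh	%1, %2, [%3]\n"
		"	teq	%1, #0\n"
		"	bne	1b"
			: "=&r" (ret), "=&r" (tmp)
			: "r" (x), "r" (ptr)
			: "memory", "cc");
		break;
#endif
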
Will