Message-ID: <20150803112930.GI25159@twins.programming.kicks-ass.net>
Date: Mon, 3 Aug 2015 13:29:30 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vineet Gupta <Vineet.Gupta1@...opsys.com>
Cc: lkml <linux-kernel@...r.kernel.org>, arc-linux-dev@...opsys.com
Subject: Re: [PATCH 3/6] ARC: LLOCK/SCOND based spin_lock
On Mon, Aug 03, 2015 at 03:33:05PM +0530, Vineet Gupta wrote:
> EX causes the cache line to be in Exclusive state and if done
> concurrently by multiple cores, it keeps bouncing around.
> In LLOCK/SCOND regime, spinning only involves LLOCK which doesn't
> change the line state hence better solution.
Maybe write like:
"The EXchange instruction forces the cacheline into exclusive state
(because of the modify) and concurrent loops with it will bounce the
cacheline between the cores.
Instead use LLOCK/SCOND to form the test-and-set lock implementation
since LLOCK can keep the line in shared state."
Because it wasn't clear to me what EX meant, and surely a LOAD must
change the cacheline state if it was previously exclusive on another
core. It's just that shared is a whole lot better to spin on than
exclusive.
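To illustrate, roughly the test-and-test-and-set shape, as a plain C
sketch with GCC builtins rather than ARC asm (helper name invented here,
and it assumes the usual 0 == unlocked / 1 == locked convention, which
need not match ARC's encoding):

	static inline void tts_lock(unsigned int *slock)
	{
		for (;;) {
			/* read-only spin; the line can stay in shared state */
			while (__atomic_load_n(slock, __ATOMIC_RELAXED))
				;	/* cpu_relax() in real kernel code */

			/* only now attempt the exclusive RMW */
			if (!__atomic_exchange_n(slock, 1, __ATOMIC_ACQUIRE))
				return;	/* old value was 0, lock taken */
		}
	}
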
Also, since you're using LL/SC now, a slightly more complex lock is
trivial to implement; might I suggest you look at implementing ticket
locks next?
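For reference, this is roughly the shape of a ticket lock, again as a
plain C sketch with GCC builtins instead of LLOCK/SCOND (type and field
names invented for illustration):

	struct ticket_lock {
		unsigned short next;	/* next ticket to hand out */
		unsigned short owner;	/* ticket currently being served */
	};

	static inline void ticket_lock(struct ticket_lock *lock)
	{
		/* one RMW to take a ticket; on ARC this would be LLOCK/SCOND */
		unsigned short me = __atomic_fetch_add(&lock->next, 1,
						       __ATOMIC_ACQUIRE);

		/* spin with plain loads until it is our turn */
		while (__atomic_load_n(&lock->owner, __ATOMIC_ACQUIRE) != me)
			;	/* cpu_relax() in real kernel code */
	}

	static inline void ticket_unlock(struct ticket_lock *lock)
	{
		/* hand the lock to the next waiter */
		__atomic_store_n(&lock->owner, lock->owner + 1,
				 __ATOMIC_RELEASE);
	}

Waiters spin read-only and acquire in FIFO order, which also addresses
the fairness problem of a plain test-and-set lock.
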
> Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Vineet Gupta <vgupta@...opsys.com>
> ---
> arch/arc/include/asm/spinlock.h | 76 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 69 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
> index e1651df6a93d..4f6c90a0a68a 100644
> --- a/arch/arc/include/asm/spinlock.h
> +++ b/arch/arc/include/asm/spinlock.h
> @@ -18,9 +18,68 @@
> #define arch_spin_unlock_wait(x) \
> do { while (arch_spin_is_locked(x)) cpu_relax(); } while (0)
>
> +#ifdef CONFIG_ARC_HAS_LLSC
> +
> +static inline void arch_spin_lock(arch_spinlock_t *lock)
> +{
> + unsigned int val;
> +
> + smp_mb();
I'm still puzzled by your need of this one ...
> +
> + __asm__ __volatile__(
> + "1: llock %[val], [%[slock]] \n"
> + " breq %[val], %[LOCKED], 1b \n" /* spin while LOCKED */
> + " scond %[LOCKED], [%[slock]] \n" /* acquire */
> + " bnz 1b \n"
> + " \n"
> + : [val] "=&r" (val)
> + : [slock] "r" (&(lock->slock)),
> + [LOCKED] "r" (__ARCH_SPIN_LOCK_LOCKED__)
> + : "memory", "cc");
> +
> + smp_mb();
> +}
> +
> +/* 1 - lock taken successfully */
> +static inline int arch_spin_trylock(arch_spinlock_t *lock)
> +{
> + unsigned int val, got_it = 0;
> +
> + smp_mb();
Idem.
> +
> + __asm__ __volatile__(
> + "1: llock %[val], [%[slock]] \n"
> + " breq %[val], %[LOCKED], 4f \n" /* already LOCKED, just bail */
> + " scond %[LOCKED], [%[slock]] \n" /* acquire */
> + " bnz 1b \n"
> + " mov %[got_it], 1 \n"
> + "4: \n"
> + " \n"
> + : [val] "=&r" (val),
> + [got_it] "+&r" (got_it)
> + : [slock] "r" (&(lock->slock)),
> + [LOCKED] "r" (__ARCH_SPIN_LOCK_LOCKED__)
> + : "memory", "cc");
> +
> + smp_mb();
> +
> + return got_it;
> +}
> +
> +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> +{
> + smp_mb();
> +
> + lock->slock = __ARCH_SPIN_LOCK_UNLOCKED__;
> +
> + smp_mb();
Idem.
> +}
> +
> +#else /* !CONFIG_ARC_HAS_LLSC */