Message-ID: <20151120100935.GB17308@twins.programming.kicks-ass.net>
Date: Fri, 20 Nov 2015 11:09:35 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Will Deacon <will.deacon@....com>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Boqun Feng <boqun.feng@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jonathan Corbet <corbet@....net>,
Michal Hocko <mhocko@...nel.org>,
David Howells <dhowells@...hat.com>,
Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
On Thu, Nov 19, 2015 at 06:01:52PM +0000, Will Deacon wrote:
> For completeness, here's what I've currently got. I've failed to measure
> any performance impact on my 8-core systems, but that's not surprising.
> +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> +{
> + unsigned int tmp;
> + arch_spinlock_t lockval;
>
> + asm volatile(
> +" sevl\n"
> +"1: wfe\n"
Using WFE here should lower the cacheline bouncing pressure a bit, I
imagine. Sure, we still pull the line over into S(hared) after every
store, but we don't keep banging on it and making the unlocker's
initial e(X)clusive grab hard.
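
Something like this compile-only sketch is what I have in mind; the
"flag" variable is made up here and just stands in for the lock word:

/* Sketch only (arm64, GCC/Clang); "flag" stands in for the lock word. */
static int flag;

/*
 * Plain polling: every iteration issues a fresh load, so the waiting
 * CPU keeps yanking the line into S(hared) and the unlocker has to
 * win it back e(X)clusive for each store.
 */
static void wait_polling(void)
{
	while (!__atomic_load_n(&flag, __ATOMIC_ACQUIRE))
		;
}

/*
 * WFE wait: LDAXR arms the exclusive monitor and WFE stalls the CPU
 * until a store clears the monitor (or an event is signalled), so we
 * only touch the line once per update instead of continuously.
 */
static void wait_wfe(void)
{
	int val;

	asm volatile(
	"	sevl\n"			/* first wfe falls straight through */
	"1:	wfe\n"
	"	ldaxr	%w0, %1\n"	/* load-acquire; arms the monitor */
	"	cbz	%w0, 1b\n"
	: "=&r" (val)
	: "Q" (flag)
	: "memory");
}
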
> +"2: ldaxr %w0, %2\n"
> +" eor %w1, %w0, %w0, ror #16\n"
> +" cbnz %w1, 1b\n"
> + ARM64_LSE_ATOMIC_INSN(
> + /* LL/SC */
> +" stxr %w1, %w0, %2\n"
> + /* Serialise against any concurrent lockers */
> +" cbnz %w1, 2b\n",
> + /* LSE atomics */
> +" nop\n"
> +" nop\n")
I find these ARM64_LSE macro thingies aren't always easy to read; it's
fairly easy to overlook the ',' separating the v8 and v8.1 parts, esp.
if there are further interleaved comments like in the above.
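
For reference, the macro is roughly this (quoting
arch/arm64/include/asm/lse.h from memory, modulo the exact config
conditions); both arguments are asm strings, runtime-patched via the
alternatives framework when the CPU has LSE atomics:

#ifdef CONFIG_ARM64_LSE_ATOMICS
#define ARM64_LSE_ATOMIC_INSN(llsc, lse)			\
	ALTERNATIVE(llsc, lse, ARM64_HAS_LSE_ATOMICS)
#else
#define ARM64_LSE_ATOMIC_INSN(llsc, lse)	llsc
#endif

The two nops in the LSE branch above exist only to pad the alternative
to the same length as the LL/SC sequence, which is easy to miss at the
call site.
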
> + : "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
> + :
> + : "memory");
> +}
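
For anyone following along, the eor with the ror #16 is the
ticket-lock "is it free" test; in C it is roughly this (16-bit field
order glossed over, it depends on endianness):

/*
 * The arm64 ticket lock packs two 16-bit halves (owner, next) into
 * one 32-bit word; the lock is free exactly when they are equal.
 */
static inline int ticket_is_unlocked(unsigned int lockval)
{
	unsigned int owner = lockval & 0xffff;
	unsigned int next  = lockval >> 16;

	return owner == next;	/* eor %w1, %w0, %w0, ror #16 -> 0 */
}
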