Message-ID: <20180406132249.GA7071@andrea>
Date: Fri, 6 Apr 2018 15:22:49 +0200
From: Andrea Parri <andrea.parri@...rulasolutions.com>
To: Will Deacon <will.deacon@....com>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
peterz@...radead.org, mingo@...nel.org, boqun.feng@...il.com,
paulmck@...ux.vnet.ibm.com, catalin.marinas@....com
Subject: Re: [PATCH 00/10] kernel/locking: qspinlock improvements
On Thu, Apr 05, 2018 at 05:58:57PM +0100, Will Deacon wrote:
> Hi all,
>
> I've been kicking the tyres further on qspinlock and with this set of patches
> I'm happy with the performance and fairness properties. In particular, the
> locking algorithm now guarantees forward progress whereas the implementation
> in mainline can starve threads indefinitely in cmpxchg loops.
>
> Catalin has also implemented a model of this using TLA to prove that the
> lock is fair, although this doesn't take the memory model into account:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/
Nice! I'll dig into this formalization, but my guess is that our
model (and axiomatic models "à la herd" in general) is not well
suited to studying properties such as fairness and liveness.
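
To make the starvation scenario concrete, the mainline pattern Will
refers to is roughly of the shape below (a minimal C sketch only,
NOT the actual qspinlock slowpath; SKETCH_LOCKED_VAL is a made-up
stand-in for _Q_LOCKED_VAL):

#include <linux/atomic.h>

#define SKETCH_LOCKED_VAL	1	/* hypothetical "lock held" encoding */

static void cmpxchg_lock_sketch(atomic_t *lock)
{
	for (;;) {
		if (atomic_read(lock))		/* held: keep spinning */
			continue;
		/*
		 * Claim attempt: fails whenever the lock word changed
		 * between the read above and the cmpxchg(), so nothing
		 * prevents a given CPU from losing this race forever
		 * while other CPUs keep winning it.
		 */
		if (atomic_cmpxchg(lock, 0, SKETCH_LOCKED_VAL) == 0)
			return;			/* acquired */
	}
}

The series replaces loops of this shape with a single unconditional
atomic update per contender plus atomic_cond_read_acquire()-style
waiting, which is where the forward-progress guarantee comes from.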
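
Part of my skepticism is structural: herd-class tools enumerate a
finite set of candidate executions and evaluate a predicate over
their final states. That is a perfect fit for reachability/safety
questions, e.g. the classic store-buffering pattern written in LKMM
litmus syntax (test name mine):

C SB-example

{}

P0(int *x, int *y)
{
	int r0;

	WRITE_ONCE(*x, 1);
	smp_mb();
	r0 = READ_ONCE(*y);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)

herd7 decides whether the "exists" state is reachable in some
candidate execution; by contrast, a starving CPU is one that has
"not succeeded yet" in every finite prefix of the run, so starvation
has no finite witness and cannot even be stated in this framework.
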
Did you already think about this?
Andrea
>
> I'd still like to get more benchmark numbers and wider exposure before
> enabling this for arm64, but my current testing is looking very promising.
> This series, along with the arm64-specific patches, is available at:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=qspinlock
>
> Cheers,
>
> Will
>
> --->8
>
> Jason Low (1):
> locking/mcs: Use smp_cond_load_acquire() in mcs spin loop
>
> Will Deacon (9):
> locking/qspinlock: Don't spin on pending->locked transition in
> slowpath
> locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
> locking/qspinlock: Kill cmpxchg loop when claiming lock from head of
> queue
> locking/qspinlock: Use atomic_cond_read_acquire
> barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
> locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
> locking/qspinlock: Merge struct __qspinlock into struct qspinlock
> locking/qspinlock: Make queued_spin_unlock use smp_store_release
> locking/qspinlock: Elide back-to-back RELEASE operations with
> smp_wmb()
>
> arch/x86/include/asm/qspinlock.h          |  19 ++-
> arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
> include/asm-generic/barrier.h             |  27 ++++-
> include/asm-generic/qspinlock.h           |   2 +-
> include/asm-generic/qspinlock_types.h     |  32 ++++-
> include/linux/atomic.h                    |   2 +
> kernel/locking/mcs_spinlock.h             |  10 +-
> kernel/locking/qspinlock.c                | 191 ++++++++++--------------------
> kernel/locking/qspinlock_paravirt.h       |  34 ++----
> 9 files changed, 141 insertions(+), 179 deletions(-)
>
> --
> 2.1.4
>