Message-Id: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
Date:   Thu,  5 Apr 2018 17:58:57 +0100
From:   Will Deacon <will.deacon@....com>
To:     linux-kernel@...r.kernel.org
Cc:     linux-arm-kernel@...ts.infradead.org, peterz@...radead.org,
        mingo@...nel.org, boqun.feng@...il.com, paulmck@...ux.vnet.ibm.com,
        catalin.marinas@....com, Will Deacon <will.deacon@....com>
Subject: [PATCH 00/10] kernel/locking: qspinlock improvements

Hi all,

I've been kicking the tyres further on qspinlock, and with this set of
patches I'm happy with the performance and fairness properties. In
particular, the locking algorithm now guarantees forward progress, whereas
the implementation in mainline can starve threads indefinitely in cmpxchg
loops.
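
To make the starvation point concrete, here's a minimal userspace C11
sketch of the problematic pattern. It is an illustration only, not the
mainline slowpath, and the names (naive_lock, naive_lock_acquire) are made
up for the example:

#include <stdatomic.h>

/*
 * Illustration only (not the kernel code): a spinlock acquired via an
 * unbounded compare-exchange retry loop.  Nothing arbitrates which
 * waiter wins each retry, so under contention a CPU can lose the race
 * on every attempt, i.e. there is no forward-progress guarantee.  A
 * queued handoff instead grants the lock to the head of the wait queue.
 */
struct naive_lock {
	atomic_uint val;	/* 0 == unlocked, 1 == locked */
};

static void naive_lock_acquire(struct naive_lock *lock)
{
	for (;;) {
		unsigned int old = 0;

		if (atomic_compare_exchange_weak_explicit(&lock->val, &old, 1,
							  memory_order_acquire,
							  memory_order_relaxed))
			return;		/* won this round of the race */

		/* Lost the race: wait until the lock looks free, then retry. */
		while (atomic_load_explicit(&lock->val, memory_order_relaxed))
			;
	}
}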

Catalin has also implemented a TLA+ model of the algorithm to prove that
the lock is fair, although the model doesn't take the memory model into
account:

https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

I'd still like to get more benchmark numbers and wider exposure before
enabling this for arm64, but my current testing is looking very promising.
This series, along with the arm64-specific patches, is available at:

https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=qspinlock

Cheers,

Will

--->8

Jason Low (1):
  locking/mcs: Use smp_cond_load_acquire() in mcs spin loop

Will Deacon (9):
  locking/qspinlock: Don't spin on pending->locked transition in
    slowpath
  locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
  locking/qspinlock: Kill cmpxchg loop when claiming lock from head of
    queue
  locking/qspinlock: Use atomic_cond_read_acquire
  barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
  locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
  locking/qspinlock: Merge struct __qspinlock into struct qspinlock
  locking/qspinlock: Make queued_spin_unlock use smp_store_release
  locking/qspinlock: Elide back-to-back RELEASE operations with
    smp_wmb()
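
Several of the titles above replace open-coded polling loops with
smp_cond_load_acquire()/smp_cond_load_relaxed(), which wait for a condition
on a loaded value with and without acquire ordering. As a rough userspace
analogy only (the in-tree macros take an arbitrary condition and let
architectures plug in smarter waiting, WFE on arm64 for instance; the
helper below is made up for illustration), the acquire flavour boils down
to something like:

#include <stdatomic.h>

/*
 * Userspace analogy of the conditional-load wait pattern: poll a
 * location with cheap relaxed loads until it becomes non-zero, then
 * provide acquire ordering so the caller's subsequent accesses cannot
 * be reordered before the wait completes.  The relaxed variant simply
 * omits the final acquire fence.
 */
static unsigned int cond_load_acquire(const atomic_uint *ptr)
{
	unsigned int val;

	while (!(val = atomic_load_explicit(ptr, memory_order_relaxed)))
		;			/* spin until *ptr != 0 */

	atomic_thread_fence(memory_order_acquire);
	return val;
}

Per the patch titles, this is the shape of the waits used in the MCS spin
loop and (in its relaxed form) when waiting for the next queue node.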

 arch/x86/include/asm/qspinlock.h          |  19 ++-
 arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
 include/asm-generic/barrier.h             |  27 ++++-
 include/asm-generic/qspinlock.h           |   2 +-
 include/asm-generic/qspinlock_types.h     |  32 ++++-
 include/linux/atomic.h                    |   2 +
 kernel/locking/mcs_spinlock.h             |  10 +-
 kernel/locking/qspinlock.c                | 191 ++++++++++--------------------
 kernel/locking/qspinlock_paravirt.h       |  34 ++----
 9 files changed, 141 insertions(+), 179 deletions(-)

-- 
2.1.4
