Date: Tue, 19 Jun 2018 11:45:00 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Hellstrom <thellstrom@...are.com>
Cc: dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
	linux-graphics-maintainer@...are.com, pv-drivers@...are.com,
	Ingo Molnar <mingo@...hat.com>, Jonathan Corbet <corbet@....net>,
	Gustavo Padovan <gustavo@...ovan.org>,
	Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
	Sean Paul <seanpaul@...omium.org>, David Airlie <airlied@...ux.ie>,
	Davidlohr Bueso <dave@...olabs.net>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Josh Triplett <josh@...htriplett.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Kate Stewart <kstewart@...uxfoundation.org>,
	Philippe Ombredanne <pombredanne@...b.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	linux-doc@...r.kernel.org, linux-media@...r.kernel.org,
	linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH v4 2/3] locking: Implement an algorithm choice for Wound-Wait mutexes

On Tue, Jun 19, 2018 at 10:24:44AM +0200, Thomas Hellstrom wrote:
> The current Wound-Wait mutex algorithm is actually not Wound-Wait but
> Wait-Die. Implement Wound-Wait as well, as a per-ww-class choice.
> Wound-Wait is, contrary to Wait-Die, a preemptive algorithm and is known
> to generate fewer backoffs. Testing reveals that this is true when the
> number of simultaneously contending transactions is small. As the number
> of simultaneously contending threads increases, Wound-Wait becomes
> inferior to Wait-Die in terms of elapsed time, possibly due to the
> larger number of locks held by sleeping transactions.
>
> Update documentation and callers.
>
> Timings using git://people.freedesktop.org/~thomash/ww_mutex_test
> tag patch-18-06-15
>
> Each thread runs 100000 batches of lock / unlock of 800 ww mutexes
> randomly chosen out of 100000. Four core Intel x86_64:
>
> Algorithm    #threads   Rollbacks  time
> Wound-Wait   4          ~100       ~17s.
> Wait-Die     4          ~150000    ~19s.
> Wound-Wait   16         ~360000    ~109s.
> Wait-Die     16         ~450000    ~82s.
>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Jonathan Corbet <corbet@....net>
> Cc: Gustavo Padovan <gustavo@...ovan.org>
> Cc: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
> Cc: Sean Paul <seanpaul@...omium.org>
> Cc: David Airlie <airlied@...ux.ie>
> Cc: Davidlohr Bueso <dave@...olabs.net>
> Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> Cc: Josh Triplett <josh@...htriplett.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Kate Stewart <kstewart@...uxfoundation.org>
> Cc: Philippe Ombredanne <pombredanne@...b.com>
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Cc: linux-doc@...r.kernel.org
> Cc: linux-media@...r.kernel.org
> Cc: linaro-mm-sig@...ts.linaro.org
> Co-authored-by: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Thomas Hellstrom <thellstrom@...are.com>
>
> ---
>  Documentation/locking/ww-mutex-design.txt |  57 +++++++++--
>  drivers/dma-buf/reservation.c             |   2 +-
>  drivers/gpu/drm/drm_modeset_lock.c        |   2 +-
>  include/linux/ww_mutex.h                  |  17 ++-
>  kernel/locking/locktorture.c              |   2 +-
>  kernel/locking/mutex.c                    | 165 +++++++++++++++++++++++++++---
>  kernel/locking/test-ww_mutex.c            |   2 +-
>  lib/locking-selftest.c                    |   2 +-
>  8 files changed, 213 insertions(+), 36 deletions(-)

Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>