Message-ID: <20140110064236.GV10038@linux.vnet.ibm.com>
Date:	Thu, 9 Jan 2014 22:42:36 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Waiman Long <Waiman.Long@...com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
	linux-arch@...r.kernel.org, x86@...nel.org,
	linux-kernel@...r.kernel.org,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michel Lespinasse <walken@...gle.com>,
	Andi Kleen <andi@...stfloor.org>,
	Rik van Riel <riel@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
	George Spelvin <linux@...izon.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	"Aswin Chandramouleeswaran\"" <aswin@...com>,
	Scott J Norton <scott.norton@...com>
Subject: Re: [PATCH v8 0/4] qrwlock: Introducing a queue read/write lock
 implementation

On Wed, Jan 08, 2014 at 11:59:32AM -0500, Waiman Long wrote:
> v7->v8:
>  - Use atomic_t functions (which are implemented in all
>    architectures) to modify reader counts.
>  - Use smp_load_acquire() & smp_store_release() for barriers.
>  - Further tuning in slowpath performance.

This version looks good to me.  You now have my Reviewed-by on all
the patches.

							Thanx, Paul

> v6->v7:
>  - Remove support for the unfair lock; only the fair qrwlock will be
>    provided.
>  - Move qrwlock.c to the kernel/locking directory.
> 
> v5->v6:
>  - Modify queue_read_can_lock() to avoid a false positive result.
>  - Move the two slowpath functions' performance tuning change from
>    patch 4 to patch 1.
>  - Add a new optional patch to use the new smp_store_release() function 
>    if that is merged.
> 
> v4->v5:
>  - Fix wrong definitions for QW_MASK_FAIR & QW_MASK_UNFAIR macros.
>  - Add an optional patch 4 which should only be applied after the
>    mcs_spinlock.h header file is merged.
> 
> v3->v4:
>  - Optimize the fast path for better cold-cache behavior and
>    performance.
>  - Remove some testing code.
>  - Make x86 use the queue rwlock with no user configuration.
> 
> v2->v3:
>  - Make read lock stealing the default and fair rwlock an option with
>    a different initializer.
>  - In queue_read_lock_slowpath(), check irq_count() and force spinning
>    and lock stealing in interrupt context.
>  - Unify the fair and classic read-side code paths, and make the
>    write side use cmpxchg with two different writer states (sketched
>    below). This slows down the write-lock fastpath to make the read
>    side more efficient, but it is still slightly faster than a
>    spinlock.
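
For illustration, a minimal userspace sketch of that two-writer-state
handshake, using C11 atomics rather than the kernel's primitives. All
sketch_*/SKETCH_* names are hypothetical stand-ins, not the patch's
actual code, and the fair wait queue is elided here:

#include <stdatomic.h>

/* Hypothetical lock-word layout: the low byte belongs to writers and
 * readers count in units of 0x100 above it. Initialize cnts to 0. */
#define SKETCH_QW_WAITING 0x01u   /* a writer is queued and pending */
#define SKETCH_QW_LOCKED  0xffu   /* a writer owns the lock */
#define SKETCH_QW_MASK    0xffu   /* either writer state */
#define SKETCH_QR_BIAS    0x100u  /* one reader */

struct sketch_qrwlock {
	atomic_uint cnts;         /* reader count plus writer byte */
};

static void sketch_write_lock_slowpath(struct sketch_qrwlock *l)
{
	unsigned int c;

	/* State 1: set WAITING so newly arriving readers divert to their
	 * slow path while readers already inside keep running; if another
	 * writer is present, the cmpxchg fails and we retry. */
	do {
		c = atomic_load(&l->cnts) & ~SKETCH_QW_MASK;
	} while (!atomic_compare_exchange_weak(&l->cnts, &c,
					       c | SKETCH_QW_WAITING));

	/* State 2: once the reader count has drained to zero, upgrade
	 * from WAITING to LOCKED and enter the write critical section. */
	do {
		c = SKETCH_QW_WAITING;
	} while (!atomic_compare_exchange_weak(&l->cnts, &c,
					       SKETCH_QW_LOCKED));
}

The extra state is where the trade-off in the item above comes from:
the writer pays for one more transition so that readers can use a
single unconditional add on their side.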
> 
> v1->v2:
>  - Improve lock fastpath performance.
>  - Optionally provide classic read/write lock behavior for backward
>    compatibility.
>  - Use xadd instead of cmpxchg in the fair-reader code path to make
>    it immune to reader contention (contrasted in the sketch after
>    this list).
>  - Run more performance testing.
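
The xadd point can be seen in a two-function contrast (hypothetical
helpers, C11 atomics): a cmpxchg loop must retry whenever any other
reader changes the count, while a single fetch-and-add always
completes:

#include <stdatomic.h>

/* cmpxchg-style reader entry: under heavy reader traffic the loop can
 * retry indefinitely, because every other arriving or departing reader
 * changes the count and invalidates the expected value. */
static void reader_enter_cmpxchg(atomic_uint *cnt)
{
	unsigned int c = atomic_load(cnt);

	while (!atomic_compare_exchange_weak(cnt, &c, c + 1))
		;	/* c is refreshed on each failed attempt */
}

/* xadd-style reader entry: a single fetch_add always completes, so it
 * is immune to contention from other readers. */
static void reader_enter_xadd(atomic_uint *cnt)
{
	atomic_fetch_add(cnt, 1);
}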
> 
> As mentioned in the LWN article http://lwn.net/Articles/364583/,
> the read/write lock suffers from an unfairness problem: a stream of
> incoming readers can block a waiting writer from getting the lock
> for a long time. A waiting reader or writer contending for a rwlock
> in local memory also has a higher chance of acquiring the lock than
> one on a remote node.
> 
> This patch set introduces a queue-based read/write lock implementation
> that can largely solve this unfairness problem.
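
Hedged sketch of the queuing idea, reusing the hypothetical SKETCH_*
constants from the earlier sketch; the test-and-set wait_lock below is
a crude stand-in for the fair MCS-style wait queue the series actually
uses (see patch 3):

#include <stdatomic.h>

/* Initialize cnts to 0 and wait_lock with ATOMIC_FLAG_INIT. */
struct sketch_qrwlock2 {
	atomic_uint cnts;      /* reader count plus writer byte */
	atomic_flag wait_lock; /* contended readers and writers line up here */
};

static void sketch_read_lock(struct sketch_qrwlock2 *l)
{
	/* Fast path: bump the count; if no writer byte is set, we are in. */
	if (!(atomic_fetch_add(&l->cnts, SKETCH_QR_BIAS) & SKETCH_QW_MASK))
		return;

	/* Slow path: back out and wait in line. Because contended readers
	 * and writers alike pass through wait_lock, a stream of newly
	 * arriving readers can no longer starve a queued writer. */
	atomic_fetch_sub(&l->cnts, SKETCH_QR_BIAS);
	while (atomic_flag_test_and_set(&l->wait_lock))
		;	/* take our place in line */
	atomic_fetch_add(&l->cnts, SKETCH_QR_BIAS);
	while (atomic_load(&l->cnts) & SKETCH_QW_MASK)
		;	/* wait for the current writer to finish */
	atomic_flag_clear(&l->wait_lock);	/* let the next waiter go */
}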
> 
> The read lock slowpath checks whether the reader is in interrupt
> context. If so, it forces lock spinning and stealing instead of
> waiting in the queue, so that the read lock is granted as soon as
> possible.
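
Continuing the same hypothetical sketch, that interrupt-context
carve-out could look like this; sketch_in_interrupt() stands in for
the kernel's irq_count()/in_interrupt() test:

extern int sketch_in_interrupt(void);	/* stand-in for in_interrupt() */

static void sketch_read_lock_slowpath(struct sketch_qrwlock2 *l)
{
	if (sketch_in_interrupt()) {
		/* Steal: keep the reader count the fast path has already
		 * added and spin only while a writer actively holds the
		 * lock. Parking in the queue could deadlock, because the
		 * interrupted task may itself hold this read lock. */
		while ((atomic_load(&l->cnts) & SKETCH_QW_MASK) ==
		       SKETCH_QW_LOCKED)
			;
		return;
	}

	/* Task context: back out and queue as in the sketch above. */
	atomic_fetch_sub(&l->cnts, SKETCH_QR_BIAS);
	while (atomic_flag_test_and_set(&l->wait_lock))
		;
	atomic_fetch_add(&l->cnts, SKETCH_QR_BIAS);
	while (atomic_load(&l->cnts) & SKETCH_QW_MASK)
		;
	atomic_flag_clear(&l->wait_lock);
}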
> 
> The queue write lock can also replace highly contended ticket
> spinlocks, provided the increase in lock size is not an issue.
> 
> This patch set currently provides queue read/write lock support on
> the x86 architecture only. Support for other architectures can be
> added later once the architecture-specific support infrastructure
> is in place and proper testing is done.
> 
> The optional patch 3 depends on the mcs_spinlock.h header file,
> which has not been merged yet, so it should only be applied after
> that header file is merged.
> 
> The optional patch 4 uses the new smp_store_release() and
> smp_load_acquire() functions, which are still under review and not
> yet merged.
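
For reference, the gist of patch 4 is to turn write_unlock() into a
release store of the writer byte; roughly (the exact patch may differ
in detail):

static inline void queue_write_unlock(struct qrwlock *lock)
{
	/* A release store orders every access in the write critical
	 * section before the unlock, with no full memory barrier. */
	smp_store_release((u8 *)&lock->cnts, 0);
}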
> 
> Waiman Long (4):
>   qrwlock: A queue read/write lock implementation
>   qrwlock x86: Enable x86 to use queue read/write lock
>   qrwlock: Use the mcs_spinlock helper functions for MCS queuing
>   qrwlock: Use smp_store_release() in write_unlock()
> 
>  arch/x86/Kconfig                      |    1 +
>  arch/x86/include/asm/spinlock.h       |    2 +
>  arch/x86/include/asm/spinlock_types.h |    4 +
>  include/asm-generic/qrwlock.h         |  203 +++++++++++++++++++++++++++++++++
>  kernel/Kconfig.locks                  |    7 +
>  kernel/locking/Makefile               |    1 +
>  kernel/locking/qrwlock.c              |  191 +++++++++++++++++++++++++++++++
>  7 files changed, 409 insertions(+), 0 deletions(-)
>  create mode 100644 include/asm-generic/qrwlock.h
>  create mode 100644 kernel/locking/qrwlock.c
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
