Message-ID: <20130718074204.GA22623@gmail.com>
Date:	Thu, 18 Jul 2013 09:42:04 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Waiman Long <waiman.long@...com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
	linux-arch@...r.kernel.org, x86@...nel.org,
	linux-kernel@...r.kernel.org,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Richard Weinberger <richard@....at>,
	Catalin Marinas <catalin.marinas@....com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Matt Fleming <matt.fleming@...el.com>,
	Herbert Xu <herbert@...dor.apana.org.au>,
	Akinobu Mita <akinobu.mita@...il.com>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Michel Lespinasse <walken@...gle.com>,
	Andi Kleen <andi@...stfloor.org>,
	Rik van Riel <riel@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Chandramouleeswaran, Aswin" <aswin@...com>,
	"Norton, Scott J" <scott.norton@...com>
Subject: Re: [PATCH RFC 1/2] qrwlock: A queue read/write lock implementation


* Waiman Long <waiman.long@...com> wrote:

> >
> >>+ *    stealing the lock if they come at the right moment, the granting
> >>+ *    of the lock is mostly in FIFO order.
> >>+ * 2. It is faster in high contention situations.
> >
> > Again, why is it faster?
> 
> The current rwlock implementation suffers from a thundering herd
> problem. When many readers are waiting for a lock held by a writer,
> they will all jump in more or less at the same time when the writer
> releases the lock. That is not the case with qrwlock. It has been shown
> in many cases that avoiding this thundering herd problem can lead to
> better performance.

Btw., it's possible to further optimize this "writer releases the lock to 
multiple spinning readers" thundering herd scenario in the classic 
read_lock() case, without changing the queueing model.

Right now the read_lock() fast path is a single atomic instruction. When 
a writer releases the lock it becomes available to all readers, and each 
reader will execute a LOCK DEC instruction which will succeed.
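
In C-like terms, the biased-count scheme behind that fastpath can be
sketched as follows. This is an illustrative model only, not the kernel's
actual types: model_rwlock_t and model_read_lock() are made-up names,
while RW_LOCK_BIAS mirrors the x86 bias constant. The count starts at
the bias; a writer holding the lock drives it to zero or below:

#include <stdatomic.h>

#define RW_LOCK_BIAS 0x01000000	/* count > 0: unlocked or reader-held */

typedef struct {
	atomic_int count;	/* initialized to RW_LOCK_BIAS */
} model_rwlock_t;

static void model_read_lock(model_rwlock_t *lock)
{
	/* fastpath: a single LOCK DEC on the shared cache line */
	while (atomic_fetch_sub(&lock->count, 1) <= 0) {
		/* a writer holds the lock: undo our decrement ... */
		atomic_fetch_add(&lock->count, 1);
		/* ... and spin until the count goes positive again */
		while (atomic_load(&lock->count) <= 0)
			;	/* cpu_relax() in the kernel */
	}
}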

This is the relevant code in arch/x86/lib/rwlock.S [edited for 
readability]:

__read_lock_failed():

        # The fastpath's LOCK DEC went negative: a writer holds the lock.
0:      LOCK_PREFIX
        READ_LOCK_SIZE(inc) (%__lock_ptr)               # undo our decrement

1:      rep; nop                                        # pause while spinning
        READ_LOCK_SIZE(cmp) $1, (%__lock_ptr)
        js      1b                                      # count <= 0: writer still holds it

        LOCK_PREFIX READ_LOCK_SIZE(dec) (%__lock_ptr)   # retry the fastpath
        js      0b                                      # lost a race: undo and respin

        ret

This is where we could optimize: instead of signalling to each reader that 
it's fine to decrease the count, and letting dozens of readers do that on 
the same cache line, which then ping-pongs around the NUMA cross-connect, 
touching every other CPU as they execute the LOCK DEC instruction, we 
could let the _writer_ modify the count on unlock, in essence, to the 
exact value that readers expect.

Since read_lock() can never abort, this should be relatively 
straightforward: the INC above could be left out, and the writer side 
needs to detect that there are no other writers waiting, in which case it 
can set the count to the 'reader locked' value - which the readers will 
detect without modifying the cache line:

__read_lock_failed():

        # Proposed variant: our fastpath decrement stays in place, so we
        # only need to wait, read-only, for the count to turn positive.
0:      rep; nop                                        # pause while spinning
        READ_LOCK_SIZE(cmp) $1, (%__lock_ptr)
        js      0b                                      # count <= 0: writer still holds it

        ret

(Unless I'm missing something that is.)
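
In the same C model, the proposed pairing might look as follows - a 
speculative sketch only, which (per the caveat above) assumes the writer 
has already detected that no other writers are waiting, and that the 
reader's failed fastpath decrement is now left in place instead of being 
undone:

static void model_read_lock_failed(model_rwlock_t *lock)
{
	/*
	 * Our fastpath LOCK DEC already registered us in the count, so
	 * just wait, read-only, for the writer's unlock to make the
	 * count positive. No further write to the cache line.
	 */
	while (atomic_load(&lock->count) <= 0)
		;	/* cpu_relax() */
}

static void model_write_unlock(model_rwlock_t *lock)
{
	/*
	 * Adding the bias back releases the write lock and, in a single
	 * atomic update, leaves the count at exactly "bias minus the
	 * number of spinning readers" - the 'reader locked' value those
	 * readers expect - so they all proceed without another atomic
	 * RMW on the line.
	 */
	atomic_fetch_add(&lock->count, RW_LOCK_BIAS);
}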

That way the current write_unlock() followed by a 'thundering herd' of 
__read_lock_failed() atomic accesses is transformed into an efficient 
read-only broadcast of information, with only a single update to the 
cache line: the writer-updated cache line propagates in parallel to every 
CPU and is cached there.

On typical hardware this will be broadcast to all CPUs as part of regular 
MESI invalidation bus traffic.

Reader unlock will still have to modify the cache line, so rwlocks will 
still have a fundamental scalability limit even in the read-only use case.
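
In the C model above that remains visible as an atomic RMW on the shared 
count (again just a sketch, not the kernel's code):

static void model_read_unlock(model_rwlock_t *lock)
{
	/* still a LOCK INC: every reader dirties the shared cache line */
	atomic_fetch_add(&lock->count, 1);
}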

Thanks,

	Ingo