Date:	Sat, 5 Dec 2015 00:43:37 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Waiman Long <waiman.long@....com>,
	Will Deacon <will.deacon@....com>,
	Ingo Molnar <mingo@...nel.org>,
	Oleg Nesterov <oleg@...hat.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Boqun Feng <boqun.feng@...il.com>,
	Jonathan Corbet <corbet@....net>,
	Michal Hocko <mhocko@...nel.org>,
	David Howells <dhowells@...hat.com>,
	Paul Turner <pjt@...gle.com>
Subject: Re: [PATCH 3/4] locking: Introduce smp_cond_acquire()

On Fri, Dec 04, 2015 at 02:05:49PM -0800, Linus Torvalds wrote:
> Of course, I suspect we should not use READ_ONCE(), but some
> architecture-overridable version that just defaults to READ_ONCE().
> Same goes for that "smp_rmb()". Because maybe some architectures will
> just prefer an explicit acquire, and I suspect we do *not* want
> architectures having to recreate and override that crazy loop.
> 
> How much does this all actually end up mattering, btw?

Not sure, I'll have to let Will quantify that. But the whole reason
we're having this discussion is that ARM64 has a MONITOR+MWAIT-like
construct that they'd like to use to avoid the spinning.

Of course, in order to use that, they _have_ to override the crazy loop.
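
For reference, the "crazy loop" in question is the generic
spin-and-barrier fallback, something like this sketch (roughly what the
patch does, not necessarily its exact text):

	#define smp_cond_acquire(cond)	do {		\
		while (!(cond))				\
			cpu_relax();			\
		smp_rmb(); /* ctrl + rmb := acquire */	\
	} while (0)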

Now, Will and I spoke earlier today, and the version proposed by me (and
you, since that is roughly similar) will indeed work for them in that it
would allow them to rewrite the thing something like:


	typeof(*ptr) VAL;
	for (;;) {
		VAL = READ_ONCE(*ptr);
		if (expr)			/* condition; may refer to VAL */
			break;
		cmp_and_wait(ptr, VAL);		/* wait for *ptr to change from VAL */
	}
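
A caller would then use it along the lines of (illustrative only; the
qspinlock names here are assumed, not part of this patch):

	/* wait until the lock word's locked+pending bits clear */
	smp_cond_acquire(&lock->val.counter,
			 !(VAL & _Q_LOCKED_PENDING_MASK));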


Where their cmp_and_wait(ptr, val) looks a little like:

	asm volatile(
		"	ldxr	%w0, %1		\n"	/* load-exclusive, arms the monitor */
		"	sub	%w0, %w0, %2	\n"	/* compare with the value we saw */
		"	cbnz	%w0, 1f		\n"	/* changed? then don't wait */
		"	wfe			\n"	/* wait-for-event */
		"1:"

		: "=&r" (tmp)
		: "Q" (*ptr), "r" (val)
	);

(excuse my poor ARM asm foo)

Which sets up a load-exclusive monitor, checks whether the loaded value
matches what we previously saw, and if so, does a wait-for-event.
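
In pseudo-C, that amounts to (illustrative; load_exclusive() and wfe()
are stand-ins, not real kernel helpers):

	tmp = load_exclusive(ptr);	/* LDXR: load and arm the monitor */
	if (tmp == val)			/* value unchanged since we looked */
		wfe();			/* WFE: sleep until the monitor fires */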

WFE will wake on any event that would've also invalidated a subsequent
stxr or store-exclusive.

ARM64 can also, of course, choose to use a load-acquire instead of the
READ_ONCE(), or still issue the smp_rmb(); dunno what is best for them.
The load-acquire would (potentially) be issued multiple times, vs. the
rmb only once. I'll let Will sort that out.
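
Spelled out, the two options look roughly like (sketch):

	/* acquire semantics on every iteration */
	for (;;) {
		VAL = smp_load_acquire(ptr);
		if (expr)
			break;
		cmp_and_wait(ptr, VAL);
	}

versus:

	/* relaxed loads, one barrier once the condition holds */
	for (;;) {
		VAL = READ_ONCE(*ptr);
		if (expr)
			break;
		cmp_and_wait(ptr, VAL);
	}
	smp_rmb();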


In any case, WFE both reduces power consumption and lowers cacheline
pressure, i.e. nobody keeps trying to pull the line into shared state
all the time while you're trying to get a store done.
