Date:	Fri, 4 Mar 2016 09:12:31 +0800
From:	Jianyu Zhan <nasa4836@...il.com>
To:	Darren Hart <dvhart@...radead.org>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>, dave@...olabs.net,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	dvhart@...ux.intel.com,
	Christian Borntraeger <borntraeger@...ibm.com>,
	Fengguang Wu <fengguang.wu@...el.com>, bigeasy@...utronix.de
Subject: Re: [PATCH] futex: replace bare barrier() with more lightweight READ_ONCE()

On Fri, Mar 4, 2016 at 1:05 AM, Darren Hart <dvhart@...radead.org> wrote:
> I thought I provided a corrected comment block.... maybe I didn't. We have been
> working on improving the futex documentation, so we're paying close attention to
> terminology as well as grammar. This one needs a couple minor tweaks. I suggest:
>
> /*
>  * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
>  * optimizing lock_ptr out of the logic below.
>  */
>
> The bit about q->lock_ptr possibly changing is already covered by the large
> comment block below the spin_lock(lock_ptr) call.

The large comment block explains why the retry logic is required.
To satisfy that requirement, READ_ONCE() is needed to keep the compiler
from doing double loads of q->lock_ptr and optimizing the local copy away.

So I think the comment above should also explain this tricky part.
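
To make the concern concrete, the relevant pattern in unqueue_me() looks
roughly like this (simplified from memory; the large comment and error
handling are elided):

retry:
	lock_ptr = READ_ONCE(q->lock_ptr);
	if (lock_ptr != NULL) {
		spin_lock(lock_ptr);
		/* q->lock_ptr may have changed before we took the lock */
		if (unlikely(lock_ptr != q->lock_ptr)) {
			spin_unlock(lock_ptr);
			goto retry;
		}
		...
	}

Without READ_ONCE() (or the old barrier()), the compiler may drop the local
lock_ptr and re-read q->lock_ptr at each use, so spin_lock() and the recheck
could see values from two different loads.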

> /* Use READ_ONCE to forbid the compiler from reloading q->lock_ptr  in spin_lock()  */

As for preventing the compiler from optimizing lock_ptr out of the retry
code block, I have consulted Paul McKenney, and he suggests that one more
READ_ONCE() should be added here:

	if (unlikely(lock_ptr != READ_ONCE(q->lock_ptr))) {   <--- READ_ONCE() added here
		spin_unlock(lock_ptr);
		goto retry;
	}

And I think these are two separate problems, so they should be split into
two patches?
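
Concretely (sketching the surrounding context from memory), the two changes
would be:

(1) the current patch, at the top of the retry loop in unqueue_me():

-	lock_ptr = q->lock_ptr;
-	barrier();
+	lock_ptr = READ_ONCE(q->lock_ptr);

(2) Paul's suggestion, in the recheck after spin_lock(lock_ptr):

-	if (unlikely(lock_ptr != q->lock_ptr)) {
+	if (unlikely(lock_ptr != READ_ONCE(q->lock_ptr))) {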



Regards,
Jianyu Zhan
