[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Fri, 4 Mar 2016 13:05:24 -0800
From:	Darren Hart <dvhart@...radead.org>
To:	Jianyu Zhan <nasa4836@...il.com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>, dave@...olabs.net,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Rasmus Villemoes <linux@...musvillemoes.dk>,
	dvhart@...ux.intel.com,
	Christian Borntraeger <borntraeger@...ibm.com>,
	Fengguang Wu <fengguang.wu@...el.com>, bigeasy@...utronix.de
Subject: Re: [PATCH] futex: replace bare barrier() with more lightweight
 READ_ONCE()

On Fri, Mar 04, 2016 at 09:12:31AM +0800, Jianyu Zhan wrote:
> On Fri, Mar 4, 2016 at 1:05 AM, Darren Hart <dvhart@...radead.org> wrote:
> > I thought I provided a corrected comment block.... maybe I didn't. We have been
> > working on improving the futex documentation, so we're paying close attention to
> > terminology as well as grammar. This one needs a couple minor tweaks. I suggest:
> >
> > /*
> >  * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
> >  * optimizing lock_ptr out of the logic below.
> >  */
> >
> > The bit about q->lock_ptr possibly changing is already covered by the large
> > comment block below the spin_lock(lock_ptr) call.
> 
> The large comment block explains why the retry logic is required.
> To meet that requirement, READ_ONCE is needed to prevent the compiler
> from optimizing the single read into double loads.
> 
> So I think the comment above should explain this tricky part.

Fair point. Consider:


/*
 * q->lock_ptr can change between this read and the following spin_lock.
 * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
 * optimizing lock_ptr out of the logic below.
 */
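
For reference, a rough, compilable userspace sketch of where that comment
and the READ_ONCE land in the unqueue_me()-style retry path. READ_ONCE is
approximated with a volatile cast, the lock is a pthread spinlock, and the
names are illustrative stand-ins, not the actual kernel/futex.c code.

#include <stddef.h>
#include <pthread.h>

/* Userspace stand-in for the kernel's READ_ONCE(): force a single load. */
#define READ_ONCE(x)    (*(volatile __typeof__(x) *)&(x))

/* Illustrative stand-in for struct futex_q. */
struct futex_q_like {
        pthread_spinlock_t *lock_ptr;   /* may be changed by another task */
};

static void unqueue_like(struct futex_q_like *q)
{
        pthread_spinlock_t *lock_ptr;

retry:
        /*
         * q->lock_ptr can change between this read and the following spin_lock.
         * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
         * optimizing lock_ptr out of the logic below.
         */
        lock_ptr = READ_ONCE(q->lock_ptr);
        if (lock_ptr != NULL) {
                pthread_spin_lock(lock_ptr);
                /*
                 * Recheck under the lock; the second READ_ONCE discussed
                 * further down in this thread would annotate this load too.
                 */
                if (lock_ptr != q->lock_ptr) {
                        pthread_spin_unlock(lock_ptr);
                        goto retry;
                }
                /* ... unqueue under the correct lock ... */
                pthread_spin_unlock(lock_ptr);
        }
}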

> 
> > /* Use READ_ONCE to forbid the compiler from reloading q->lock_ptr  in spin_lock()  */
> 
> And as for preventing the compiler from optimizing lock_ptr out of the
> retry code block, I have consulted Paul McKenney; he suggests one more
> READ_ONCE should be added here:

Let's keep this discussion together so we have a record of the
justification.

+Paul McKenney

Paul, my understanding was that spin_lock was a CPU memory barrier,
which in turn is an implicit compiler barrier (aka barrier()), of which
READ_ONCE is described as a weaker form. Reviewing this, I realize the
scope of barrier() wasn't clear to me. It seems that while barrier()
ensures ordering, it does not offer the same guarantee against
reloading that READ_ONCE does. So READ_ONCE is not strictly a weaker
form of barrier(), as I had gathered from a spotty reading of
memory-barriers.txt; it also offers guarantees about memory references
that barrier() does not.

Correct?
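
To make the distinction concrete, here is a rough, compilable sketch of
the reload hazard in isolation (READ_ONCE approximated as a volatile
cast, shared_ptr purely illustrative): without the annotation the
compiler is free to re-issue the load for each use of the local copy,
while READ_ONCE pins that access to exactly one load.

#define READ_ONCE(x)    (*(volatile __typeof__(x) *)&(x))

extern int *shared_ptr;         /* updated concurrently by another thread */

int plain_read(void)
{
        int *p = shared_ptr;    /* compiler may re-load shared_ptr for each use of p */
        return p ? *p : 0;      /* the NULL check and the dereference can then disagree */
}

int once_read(void)
{
        int *p = READ_ONCE(shared_ptr); /* exactly one load; p is stable below */
        return p ? *p : 0;
}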

> 
> if (unlikely(lock_ptr != READ_ONCE(q->lock_ptr))) {   <------------------------------
>         spin_unlock(lock_ptr);
>         goto retry;
> }
> 
> And I think these are two problems, which should be separated into two patches?

Yes (pending results of the conversation above).

-- 
Darren Hart
Intel Open Source Technology Center
