Message-Id: <20170413173951.GM3956@linux.vnet.ibm.com>
Date:   Thu, 13 Apr 2017 10:39:51 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
        oleg@...hat.com, bobby.prani@...il.com, dvyukov@...gle.com,
        will.deacon@....com
Subject: Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to
 sync_exp_work_done()

On Thu, Apr 13, 2017 at 07:10:27PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 13, 2017 at 09:57:55AM -0700, Paul E. McKenney wrote:
> > On Thu, Apr 13, 2017 at 06:24:09PM +0200, Peter Zijlstra wrote:
> > > On Thu, Apr 13, 2017 at 09:10:42AM -0700, Paul E. McKenney wrote:
> > > > On Thu, Apr 13, 2017 at 11:18:32AM +0200, Peter Zijlstra wrote:
> > > > > On Wed, Apr 12, 2017 at 09:55:43AM -0700, Paul E. McKenney wrote:
> > > > > > However, a little future-proofing is a good thing,
> > > > > > especially given that smp_mb__before_atomic() is only required to
> > > > > > provide acquire semantics rather than full ordering.  This commit
> > > > > > therefore adds smp_mb__after_atomic() after the atomic_long_inc()
> > > > > > in sync_exp_work_done().
> > > > > 
> > > > > Oh!? As far as I'm aware the smp_mb__{before,after}_atomic() really must
> > > > > provide full MB, no confusion about that.
> > > > > 
> > > > > We have other primitives for acquire/release.
> > > > 
> > > > Hmmm...  Rechecking atomic_ops.txt, it does appear that you are quite
> > > > correct.  Adding Will and Dmitry on CC, but dropping this patch for now.
> > > 
> > > I'm afraid that document is woefully outdated. I'm surprised it says
> > > anything on the subject.
> > 
> > And there is some difference of opinion.  Some believe that the
> > smp_mb__before_atomic() only guarantees acquire and smp_mb__after_atomic()
> > only guarantees release, but all current architectures provide full
> > ordering, as you noted and as stated in atomic_ops.txt.
> 
> Which 'some' think it only provides acquire/release?
> 
> I made very sure -- when I renamed/audited/wrote all this -- that they
> indeed do a full memory barrier.
> 
> > How do we decide?
> 
> I say it's a full mb, always was.
> 
> People used it to create acquire/release _like_ constructs, because we
> simply didn't have anything else.
> 
> Also, I think Linus once opined that acquire/release is part of a
> store/load (hence smp_store_release/smp_load_acquire) and not a barrier.
> 
> > Once we do decide, atomic_ops.txt of course needs to be updated accordingly.
> 
> There was so much missing there that I didn't quite know where to start.

Well, if there are no objections, I will fix up the smp_mb__before_atomic()
and smp_mb__after_atomic() pieces.
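
For concreteness, the pattern in question looks roughly like the sketch
below.  This is not the actual sync_exp_work_done() code; the counter and
function names are invented, but the barrier placement is the point:

#include <linux/atomic.h>

static atomic_long_t exp_done_count;	/* hypothetical counter, name invented */

static void note_exp_done(void)
{
	/* Ensure any preceding test is ordered before the increment. */
	smp_mb__before_atomic();
	atomic_long_inc(&exp_done_count);
	/*
	 * The addition under discussion: future-proofing in case
	 * smp_mb__before_atomic() is ever read as providing only
	 * acquire semantics rather than a full memory barrier.
	 */
	smp_mb__after_atomic();
}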

I suppose that one alternative is the new variant of kerneldoc, though
very few of these functions have comment headers, let alone kerneldoc
headers.  Which reminds me, the question of spin_unlock_wait() and
spin_is_locked() semantics came up a bit ago.  Here is what I believe
to be the case.  Does this match others' expectations?

o	spin_unlock_wait() semantics:

	1.	Any access in any critical section prior to the
		spin_unlock_wait() is visible to all code following
		(in program order) the spin_unlock_wait().

	2.	Any access prior (in program order) to the
		spin_unlock_wait() is visible to any critical
		section following the spin_unlock_wait().
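
To make cases 1 and 2 above concrete, here is a hypothetical three-CPU
sketch in kernel-style C; the lock and variable names are invented and
this is an illustration of the claimed guarantees, not kernel code:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);
static int x, y, r1, r2;

/* CPU 0: a critical section that may complete before CPU 1's wait. */
static void cpu0(void)
{
	spin_lock(&demo_lock);
	WRITE_ONCE(x, 1);		/* access inside the critical section */
	spin_unlock(&demo_lock);
}

/* CPU 1: the spin_unlock_wait() in question. */
static void cpu1(void)
{
	WRITE_ONCE(y, 1);		/* access prior to the wait (case 2) */
	spin_unlock_wait(&demo_lock);
	r1 = READ_ONCE(x);		/* case 1: must see 1 if cpu0()'s
					 * critical section preceded the wait */
}

/* CPU 2: a critical section following CPU 1's wait. */
static void cpu2(void)
{
	spin_lock(&demo_lock);
	r2 = READ_ONCE(y);		/* case 2: must see 1 given that this
					 * critical section follows the wait */
	spin_unlock(&demo_lock);
}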

o	spin_is_locked() semantics: Half of spin_unlock_wait(),
	but only if it returns false:

	1.	Any access in any critical section prior to the
		spin_is_locked() is visible to all code following
		(in program order) the spin_is_locked().
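
And the corresponding sketch for spin_is_locked(), reusing the invented
names above; the guarantee applies only on the false return:

/* CPU 1, spin_is_locked() variant of the same sketch. */
static void cpu1_unlocked(void)
{
	if (!spin_is_locked(&demo_lock)) {
		/*
		 * Only guarantee 1 applies, and only on this false return:
		 * accesses from critical sections that completed before the
		 * check are visible here.  Nothing is implied about this
		 * CPU's own prior accesses versus later critical sections.
		 */
		r1 = READ_ONCE(x);
	}
}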

							Thanx, Paul
