Message-ID: <20170116185704.2cpgih6idiegn7k3@x>
Date:   Mon, 16 Jan 2017 10:57:04 -0800
From:   Josh Triplett <josh@...htriplett.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
        fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 1/6] rcu: Abstract the dynticks
 momentary-idle operation

On Mon, Jan 16, 2017 at 03:22:39AM -0800, Paul E. McKenney wrote:
> On Sun, Jan 15, 2017 at 11:39:51PM -0800, Josh Triplett wrote:
> > On Sat, Jan 14, 2017 at 12:54:40AM -0800, Paul E. McKenney wrote:
> > > This commit is the first step towards full abstraction of all accesses to
> > > the ->dynticks counter, implementing the previously open-coded atomic add
> > > of two in a new rcu_dynticks_momentary_idle() function.  This abstraction
> > > will ease changes to the ->dynticks counter operation.
> > > 
> > > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > 
> > This change has an additional effect not documented in the commit
> > message: it eliminates the smp_mb__before_atomic and
> > smp_mb__after_atomic calls.  Can you please document that in the commit
> > message, and explain why that doesn't cause a problem?
> 
> The trick is that the old code used the non-value-returning atomic_add(),
> which does not imply ordering, hence the smp_mb__before_atomic() and
> smp_mb__after_atomic() calls.  The new code uses atomic_add_return(),
> which does return a value, and therefore implies full ordering in and
> of itself.
> 
> How would you like me to proceed?

With the above explanation added to the commit message:

Reviewed-by: Josh Triplett <josh@...htriplett.org>
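
For readers following the thread, here is a minimal sketch of the ordering
difference Paul describes above. It is illustrative only, not the patch text;
the rdtp->dynticks field and the surrounding context are assumed from the
discussion.

	/* Illustrative sketch, not the actual patch. */

	/*
	 * Old style: atomic_add() returns void, and non-value-returning
	 * atomics imply no ordering, so the barriers must be explicit.
	 */
	smp_mb__before_atomic();	/* Order prior accesses before the update. */
	atomic_add(2, &rdtp->dynticks);
	smp_mb__after_atomic();		/* Order later accesses after the update. */

	/*
	 * New style: atomic_add_return() returns the new value, and
	 * value-returning atomic RMW operations are fully ordered, so no
	 * separate barriers are needed.
	 */
	special = atomic_add_return(2, &rdtp->dynticks);

The returned value also lets the caller sanity-check the counter (for example,
that the CPU was not already idle), which the void-returning atomic_add()
could not provide.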

