Message-ID: <20161115081655.GE3142@twins.programming.kicks-ass.net>
Date:   Tue, 15 Nov 2016 09:16:55 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     Josh Triplett <josh@...htriplett.org>,
        linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        tglx@...utronix.de, rostedt@...dmis.org, dhowells@...hat.com,
        edumazet@...gle.com, dvhart@...ux.intel.com, fweisbec@...il.com,
        oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 6/7] rcu: Make expedited grace periods
 recheck dyntick idle state

On Mon, Nov 14, 2016 at 10:12:37AM -0800, Paul E. McKenney wrote:
> On Mon, Nov 14, 2016 at 06:37:33PM +0100, Peter Zijlstra wrote:
> > On Mon, Nov 14, 2016 at 09:25:12AM -0800, Josh Triplett wrote:
> > > On Mon, Nov 14, 2016 at 08:57:12AM -0800, Paul E. McKenney wrote:
> > > > Expedited grace periods check dyntick-idle state, and avoid sending
> > > > IPIs to idle CPUs, including those running guest OSes, and, on NOHZ_FULL
> > > > kernels, nohz_full CPUs.  However, the kernel has been observed checking
> > > > a CPU while it was non-idle, but sending the IPI after it has gone
> > > > idle.  This commit therefore rechecks idle state immediately before
> > > > sending the IPI, refraining from IPIing CPUs that have since gone idle.
> > > > 
> > > > Reported-by: Rik van Riel <riel@...hat.com>
> > > > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > > 
> > > atomic_add_return(0, ...) seems odd.  Do you actually want that, rather
> > > than atomic_read(...)?  If so, can you please document exactly why?
> > 
> > Yes that is weird. The only effective difference is that it would do a
> > load-exclusive instead of a regular load.
> 
> It is weird, and checking to see if it is safe to convert it and its
> friends to something with less overhead is on my list.   This starts
> with a patch series I will post soon that consolidates all these
> atomic_add_return() calls into a single function, which will ease testing
> and other verification.
> 
> All that aside, please keep in mind that much is required from this load.
> It is part of a network of ordered operations that guarantee that any
> operation from any CPU preceding a given grace period is seen to precede
> any other operation from any CPU following that same grace period.
> And each and every CPU must agree on the order of those two operations,
> otherwise, RCU is broken.

OK, so something similar to:

	smp_mb();
	atomic_read();

then? That would order, with global transitivity, against prior
operations.

> In addition, please note also that these operations are nowhere near
> any fastpaths.

My concern is mostly that it reads very weird. I appreciate this not
being fast path code, but confusing code is bad in any form.
