Date:   Mon, 25 Jun 2018 08:43:38 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     Byungchul Park <byungchul.park@....com>, jiangshanlai@...il.com,
        josh@...htriplett.org, mathieu.desnoyers@...icios.com,
        linux-kernel@...r.kernel.org, kernel-team@....com,
        joel@...lfernandes.org
Subject: Re: [PATCH] rcu: Refactor rcu_{nmi,irq}_{enter,exit}()

On Mon, Jun 25, 2018 at 11:02:48AM -0400, Steven Rostedt wrote:
> On Mon, 25 Jun 2018 07:48:49 -0700
> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> 
> > > > @@ -923,7 +932,7 @@ void rcu_user_exit(void)
> > > >  #endif /* CONFIG_NO_HZ_FULL */
> > > >  
> > > >  /**
> > > > - * rcu_nmi_enter - inform RCU of entry to NMI context
> > > > + * rcu_nmi_enter_common - inform RCU of entry to NMI context
> > > >   *
> > > >   * If the CPU was idle from RCU's viewpoint, update rdtp->dynticks and
> > > >   * rdtp->dynticks_nmi_nesting to let the RCU grace-period handling know
> > > > @@ -931,10 +940,10 @@ void rcu_user_exit(void)
> > > >   * long as the nesting level does not overflow an int.  (You will probably
> > > >   * run out of stack space first.)
> > > >   *
> > > > - * If you add or remove a call to rcu_nmi_enter(), be sure to test
> > > > + * If you add or remove a call to rcu_nmi_enter_common(), be sure to test
> > > >   * with CONFIG_RCU_EQS_DEBUG=y.
> > > >   */
> > > > -void rcu_nmi_enter(void)
> > > > +static __always_inline void rcu_nmi_enter_common(bool irq)
> > > >  {
> > > >  	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
> > > >  	long incby = 2;
> > > > @@ -951,7 +960,15 @@ void rcu_nmi_enter(void)
> > > >  	 * period (observation due to Andy Lutomirski).
> > > >  	 */
> > > >  	if (rcu_dynticks_curr_cpu_in_eqs()) {
> > > > +
> > > > +		if (irq)
> > > > +			rcu_dynticks_task_exit();
> > > > +
> > > >  		rcu_dynticks_eqs_exit();
> > > > +
> > > > +		if (irq)
> > > > +			rcu_cleanup_after_idle();
> > > > +
> > > >  		incby = 1;
> > > >  	}
> > > >  	trace_rcu_dyntick(incby == 1 ? TPS("Endirq") : TPS("++="),  
> > > 
> > > 
> > > There is a slight change here; I don't think it is an issue, but I
> > > want to bring it up just in case.
> > > 
> > > The old way had:
> > > 
> > > 	rcu_dynticks_task_exit();
> > > 	rcu_dynticks_eqs_exit();
> > > 	trace_rcu_dyntick();
> > > 	rcu_cleanup_after_idle();
> > > 
> > > The new way has:
> > > 
> > > 	rcu_dynticks_task_exit();
> > > 	rcu_dynticks_eqs_exit();
> > > 	rcu_cleanup_after_idle();
> > > 	trace_rcu_dyntick();
> > > 
> > > As that tracepoint will use RCU, will this cause any side effects?
> > > 
> > > My thought is that the new way is actually more correct, as I'm not
> > > sure we wanted RCU usage before the rcu_cleanup_after_idle().  
> > 
> > I believe that this is OK because it is the position of the call to
> > rcu_dynticks_eqs_exit() that really matters.  Before this call, RCU
> > is not yet watching, and after this call it is watching.  Reversing
> > the calls to rcu_cleanup_after_idle() and trace_rcu_dyntick() has them
> > both being invoked while RCU is watching.
> > 
> > All that rcu_cleanup_after_idle() does is to account for the time that
> > passed while the CPU was idle, for example, advancing callbacks to allow
> > for however many RCU grace periods completed during that idle period.
> > 
> > Or am I missing something subtle?
> 
> As I stated above, I actually think the new way is more correct. That's
> because the trace event is the first user of RCU here and it probably
> won't be the last. It makes more sense to do it after the call to
> rcu_cleanup_after_idle(), just because it keeps all the RCU users after
> the RCU internal code for coming out of idle. Sure,
> rcu_cleanup_after_idle() doesn't do anything now that could affect
> this, but maybe it will in the future?

If rcu_cleanup_after_idle()'s job changes, then yes, changes might be
needed here and perhaps elsewhere as well.  ;-)
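
To make that ordering concrete, here is the idle-exit leg from the
patch above with annotations added (the comments are mine, not part of
the patch, and the trace_rcu_dyntick() arguments are elided):

	if (rcu_dynticks_curr_cpu_in_eqs()) {

		if (irq)
			rcu_dynticks_task_exit();

		rcu_dynticks_eqs_exit();	/* RCU begins watching here. */

		if (irq)
			rcu_cleanup_after_idle();  /* Runs with RCU watching. */

		incby = 1;
	}
	trace_rcu_dyntick(...);		/* Also runs with RCU watching. */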

> > (At the very least, you would be quite right to ask that this be added
> > to the commit log!)
> 
> Yes, I agree. There should be a comment in the change log about this
> simply because this is technically a functional change.

Very good, will do!
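
While I am at it, for anyone reading along: the entry-side wrappers
that this refactoring produces are thin, along the lines of the sketch
below (the patch is not quoted in full above, so this is a sketch; the
lockdep assertion in rcu_irq_enter() is carried over from the
pre-refactor code and is my assumption here):

	void rcu_nmi_enter(void)
	{
		/* NMI context: skip the irq-entry-only work. */
		rcu_nmi_enter_common(false);
	}

	void rcu_irq_enter(void)
	{
		lockdep_assert_irqs_disabled();
		/* Interrupt possibly from idle: do the irq-only work. */
		rcu_nmi_enter_common(true);
	}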

							Thanx, Paul
