Date:   Thu, 29 Aug 2019 19:45:56 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     linux-kernel@...r.kernel.org,
        Frederic Weisbecker <fweisbec@...il.com>,
        Jonathan Corbet <corbet@....net>,
        Josh Triplett <josh@...htriplett.org>, kernel-team@...roid.com,
        Lai Jiangshan <jiangshanlai@...il.com>,
        linux-doc@...r.kernel.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
        rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC v1 2/2] rcu/tree: Remove dynticks_nmi_nesting counter

On Thu, Aug 29, 2019 at 09:20:36PM -0400, Joel Fernandes wrote:
> On Thu, Aug 29, 2019 at 05:47:56PM -0700, Paul E. McKenney wrote:
> [snip]
> > > > > Paul, also what happens in the following scenario:
> > > > > 
> > > > > CPU0                                                 CPU1
> > > > > 
> > > > > A syscall causes rcu_eqs_exit()
> > > > > rcu_read_lock();
> > > > >                                                      ---> FQS loop waiting on
> > > > >                                                           dyntick_snap
> > > > > usermode-upcall entry  --> causes rcu_eqs_enter();
> > > > >
> > > > > usermode-upcall exit   --> causes rcu_eqs_exit();
> > > > >
> > > > >                                                      ---> FQS loop sees
> > > > >                                                           dyntick_snap
> > > > >                                                           increment and
> > > > >                                                           declares CPU0 is
> > > > >                                                           in a QS state
> > > > >                                                           before the
> > > > >                                                           rcu_read_unlock!
> > > > > 
> > > > > rcu_read_unlock();
> > > > > ---
> > > > > 
> > > > > Does the context tracking not call rcu_user_enter() in this case, or did I
> > > > > really miss something?
> > > > 
> > > > Holding rcu_read_lock() across usermode execution (in this case,
> > > > the usermode upcall) is a bad idea.  Why is CPU 0 doing that?
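
To make that concrete, here is a toy userspace model of the dyntick-snapshot
idea (made-up names and deliberately simplified semantics, not the kernel's
actual implementation, which lives around rcu_dynticks_snap() and the FQS
scan).  The counter bumps on every EQS transition, so the upcall's enter/exit
pair changes it and the FQS loop reports a quiescent state even though
rcu_read_unlock() has not run yet, which is exactly why the reader must not
span usermode execution:

/* Toy model only: one "CPU", no real concurrency or memory ordering. */
#include <stdio.h>

static unsigned int dynticks;	/* even: in EQS (idle/user), odd: in kernel */

static void model_eqs_exit(void)  { dynticks++; }	/* idle/user -> kernel */
static void model_eqs_enter(void) { dynticks++; }	/* kernel -> idle/user */

static int fqs_sees_qs(unsigned int snap)
{
	/* QS if the CPU is in EQS right now, or passed through one since snap. */
	return !(dynticks & 1) || dynticks != snap;
}

int main(void)
{
	unsigned int snap;

	model_eqs_exit();			/* CPU0: syscall entry */
	/* CPU0: rcu_read_lock() taken here */
	snap = dynticks;			/* CPU1: FQS loop snapshots CPU0 */
	printf("after snapshot: qs=%d\n", fqs_sees_qs(snap));	/* 0: must wait */

	model_eqs_enter();			/* CPU0: usermode-upcall entry */
	model_eqs_exit();			/* CPU0: usermode-upcall exit */
	printf("after upcall:   qs=%d\n", fqs_sees_qs(snap));	/* 1: QS reported */
	/* ... yet CPU0's rcu_read_unlock() has not happened. */
	return 0;
}
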
> > > 
> > > Oh, ok. I was just hypothesizing that since usermode upcalls can happen
> > > from something as heavy as interrupts, we could also have one from some
> > > path that held an rcu_read_lock() as well. It was just a theoretical
> > > concern; if it is not an issue, no problem.
> > 
> > Are there the usual lockdep checks in the upcall code?  Holding a spinlock
> > across them would seem to be at least as bad as holding an rcu_read_lock()
> > across them.
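
For reference, a check of that sort at the upcall boundary could look roughly
like the sketch below.  upcall_debug_check_locks() is a made-up name, and
rcu_preempt_depth() only tracks preemptible-RCU read-side nesting, so treat
this as an assumption-laden sketch rather than what the upcall path actually
does:

#include <linux/bug.h>
#include <linux/lockdep.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

/* Hypothetical debug check before handing off to the usermode upcall. */
static inline void upcall_debug_check_locks(void)
{
	/* Any lockdep-tracked lock still held here would be a bug. */
	WARN_ON_ONCE(lockdep_depth(current));
	/* So would an open RCU read-side critical section (preemptible RCU). */
	WARN_ON_ONCE(rcu_preempt_depth());
}
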
> 
> Great point, I'll take a look.
> 
> > > The other question I had was: in which cases would dynticks_nesting in
> > > current RCU code be > 1 (after removing the lower bit and any crowbarring)?
> > > In the scenarios I worked out on paper, I can only see this as 1 or 0. But
> > > the wording of it is 'dynticks_nesting'. Maybe I am missing a nesting
> > > scenario? We can exit RCU-idleness into process context only once (either
> > > exiting idle mode or user mode). Both cases would imply a value of 1.
> > 
> > Interrupt -> NMI -> certain types of tracing.  I believe that can get
> > it to 5.  There might be even more elaborate sequences of events.
> 
> I am only talking about dynticks_nesting, not dynticks_nmi_nesting. In
> current mainline, I see this as only 0 or 1. I am running the below patch
> overnight on all RCU configurations to see if it is ever any other value.

Ah!  Then yes, we never enter non-idle/non-user process-level mode
twice without having exited it.  There would have been a splat,
I believe.
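
As a toy userspace model of that invariant (not kernel code, names made up):
process-level dynticks_nesting only ever toggles between 0 and 1, which is the
same invariant the two added WARN_ON_ONCE()s in the patch below are checking.

/* Toy model only: process-level transitions, no irq/NMI nesting. */
#include <assert.h>
#include <stdio.h>

static int dynticks_nesting;	/* process-level nesting, 0 while in EQS */

static void model_eqs_exit(void)	/* leave idle/user mode */
{
	assert(dynticks_nesting == 0);	/* cf. WARN_ON_ONCE() in rcu_eqs_exit() */
	dynticks_nesting = 1;
}

static void model_eqs_enter(void)	/* enter idle/user mode */
{
	assert(dynticks_nesting == 1);	/* cf. WARN_ON_ONCE() in rcu_eqs_enter() */
	dynticks_nesting = 0;
}

int main(void)
{
	model_eqs_exit();	/* idle -> process context */
	model_eqs_enter();	/* process context -> usermode */
	model_eqs_exit();	/* usermode -> process context (syscall) */
	model_eqs_enter();	/* process context -> idle */
	printf("dynticks_nesting stayed within {0, 1}\n");
	return 0;
}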

> And, please feel free to ignore my emails as you mentioned you are supposed
> to be out for the next 2 days! Thanks for the replies though!

Actually this day and next.  ;-)

							Thanx, Paul

> ---8<-----------------------
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 68ebf0eb64c8..8c8ddb6457d5 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -571,6 +571,9 @@ static void rcu_eqs_enter(bool user)
>  	WRITE_ONCE(rdp->dynticks_nmi_nesting, 0);
>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
>  		     rdp->dynticks_nesting == 0);
> +
> +	WARN_ON_ONCE(rdp->dynticks_nesting != 1);
> +
>  	if (rdp->dynticks_nesting != 1) {
>  		rdp->dynticks_nesting--;
>  		return;
> @@ -736,6 +739,9 @@ static void rcu_eqs_exit(bool user)
>  	lockdep_assert_irqs_disabled();
>  	rdp = this_cpu_ptr(&rcu_data);
>  	oldval = rdp->dynticks_nesting;
> +
> +	WARN_ON_ONCE(rdp->dynticks_nesting != 0);
> +
>  	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0);
>  	if (oldval) {
>  		rdp->dynticks_nesting++;
> -- 
> 2.23.0.187.g17f5b7556c-goog
> 
