Date:   Wed, 21 Aug 2019 08:47:55 -0700
From:   "Paul E. McKenney" <paulmck@...ux.ibm.com>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     linux-kernel@...r.kernel.org,
        Josh Triplett <josh@...htriplett.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC v2] rcu/tree: Try to invoke_rcu_core() if in_irq() during
 unlock

On Wed, Aug 21, 2019 at 11:26:38AM -0400, Joel Fernandes wrote:
> On Wed, Aug 21, 2019 at 10:56:17AM -0400, Joel Fernandes wrote:
> > On Wed, Aug 21, 2019 at 10:38:41AM -0400, Joel Fernandes wrote:
> > > On Mon, Aug 19, 2019 at 08:41:43AM -0700, Paul E. McKenney wrote:
> > > > On Mon, Aug 19, 2019 at 07:33:14AM -0700, Paul E. McKenney wrote:
> > > > > On Mon, Aug 19, 2019 at 05:57:57AM -0700, Paul E. McKenney wrote:
> > > > > > On Sun, Aug 18, 2019 at 07:29:27PM -0700, Paul E. McKenney wrote:
> > > > > > > On Sun, Aug 18, 2019 at 09:46:23PM -0400, Joel Fernandes wrote:
> > > > > > > > On Sun, Aug 18, 2019 at 09:41:43PM -0400, Joel Fernandes wrote:
> > > > > > > > > On Sun, Aug 18, 2019 at 06:21:53PM -0700, Paul E. McKenney wrote:
> > > > > > > > [snip]
> > > > > > > > > > > > Also, your commit log's point #2 is "in_irq() implies in_interrupt()
> > > > > > > > > > > > which implies raising softirq will not do any wake ups."  This mention
> > > > > > > > > > > > of softirq seems a bit odd, given that we are going to wake up a rcuc
> > > > > > > > > > > > kthread.  Of course, this did nothing to quell my suspicions.  ;-)
> > > > > > > > > > > 
> > > > > > > > > > > Yes, I should delete this #2 from the changelog since it is not very
> > > > > > > > > > > relevant (I now feel). My point with #2 was that even if we were to
> > > > > > > > > > > raise a softirq (which we are not), a scheduler wakeup of ksoftirqd is
> > > > > > > > > > > impossible in this path anyway since in_irq() implies in_interrupt().
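
For anyone following along: the "implies" above falls directly out of the
preempt_count bitfields.  Paraphrasing include/linux/preempt.h (the exact
masks are config-dependent, so this is a sketch, not verbatim):

	#define in_irq()	(hardirq_count())	/* hardirq nesting only */
	#define in_interrupt()	(irq_count())		/* hardirq|softirq|NMI */

	/*
	 * hardirq_count() is a strict subset of irq_count(), so any context
	 * in which in_irq() is true also has in_interrupt() true, and
	 * raise_softirq_irqoff() therefore skips its wakeup_softirqd() call.
	 */
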
> > > > > > > > > > 
> > > > > > > > > > Please!  Could you also add a first-principles explanation of why
> > > > > > > > > > the added condition is immune from scheduler deadlocks?
> > > > > > > > > 
> > > > > > > > > Sure, I can add an example to the changelog; however, I was thinking of
> > > > > > > > > this example, which you mentioned:
> > > > > > > > > https://lore.kernel.org/lkml/20190627173831.GW26519@linux.ibm.com/
> > > > > > > > > 
> > > > > > > > > 	previous_reader()
> > > > > > > > > 	{
> > > > > > > > > 		rcu_read_lock();
> > > > > > > > > 		do_something(); /* Preemption happened here. */
> > > > > > > > > 		local_irq_disable(); /* Cannot be the scheduler! */
> > > > > > > > > 		do_something_else();
> > > > > > > > > 		rcu_read_unlock();  /* Must defer QS, task still queued. */
> > > > > > > > > 		do_some_other_thing();
> > > > > > > > > 		local_irq_enable();
> > > > > > > > > 	}
> > > > > > > > > 
> > > > > > > > > 	current_reader() /* QS from previous_reader() is still deferred. */
> > > > > > > > > 	{
> > > > > > > > > 		local_irq_disable();  /* Might be the scheduler. */
> > > > > > > > > 		do_whatever();
> > > > > > > > > 		rcu_read_lock();
> > > > > > > > > 		do_whatever_else();
> > > > > > > > > 		rcu_read_unlock();  /* Must still defer reporting QS. */
> > > > > > > > > 		do_whatever_comes_to_mind();
> > > > > > > > > 		local_irq_enable();
> > > > > > > > > 	}
> > > > > > > > > 
> > > > > > > > > One modification of the example could be that previous_reader() instead
> > > > > > > > > does the following:
> > > > > > > > > 	previous_reader()
> > > > > > > > > 	{
> > > > > > > > > 		rcu_read_lock();
> > > > > > > > > 		do_something_that_takes_really_long(); /* causes need_qs in the
> > > > > > > > > 							  rcu_read_unlock_special union to be set */
> > > > > > > > > 		local_irq_disable(); /* Cannot be the scheduler! */
> > > > > > > > > 		do_something_else();
> > > > > > > > > 		rcu_read_unlock();  /* Must defer QS, task still queued. */
> > > > > > > > > 		do_some_other_thing();
> > > > > > > > > 		local_irq_enable();
> > > > > > > > > 	}
> > > > > > > > 
> > > > > > > > The point you were making in that thread was that current_reader() ->
> > > > > > > > rcu_read_unlock() -> rcu_read_unlock_special() would not do any wakeups,
> > > > > > > > because previous_reader() sets the deferred_qs bit.
> > > > > > > > 
> > > > > > > > Anyway, I will add all of this into the changelog.
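
To make the deferral concrete: the decision in rcu_read_unlock_special()
has roughly the following shape (loosely paraphrased from current mainline,
not the exact patch under discussion; see the source for the precise
conditions):

	if (irqs_were_disabled && use_softirq &&
	    (in_interrupt() ||
	     (exp && !t->rcu_read_unlock_special.b.deferred_qs))) {
		/* in_interrupt() means no ksoftirqd wakeup, so this is safe. */
		raise_softirq_irqoff(RCU_SOFTIRQ);
	} else {
		/* Cannot safely wake anything up; lean on the scheduler. */
		set_tsk_need_resched(current);
		set_preempt_need_resched();
	}
	t->rcu_read_unlock_special.b.deferred_qs = true;

Once previous_reader() has set deferred_qs, the exp leg of that condition
is disabled for current_reader(), which is what rules out the wakeup.
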
> > > > > > > 
> > > > > > > Examples are good, but what makes it so that there are no examples of
> > > > > > > its being unsafe?
> > > > > > > 
> > > > > > > And a few questions along the way, some quick quiz, some more serious.
> > > > > > > Would it be safe if it checked in_interrupt() instead of in_irq()?
> > > > > > > If not, should the in_interrupt() in the "if" condition preceding the
> > > > > > > added "else if" be changed to in_irq()?  Would it make sense to add an
> > > > > > > "|| !irqs_were_disabled" do your new "else if" condition?  Would the
> > > > > > > body of the "else if" actually be executed in current mainline?
> > > > > > > 
> > > > > > > In an attempt to be at least a little constructive, I am doing some
> > > > > > > testing of this patch overnight, along with a WARN_ON_ONCE() to see if
> > > > > > > that invoke_rcu_core() is ever reached.
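
For concreteness, the instrumentation is just a WARN_ON_ONCE() dropped
into the new branch, along the lines of:

	} else if (in_irq()) {
		WARN_ON_ONCE(1);	/* Does this branch ever execute? */
		invoke_rcu_core();
	}

so that any execution of that branch leaves a splat in the console log.
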
> > > > > > 
> > > > > > And that WARN_ON_ONCE() never triggered in two-hour rcutorture runs of
> > > > > > TREE01, TREE02, TREE03, and TREE09.  (These are the TREE variants in
> > > > > > CFLIST that have CONFIG_PREEMPT=y.)
> > > > > > 
> > > > > > This of course raises other questions.  But first, do you see that code
> > > > > > executing in your testing?
> > > > > 
> > > > > Never mind!  Idiot here forgot the "--bootargs rcutree.use_softirq"...
> > > > 
> > > > So this time I ran the test this way:
> > > > 
> > > > tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 8 --duration 10 --configs "TREE01 TREE02 TREE03 TREE09" --bootargs "rcutree.use_softirq=0"
> > > > 
> > > > Still no splats.  Though only 10-minute runs instead of the two-hour runs
> > > > I did last night.  (Got other stuff I need to do, sorry!)
> > > > 
> > > > My test version of your patch is shown below.  Please let me know if I messed
> > > > something up.
> > > 
> > > I think you also need to pass rcutorture.irqreader=1?
> > > 
> > > Otherwise it seems all readers happen in process context, AFAICS.
> > 
> > Which is the default setting for that, so that's not the issue.
> > 
> > I think one reason could be that in_irq() is false when the timer callback
> > executes, since the timer callback executes after a grace period.
> > 
> > Any reason why we cannot both test for call_rcu() and execute the RCU
> > callback from the timer hardirq handler?
> > 
> > In fact, I guess on rcutree.use_softirq=0 systems, the callback will not
> > even run in softirq context. The stack is as follows:
> > 
> > [   20.553361]  => rcu_torture_timer_cb
> > [   20.553361]  => rcu_do_batch
> > [   20.553361]  => rcu_core
> > [   20.553361]  => __do_softirq
> > [   20.553361]  => do_softirq_own_stack
> > [   20.553361]  => do_softirq.part.16
> > [   20.553361]  => __local_bh_enable_ip
> > [   20.553361]  => rcutorture_one_extend
> > [   20.553361]  => rcu_torture_one_read
> > [   20.553361]  => rcu_torture_reader
> > [   20.553361]  => kthread
> > [   20.553361]  => ret_from_fork
> 
> Oops, wrong stack trace! This one shows that the timer handler is running
> from softirq, not hardirq. Both rcu_torture_timer() and
> rcu_torture_timer_cb() run from softirq context. ftrace confirms:
> 
> [   27.949671] rcu_tort-182     8..s1 7268705us : <stack trace>
> [   27.949671]  => __ftrace_trace_stack
> [   27.949671]  => rcu_torture_timer
> [   27.949671]  => call_timer_fn
> [   27.949671]  => run_timer_softirq
> [   27.949671]  => __do_softirq
> [   27.949671]  => irq_exit
> [   27.949671]  => smp_apic_timer_interrupt
> [   27.949671]  => apic_timer_interrupt
> [   27.949671]  => rcutorture_one_extend
> [   27.949671]  => rcu_torture_one_read
> [   27.949671]  => rcu_torture_reader
> [   27.949671]  => kthread
> [   27.949671]  => ret_from_fork
> 
> So it looks like torture-testing modifications are called for, to run readers
> in hard-interrupt context as well and provide this additional coverage. Or am
> I way off in the woods?

That actually might be worth doing.
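
If someone does take it on, one untested way to get readers into true
hardirq context would be to drive them from an hrtimer, whose expiry
handler runs with in_irq() true, unlike a timer_list handler.  A sketch
only, with made-up names:

	#include <linux/hrtimer.h>
	#include <linux/ktime.h>

	static struct hrtimer rcu_torture_hrt;

	/* hrtimer expiry handlers run in hardirq context. */
	static enum hrtimer_restart rcu_torture_hrt_handler(struct hrtimer *hrt)
	{
		rcu_torture_timer(NULL);	/* arg is unused in the timer version */
		hrtimer_forward_now(hrt, ms_to_ktime(10));
		return HRTIMER_RESTART;
	}

	static void rcu_torture_hrt_start(void)
	{
		hrtimer_init(&rcu_torture_hrt, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
		rcu_torture_hrt.function = rcu_torture_hrt_handler;
		hrtimer_start(&rcu_torture_hrt, ms_to_ktime(10), HRTIMER_MODE_REL);
	}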

The reason I didn't bother is that in the common case, timer softirq
generates exactly the same race conditions as would a hard interrupt
handler.  You can see this in your stack trace, where the call is
coming from irq_exit(), that is, from the trailing edge of a hardware
interrupt.
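
The relevant path, paraphrasing kernel/softirq.c:

	void irq_exit(void)
	{
		/* ... irq-time accounting elided ... */
		preempt_count_sub(HARDIRQ_OFFSET);	/* in_irq() goes false here */
		if (!in_interrupt() && local_softirq_pending())
			invoke_softirq();	/* runs __do_softirq() right here */
		/* ... tick and RCU exit hooks elided ... */
	}

So by the time the timer softirq executes, in_irq() is already false, even
though we are still on the tail end of the hardware interrupt.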

							Thanx, Paul
