Date:	Mon, 03 Jun 2013 11:32:35 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	paulmck@...ux.vnet.ibm.com
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC][PATCH] rcu: Hotplug and PROVE_RCU_DELAY not playing well
 together

On Sun, 2013-06-02 at 07:18 -0700, Paul E. McKenney wrote:
> On Sat, Jun 01, 2013 at 07:54:25PM -0700, Paul E. McKenney wrote:
> > On Fri, May 31, 2013 at 05:27:49PM -0400, Steven Rostedt wrote:
> > > Paul,
> > > 
> > > I've spent the last couple of days debugging why my tests have been
> > > locking up. One of my tracing tests runs all available tracers. The
> > > lockup always happened with mmiotrace, which is used to trace
> > > interactions between proprietary drivers and the kernel. But to do this
> > > easily, when the tracer gets registered, it disables all but the boot
> > > CPU. The lockup always happened after it got done disabling the CPUs.
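
For reference, cycling through every tracer can also be done by hand through
the tracefs interface. This is only a rough sketch of that kind of test, not
the actual test script; it assumes debugfs is mounted at /sys/kernel/debug and
that the kernel was built with CONFIG_MMIOTRACE so mmiotrace shows up in the
list:

	cd /sys/kernel/debug/tracing
	for t in $(cat available_tracers); do
		echo "$t" > current_tracer	# mmiotrace is one of these when enabled
		sleep 1
		echo nop > current_tracer	# reset before trying the next tracer
	done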
> > > 
> > > Then I decided to try this:
> > > 
> > > while :; do
> > > 	for i in 1 2 3; do
> > > 		echo 0 > /sys/devices/system/cpu/cpu$i/online
> > > 	done
> > > 	for i in 1 2 3; do
> > > 		echo 1 > /sys/devices/system/cpu/cpu$i/online
> > > 	done
> > > done
> > > 
> > > Well, sure enough, that locked up too, with the same users. Doing a
> > > sysrq-w (showing all blocked tasks):
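
For reference, the same dump can be requested from a shell rather than the
keyboard, assuming the kernel's sysrq interface is enabled:

	echo 1 > /proc/sys/kernel/sysrq		# allow all sysrq functions
	echo w > /proc/sysrq-trigger		# dump blocked (uninterruptible) tasks
	dmesg | tail -n 50			# the backtraces land in the kernel log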
> > 
> > Impressive debugging!!!  And that is what I call one gnarly deadlock!
> > 
> > Your patch looks like it should fix the problem, but my immediate
> > reaction was that it would be simpler to have rcu_gp_init()
> > do either cpu_maps_update_begin(), get_online_cpus(), or
> > cpu_hotplug_begin() if CONFIG_PROVE_RCU_DELAY instead of the
> > current mutex_lock(&rsp->onoff_mutex).  (My first choice would be
> > get_online_cpus(), but I am not sure that I fully understand the
> > deadlock.)
> > 
> > Or am I missing something about the nature of this deadlock?
> > 
> > One concern is that if I made that change, and if any hotplug notifier
> > waited for a grace period, there would be another deadlock.  Which
> > might well be why this acquires ->onoff_mutex.  Hmmm...
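
For illustration only, a rough sketch of the get_online_cpus() variant being
weighed here; this is not the patch that was posted, rcu_gp_init() is reduced
to a skeleton, and the notifier concern above is exactly why it might deadlock:

	/* Hypothetical alternative, sketch only: exclude CPU hotplug for the
	 * whole grace-period initialization pass instead of taking
	 * rsp->onoff_mutex.  Deadlocks if a hotplug notifier waits for a
	 * grace period, which is the concern raised above. */
	static int rcu_gp_init(struct rcu_state *rsp)
	{
		struct rcu_node *rnp;

		get_online_cpus();	/* block concurrent CPU hotplug */
		rcu_for_each_node_breadth_first(rsp, rnp) {
			/* ... initialize rnp->qsmask and friends as before ... */
		}
		put_online_cpus();
		return 1;
	}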
> > 
> > OK, another possible simplification would be to use udelay() or something
> > similar to do the waiting, and maybe dial down the delay from the current
> > two jiffies to (say) 200 microseconds.  I could adjust the "if" condition
> > to make the delay more probable to get roughly the same testing intensity
> > as the current code has.
> 
> And here is a patch based on this approach.
> 
> 							Thanx, Paul
> 
> ------------------------------------------------------------------------
> 
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index d12470e..9a08bdc 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -1320,9 +1320,9 @@ static int rcu_gp_init(struct rcu_state *rsp)
>  					    rnp->grphi, rnp->qsmask);
>  		raw_spin_unlock_irq(&rnp->lock);
>  #ifdef CONFIG_PROVE_RCU_DELAY
> -		if ((prandom_u32() % (rcu_num_nodes * 8)) == 0 &&
> +		if ((prandom_u32() % (rcu_num_nodes + 1)) == 0 &&
>  		    system_state == SYSTEM_RUNNING)
> -			schedule_timeout_uninterruptible(2);
> +			udelay(200);

Yeah, I thought about just doing a udelay too, but I wanted to see if
the other hack would work first ;-)

I'll give this a test.

-- Steve

>  #endif /* #ifdef CONFIG_PROVE_RCU_DELAY */
>  		cond_resched();
>  	}


