Message-ID: <20151207200013.GE28602@linux.vnet.ibm.com>
Date:	Mon, 7 Dec 2015 12:00:13 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	tglx@...utronix.de, peterz@...radead.org,
	preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
	mtosatti@...hat.com, linux-kernel@...r.kernel.org,
	sasha.levin@...cle.com
Subject: Re: Possible issue with commit 4961b6e11825?

On Mon, Dec 07, 2015 at 08:01:23PM +0100, Frederic Weisbecker wrote:
> On Fri, Dec 04, 2015 at 03:20:22PM -0800, Paul E. McKenney wrote:
> > Hello!
> > 
> > Are there any known issues with commit 4961b6e11825 (sched: core: Use
> > hrtimer_start[_expires]())?
> > 
> > The reason that I ask is that I am about 90% sure that an rcutorture
> > failure bisects to that commit.  I will be running more tests on
> > 3497d206c4d9 (perf: core: Use hrtimer_start()), which is the predecessor
> > of 4961b6e11825, and which, unlike 4961b6e11825, passes a 12-hour
> > rcutorture test with scenario TREE03.  In contrast, 4961b6e11825 gets
> > 131 RCU CPU stall warnings, 132 reports of one of RCU's grace-period
> > kthreads being starved, and 525 reports of one of rcutorture's kthreads
> > being starved.  Most of the test runs hang on shutdown, which is no
> > surprise if an RCU CPU stall is happening at about that time.
> 
> I have no idea what the issue is but maybe you have the RCU stall backtrace
> somewhere?

First, please note that this commit might well be an innocent bystander.

That said, I have lots and lots of them!  ;-)

They look like this:

[ 4135.979013] Call Trace:
[ 4135.979013]  [<ffffffff81336c77>] ? debug_smp_processor_id+0x17/0x20
[ 4135.979013]  [<ffffffff8100dffc>] ? default_idle+0xc/0xe0
[ 4135.979013]  [<ffffffff8100e76a>] ? arch_cpu_idle+0xa/0x10
[ 4135.979013]  [<ffffffff8108b0d7>] ? default_idle_call+0x27/0x30
[ 4135.979013]  [<ffffffff8108b3c4>] ? cpu_startup_entry+0x294/0x310
[ 4135.979013]  [<ffffffff81037aef>] ? start_secondary+0xef/0x100

Which says that the stalled CPUs are in the idle loop, so the RCU
grace-period kthread should notice within six jiffies and post a
quiescent state on their behalf.  But the next line looks like this:

[ 4135.979013] rcu_preempt kthread starved for 21024 jiffies! g102259 c102258 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1

Which says that RCU's grace-period kthread has not run for the full
duration of the stall (21 seconds), hence its failure to record other
tasks' quiescent states.  Its state is 0x1, which is TASK_INTERRUPTIBLE.
The last thing it did was wait either for all quiescent states to come in,
or for three jiffies to elapse (this is the "RCU_GP_WAIT_FQS(3)" above).
Given that 21024 is a bit larger than six (up to two rounds of RCU GP
kthread execution are required to notice the quiescent state), something
isn't quite right here.
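
To make the timing expectation concrete, here is a minimal userspace
sketch of the arithmetic described above -- not kernel code, and the
names (fqs_scan, idle_cpu_qs_reported) are made up for illustration.
The GP kthread sleeps at most three jiffies per round, and an idle CPU
should have a quiescent state posted on its behalf within about two
rounds:

#include <stdbool.h>
#include <stdio.h>

#define FQS_WAIT_JIFFIES 3		/* the "RCU_GP_WAIT_FQS(3)" above */

static bool idle_cpu_qs_reported;

/* One forced-quiescent-state pass: report a QS on behalf of idle CPUs. */
static void fqs_scan(void)
{
	idle_cpu_qs_reported = true;
}

int main(void)
{
	int jiffies_waited = 0;
	int round;

	/* Round 1 can race with the CPU entering idle; round 2 must see it. */
	for (round = 1; !idle_cpu_qs_reported; round++) {
		jiffies_waited += FQS_WAIT_JIFFIES;	/* timed sleep in the kernel */
		if (round >= 2)
			fqs_scan();
	}

	printf("idle CPU covered after ~%d jiffies; the stall message reports 21024\n",
	       jiffies_waited);
	return 0;
}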

The RCU GP kthread's stack is as follows:

[ 4135.979013] Call Trace:
[ 4135.979013]  [<ffffffff8189ae5a>] schedule+0x3a/0x90
[ 4135.979013]  [<ffffffff8189d608>] schedule_timeout+0x148/0x290
[ 4135.979013]  [<ffffffff810b3800>] ? trace_raw_output_itimer_expire+0x70/0x70
[ 4135.979013]  [<ffffffff810aea04>] rcu_gp_kthread+0x934/0x1010
[ 4135.979013]  [<ffffffff8108ae20>] ? prepare_to_wait_event+0xf0/0xf0
[ 4135.979013]  [<ffffffff810ae0d0>] ? rcu_barrier+0x20/0x20
[ 4135.979013]  [<ffffffff8106f4a4>] kthread+0xc4/0xe0
[ 4135.979013]  [<ffffffff8106f3e0>] ? kthread_create_on_node+0x170/0x170
[ 4135.979013]  [<ffffffff8189ea1f>] ret_from_fork+0x3f/0x70
[ 4135.979013]  [<ffffffff8106f3e0>] ? kthread_create_on_node+0x170/0x170

So it looks to be waiting with a timeout.
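
For reference, here is a userspace analogue of that wait-with-timeout
pattern (prepare_to_wait_event() plus schedule_timeout() in the stack
above) -- a sketch only, with an invented condition (gp_work_done), not
how the kernel implements it:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool gp_work_done;

/* Sleep until gp_work_done is signalled or timeout_ms elapses. */
static bool wait_for_work(long timeout_ms)
{
	struct timespec deadline;
	bool done;

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += timeout_ms / 1000;
	deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;
	if (deadline.tv_nsec >= 1000000000L) {
		deadline.tv_sec++;
		deadline.tv_nsec -= 1000000000L;
	}

	pthread_mutex_lock(&lock);
	while (!gp_work_done &&
	       pthread_cond_timedwait(&cond, &lock, &deadline) == 0)
		;	/* woken without the condition: re-check */
	done = gp_work_done;
	pthread_mutex_unlock(&lock);
	return done;
}

int main(void)
{
	/* Nothing signals the condition here, so this returns on timeout,
	 * just as the GP kthread's schedule_timeout() should after three
	 * jiffies. */
	printf("wait %s\n", wait_for_work(3) ? "satisfied" : "timed out");
	return 0;
}

The point being that a timeout-based wait like this should return after
a few jiffies whether or not anyone signals it, which makes a
21024-jiffy starvation all the more puzzling.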

Which was why I got excited when my bisection appeared to converge on a
timer-related commit.  Except that further testing found failures prior
to that commit, though arguably happening at a much lower rate.

I later learned that even the current kernel runs with no stalls if CPU
hotplug is disabled.  So now I am wondering if there is some race that
can happen when trying to awaken a task that last ran on a CPU that is
just now in the process of going offline.  Or that is just in the
process of coming online, for that matter.
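
Purely to illustrate the shape of the race I am worried about -- this is
a sequential toy, not the scheduler or hotplug code, and cpu_online[],
queue_wakeup(), and cpu_down() are made-up stand-ins -- something like
the following interleaving would do it:

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

static bool cpu_online[NR_CPUS] = { true, true, true, true };
static bool wakeup_pending[NR_CPUS];

/* Waker side: target the CPU the task last ran on. */
static void queue_wakeup(int last_cpu)
{
	/*
	 * Between this check and anyone servicing the wakeup, last_cpu can
	 * go offline (or not yet be fully online), leaving the wakeup
	 * queued where nothing will process it in time.
	 */
	if (cpu_online[last_cpu])
		wakeup_pending[last_cpu] = true;
}

/* Hotplug side: take the CPU down. */
static void cpu_down(int cpu)
{
	cpu_online[cpu] = false;
}

int main(void)
{
	int cpu;

	queue_wakeup(2);	/* waker sees CPU 2 online... */
	cpu_down(2);		/* ...but it goes down before the wakeup is serviced */

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (wakeup_pending[cpu] && !cpu_online[cpu])
			printf("lost wakeup: still queued on offline CPU %d\n", cpu);
	return 0;
}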

Hey, you asked!  ;-)

							Thanx, Paul

