Message-ID: <1486129955.3993.6.camel@gmx.de>
Date: Fri, 03 Feb 2017 14:52:35 +0100
From: Mike Galbraith <efault@....de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Sachin Sant <sachinp@...ux.vnet.ibm.com>,
Ross Zwisler <zwisler@...il.com>,
Matt Fleming <matt@...eblueprint.co.uk>,
Michael Ellerman <mpe@...erman.id.au>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
"linux-next@...r.kernel.org" <linux-next@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>
Subject: Re: [tip:sched/core] sched/core: Add debugging code to catch missing update_rq_clock() calls

On Fri, 2017-02-03 at 14:37 +0100, Peter Zijlstra wrote:
> On Fri, Feb 03, 2017 at 01:59:34PM +0100, Mike Galbraith wrote:
> > FWIW, I'm not seeing stalls/hangs while beating hotplug up in tip. (so
> > next grew a wart?)
>
> I've seen it on tip. It looks like hot unplug goes really slow when
> there's running tasks on the CPU being taken down.
>
> What I did was something like:
>
> taskset -p $((1<<1)) $$
> for ((i=0; i<20; i++)) do while :; do :; done & done
>
> taskset -p $((1<<0)) $$
> echo 0 > /sys/devices/system/cpu/cpu1/online
>
> And with those 20 tasks stuck sucking cycles on CPU1, the unplug goes
> _really_ slow and the RCU stall triggers. What I suspect happens is that
> hotplug stops participating in the RCU state machine early, but only
> tells RCU about it really late, and in between RCU gets suspicious that
> it's taking too long.
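The reproduction quoted above can be sketched as one annotated script. The commands and CPU numbers are the ones Peter quoted; the comments and the 20-task count framing are annotation only, and running it requires root plus a machine where taking CPU1 offline is safe:

```shell
# Pin this shell to CPU1 (mask 1<<1 == 2) so the busy loops it spawns
# inherit that affinity.
taskset -p $((1 << 1)) $$

# Spawn 20 background busy loops, all stuck burning cycles on CPU1.
for ((i = 0; i < 20; i++)); do
    while :; do :; done &
done

# Move the shell itself back to CPU0 (mask 1<<0 == 1), leaving only
# the spinners on CPU1.
taskset -p $((1 << 0)) $$

# Take CPU1 offline; with runnable tasks still queued there, the
# unplug goes slowly and may trigger the RCU stall warning.
echo 0 > /sys/devices/system/cpu/cpu1/online
```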
Ah. I wasn't doing really hard pounding, just running a couple of
instances of Steven's script. To beat hell out of it, I add futextest,
stockfish and a small kbuild on a big box.
-Mike