Message-ID: <20160328062547.GD6344@twins.programming.kicks-ass.net>
Date: Mon, 28 Mar 2016 08:25:47 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
"Chatre, Reinette" <reinette.chatre@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
Josh Triplett <josh@...htriplett.org>,
Ross Green <rgkernel@...il.com>,
John Stultz <john.stultz@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
lkml <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>, dipankar@...ibm.com,
Andrew Morton <akpm@...ux-foundation.org>,
rostedt <rostedt@...dmis.org>,
David Howells <dhowells@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Darren Hart <dvhart@...ux.intel.com>,
Frédéric Weisbecker <fweisbec@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
pranith kumar <bobby.prani@...il.com>
Subject: Re: rcu_preempt self-detected stall on CPU from 4.5-rc3, since 3.17
On Sun, Mar 27, 2016 at 02:06:41PM -0700, Paul E. McKenney wrote:
> > But, you need hotplug for this to happen, right?
>
> I do, but Ross Green is seeing something that looks similar, and without
> CPU hotplug.
Yes, but that's two differences so far: you need hotplug, and he's on ARM
(which doesn't have TIF_POLLING_NRFLAG).
So either we're all looking at the wrong thing or these really are two
different issues.
> > We should not be migrating towards, or waking on, CPUs no longer present
> > in cpu_active_mask, and there is an rcu/sched synchronization after
> > clearing that bit. Furthermore, migration_call() does a
> > sched_ttwu_pending() (waking any remaining stragglers) before we migrate
> > all runnable tasks off the dying CPU.
>
> OK, so I should instrument migration_call() if I get the repro rate up?
Can do, maybe try the below first. (yes I know how long it all takes :/)
> > The other interesting case would be resched_cpu(), which uses
> > set_nr_and_not_polling() to kick a remote cpu into calling schedule().
> > It atomically sets TIF_NEED_RESCHED and returns whether
> > TIF_POLLING_NRFLAG was not set; if it was indeed not set, resched_cpu()
> > sends an IPI.
> >
> > This assumes the idle 'exit' path will do the same as the IPI does; and
> > if you look at cpu_idle_loop() it does indeed do both
> > preempt_fold_need_resched() and sched_ttwu_pending().
> >
> > Note that one cannot rely on irq_enter()/irq_exit() being called for the
> > scheduler IPI.
>
> OK, thank you for the info! Any specific debug actions?
Dunno, something like the below should bring visibility into the
(lockless) wake_list thingy.
So these trace_printk()s should happen between trace_sched_waking() and
trace_sched_wakeup() (I've not fully read the thread, but ISTR you had
some traces with those tracepoints enabled).
---
 arch/x86/include/asm/bitops.h | 6 ++++--
 kernel/sched/core.c           | 9 +++++++++
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 7766d1cf096e..5345784d5e41 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -112,11 +112,13 @@ clear_bit(long nr, volatile unsigned long *addr)
 	if (IS_IMMEDIATE(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" ((u8)~CONST_MASK(nr))
+			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX "btr %1,%0"
 			: BITOP_ADDR(addr)
-			: "Ir" (nr));
+			: "Ir" (nr)
+			: "memory");
 	}
 }
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0b21e7a724e1..b446f73c530d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1669,6 +1669,7 @@ void sched_ttwu_pending(void)
 	while (llist) {
 		p = llist_entry(llist, struct task_struct, wake_entry);
 		llist = llist_next(llist);
+		trace_printk("waking %d\n", p->pid);
 		ttwu_do_activate(rq, p, 0);
 	}
 
@@ -1719,6 +1720,7 @@ static void ttwu_queue_remote(struct task_struct *p, int cpu)
 	struct rq *rq = cpu_rq(cpu);
 
 	if (llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list)) {
+		trace_printk("queued %d for waking on %d\n", p->pid, cpu);
 		if (!set_nr_if_polling(rq->idle))
 			smp_send_reschedule(cpu);
 		else
@@ -5397,10 +5399,17 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		migrate_tasks(rq);
 		BUG_ON(rq->nr_running != 1); /* the migration thread */
 		raw_spin_unlock_irqrestore(&rq->lock, flags);
+
+		/* really bad m'kay */
+		WARN_ON(!llist_empty(&rq->wake_list));
+
 		break;
 
 	case CPU_DEAD:
 		calc_load_migrate(rq);
+
+		/* more bad */
+		WARN_ON(!llist_empty(&rq->wake_list));
 		break;
 #endif
 	}
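
Note on the ttwu_queue_remote() hunk: the trace_printk() sits inside the
llist_add() branch because llist_add() returns true only when the list
was previously empty, i.e. only the first queuer gets to kick the remote
CPU. A stand-alone sketch of that lock-free push in C11 atomics
(simplified user-space code, not the kernel's llist implementation):

/* Toy model of the llist_add() idiom: a lock-free push that reports
 * whether the list was empty, so only the first waker sends the kick. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct llist_node { struct llist_node *next; };
struct llist_head { _Atomic(struct llist_node *) first; };

static bool llist_add(struct llist_node *node, struct llist_head *head)
{
	struct llist_node *first = atomic_load(&head->first);

	do {
		node->next = first;	/* 'first' is refreshed on CAS failure */
	} while (!atomic_compare_exchange_weak(&head->first, &first, node));

	return first == NULL;		/* true: list was empty, kick the CPU */
}

int main(void)
{
	struct llist_head wake_list = { NULL };
	struct llist_node a, b;

	printf("first add kicks:  %d\n", llist_add(&a, &wake_list)); /* 1 */
	printf("second add kicks: %d\n", llist_add(&b, &wake_list)); /* 0 */
	return 0;
}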