Message-Id: <20170919153126.GA2955@linux.vnet.ibm.com>
Date: Tue, 19 Sep 2017 08:31:26 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Neeraj Upadhyay <neeraju@...eaurora.org>, josh@...htriplett.org,
mathieu.desnoyers@...icios.com, jiangshanlai@...il.com,
linux-kernel@...r.kernel.org, sramana@...eaurora.org,
prsood@...eaurora.org, pkondeti@...eaurora.org,
markivx@...eaurora.org, peterz@...radead.org
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu
On Mon, Sep 18, 2017 at 09:24:12AM -0700, Paul E. McKenney wrote:
> On Mon, Sep 18, 2017 at 12:12:13PM -0400, Steven Rostedt wrote:
> > On Mon, 18 Sep 2017 09:01:25 -0700
> > "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> >
> >
> > > sched: Make resched_cpu() unconditional
> > >
> > > The current implementation of synchronize_sched_expedited() incorrectly
> > > assumes that resched_cpu() is unconditional, which it is not. This means
> > > that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
> > > fails as follows (analysis by Neeraj Upadhyay):
> > >
> > > o   CPU1 is waiting for the expedited wait to complete:
> > >
> > >     sync_rcu_exp_select_cpus
> > >          rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
> > >          IPI sent to CPU5
> > >
> > >     synchronize_sched_expedited_wait
> > >             ret = swait_event_timeout(
> > >                         rsp->expedited_wq,
> > >                         sync_rcu_preempt_exp_done(rnp_root),
> > >                         jiffies_stall);
> > >
> > >     expmask = 0x20, and CPU5 is in the idle path (in cpuidle_enter())
> > >
> > > o   CPU5 handles the IPI but fails to acquire the rq lock:
> > >
> > >     Handles IPI
> > >          sync_sched_exp_handler
> > >              resched_cpu
> > >                  returns because the trylock fails to acquire rq->lock
> > >                  need_resched is not set
> > >
> > > o   CPU5 calls rcu_idle_enter() and, because need_resched is not set,
> > >     goes idle (schedule() is not called).
> > >
> > > o   CPU1 reports an RCU stall.
> > >
> > > Given that resched_cpu() is used only by RCU, this commit fixes the
> > > assumption by making resched_cpu() unconditional.
> >
> > Probably want to run this with several workloads with lockdep enabled
> > first.
>
> As soon as I work through the backlog of lockdep complaints that
> appeared in the last merge window... :-(
And this patch survived all rcutorture scenarios, including those with
lockdep enabled. There were failures, but these are pre-existing issues
I am chasing: Lost timeouts on TREE01 and rt_mutex trying to awaken
an offline CPU in TREE03.
So I have this one queued. Objections?
Thanx, Paul
------------------------------------------------------------------------
commit bc43e2e7e08134e6f403ac845edcf4f85668d803
Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Date: Mon Sep 18 08:54:40 2017 -0700
    sched: Make resched_cpu() unconditional

    The current implementation of synchronize_sched_expedited() incorrectly
    assumes that resched_cpu() is unconditional, which it is not.  This means
    that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
    fails as follows (analysis by Neeraj Upadhyay):

    o   CPU1 is waiting for the expedited wait to complete:

        sync_rcu_exp_select_cpus
             rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
             IPI sent to CPU5

        synchronize_sched_expedited_wait
                ret = swait_event_timeout(
                            rsp->expedited_wq,
                            sync_rcu_preempt_exp_done(rnp_root),
                            jiffies_stall);

        expmask = 0x20, and CPU5 is in the idle path (in cpuidle_enter())

    o   CPU5 handles the IPI but fails to acquire the rq lock:

        Handles IPI
             sync_sched_exp_handler
                 resched_cpu
                     returns because the trylock fails to acquire rq->lock
                     need_resched is not set

    o   CPU5 calls rcu_idle_enter() and, because need_resched is not set,
        goes idle (schedule() is not called).

    o   CPU1 reports an RCU stall.

    Given that resched_cpu() is used only by RCU, this commit fixes the
    assumption by making resched_cpu() unconditional.

    Reported-by: Neeraj Upadhyay <neeraju@...eaurora.org>
    Suggested-by: Neeraj Upadhyay <neeraju@...eaurora.org>
    Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@...radead.org>
    Cc: Steven Rostedt <rostedt@...dmis.org>
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cab8c5ec128e..b2281971894c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,8 +505,7 @@ void resched_cpu(int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
-	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-		return;
+	raw_spin_lock_irqsave(&rq->lock, flags);
 	resched_curr(rq);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }