Message-ID: <jhjeeiemlsw.mognet@arm.com>
Date: Thu, 21 Jan 2021 14:01:03 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org,
tglx@...utronix.de
Cc: linux-kernel@...r.kernel.org, jiangshanlai@...il.com,
cai@...hat.com, vincent.donnefort@....com, decui@...rosoft.com,
paulmck@...nel.org, vincent.guittot@...aro.org,
rostedt@...dmis.org, tj@...nel.org, peterz@...radead.org
Subject: Re: [PATCH -v3 8/9] sched: Fix CPU hotplug / tighten is_per_cpu_kthread()
On 21/01/21 11:17, Peter Zijlstra wrote:
> @@ -7504,6 +7525,9 @@ int sched_cpu_deactivate(unsigned int cp
>  	 * preempt-disabled and RCU users of this state to go away such that
>  	 * all new such users will observe it.
>  	 *
> +	 * Specifically, we rely on ttwu to no longer target this CPU, see
> +	 * ttwu_queue_cond() and is_cpu_allowed().
> +	 *
So the last time ttwu_queue_wakelist() can append a task onto a dying
CPU's wakelist is before sched_cpu_deactivate()'s synchronize_rcu()
returns.
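
To spell that reliance out (my paraphrase of the checks after this series,
reproduced from memory rather than verbatim, so details may differ): once
the CPU has dropped out of the active mask and the grace period has ended,
new wakers can no longer pick it as a target in the first place, roughly:

	/* Sketch of is_cpu_allowed() as I read it with this series applied. */
	static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
	{
		/* Not in the task's cpumask: never a valid target. */
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			return false;

		/* migrate_disable() regions must be allowed to finish. */
		if (is_migration_disabled(p))
			return cpu_online(cpu);

		/* Regular (non-kthread) tasks must not target a !active CPU. */
		if (!(p->flags & PF_KTHREAD))
			return cpu_active(cpu);

		/* Genuine per-CPU kthreads may stay until the CPU goes fully offline. */
		if (kthread_is_per_cpu(p))
			return cpu_online(cpu);

		/* Other kthreads get pushed away once balance_push kicks in. */
		if (cpu_rq(cpu)->balance_push)
			return false;

		return cpu_online(cpu);
	}
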
As discussed on IRC, paranoia would have us issue a
flush_smp_call_function_from_idle()
upon returning from said sync, but this will require further surgery.
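
i.e. roughly the below (hand-written hunk against sched_cpu_deactivate(),
untested, and as said it would need that further surgery first since the
flush machinery expects to run from the idle path):

 	 * Do sync before park smpboot threads to take care the rcu boost case.
 	 */
 	synchronize_rcu();
+
+	/*
+	 * Paranoid flush: process anything that made it onto this CPU's
+	 * wakelist before the grace period ended, rather than leaving it
+	 * pending until the next trip through the idle loop.
+	 */
+	flush_smp_call_function_from_idle();
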
Do we want something like the below in the meantime? Ideally we'd warn on
setting rq->ttwu_pending when !cpu_active(), but as per the above this is
allowed before the synchronize_rcu() returns.
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ed6ff94aa68a..4b5b4b02ee64 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7590,6 +7590,7 @@ int sched_cpu_starting(unsigned int cpu)
  */
 int sched_cpu_wait_empty(unsigned int cpu)
 {
+	WARN_ON_ONCE(READ_ONCE(cpu_rq(cpu)->ttwu_pending));
 	balance_hotplug_wait();
 	return 0;
 }
>  	 * Do sync before park smpboot threads to take care the rcu boost case.
>  	 */
>  	synchronize_rcu();
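
FWIW, the "ideal" warning mentioned above would look something like the
below (sketched against __ttwu_queue_wakelist() as I remember it; as said,
it would trip legitimately in the window before the synchronize_rcu()
above returns, hence settling for the sched_cpu_wait_empty() check
instead):

	static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
	{
		struct rq *rq = cpu_rq(cpu);

		p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);

		/* Would fire spuriously until sched_cpu_deactivate()'s sync returns. */
		WARN_ON_ONCE(!cpu_active(cpu));

		WRITE_ONCE(rq->ttwu_pending, 1);
		__smp_call_single_queue(cpu, &p->wake_entry.llist);
	}
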