Message-ID: <20201117083016.GK3121392@hirez.programming.kicks-ass.net>
Date: Tue, 17 Nov 2020 09:30:16 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Will Deacon <will@...nel.org>, Davidlohr Bueso <dave@...olabs.net>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: [PATCH] sched: Fix data-race in wakeup
On Mon, Nov 16, 2020 at 07:31:49PM +0000, Mel Gorman wrote:
> And this works.
Yay!
> sched_psi_wake_requeue can probably stay with the other three fields
> given they are under the rq lock but sched_remote_wakeup needs to move
> out.
I _think_ we can move the bit into the unserialized section below.
It's a bit cheeky, but it should work I think because the only time we
actually use this bit, we're guaranteed the task isn't actually running,
so current doesn't exist.
I suppose the question is whether this is worth saving 31 bits over...
How's this?
---
Subject: sched: Fix data-race in wakeup
From: Peter Zijlstra <peterz@...radead.org>
Date: Tue Nov 17 09:08:41 CET 2020
Mel reported that on some ARM64 platforms loadavg goes bananas and
tracked it down to the following data race:
  CPU0                                          CPU1

  schedule()
    prev->sched_contributes_to_load = X;
    deactivate_task(prev);

                                                try_to_wake_up()
                                                  if (p->on_rq && ...) // false
                                                  if (smp_load_acquire(&p->on_cpu) && // true
                                                    ttwu_queue_wakelist())
                                                      p->sched_remote_wakeup = Y;

                                                smp_store_release(prev->on_cpu, 0);
where both p->sched_contributes_to_load and p->sched_remote_wakeup are
in the same word, and thus the stores X and Y race (and can clobber
one another's data).
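For illustration only, the clobber can be reproduced in userspace with any
two bitfields sharing a word, because each bitfield store is a non-atomic
read-modify-write of the whole containing word. The sketch below is not part
of the fix and all names in it are made up:

  #include <pthread.h>
  #include <stdio.h>

  /* Two flags packed into one word, like the task_struct bitfields. */
  static struct {
          unsigned contributes_to_load:1;
          unsigned remote_wakeup:1;
  } flags;

  static void *writer_a(void *arg)
  {
          flags.contributes_to_load = 1;  /* load word, set bit, store word */
          return NULL;
  }

  static void *writer_b(void *arg)
  {
          flags.remote_wakeup = 1;        /* concurrent RMW of the same word */
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, writer_a, NULL);
          pthread_create(&b, NULL, writer_b, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);

          /* Either bit can come out 0 if its store was overwritten. */
          printf("load=%d wakeup=%d\n",
                 flags.contributes_to_load, flags.remote_wakeup);
          return 0;
  }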
Prior to commit c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on
p->on_cpu") the p->on_cpu handoff serialized access to
p->sched_remote_wakeup (just as it still does for
p->sched_contributes_to_load); that commit broke this by calling
ttwu_queue_wakelist() with p->on_cpu != 0.
However, given

  p->XXX = X;                     ttwu()
  schedule()                        if (p->on_rq && ...) // false
    smp_mb__after_spinlock();       if (smp_load_acquire(&p->on_cpu) &&
    deactivate_task()                   ttwu_queue_wakelist())
      p->on_rq = 0;                       p->sched_remote_wakeup = X;

we can be sure any 'current' store is complete and 'current' is
guaranteed asleep. Therefore we can move p->sched_remote_wakeup into
the 'current' flags word.
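For illustration only: the visibility property leaned on above is ordinary
release/acquire publication. Below is a stand-alone C11 sketch with made-up
names; the kernel itself relies on smp_mb__after_spinlock() in schedule(),
which is stronger than the release store used here.

  #include <stdatomic.h>
  #include <pthread.h>
  #include <assert.h>

  static int current_only_data;     /* stands in for 'current'-only stores */
  static atomic_int on_rq = 1;      /* stands in for p->on_rq              */

  static void *prev_task(void *arg)
  {
          current_only_data = 42;                       /* plain store     */
          atomic_store_explicit(&on_rq, 0,
                                memory_order_release);  /* ordered after it */
          return NULL;
  }

  static void *waker(void *arg)
  {
          if (!atomic_load_explicit(&on_rq, memory_order_acquire))
                  assert(current_only_data == 42);      /* guaranteed visible */
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, prev_task, NULL);
          pthread_create(&b, NULL, waker, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          return 0;
  }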
Note: while the observed failure was loadavg accounting gone wrong due
to ttwu() clobbering p->sched_contributes_to_load, the reverse problem
is also possible: schedule() clobbering p->sched_remote_wakeup, which
could result in enqueue_entity() wrecking ->vruntime and causing
scheduling artifacts.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Mel Gorman <mgorman@...hsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
include/linux/sched.h | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -775,7 +775,6 @@ struct task_struct {
 	unsigned			sched_reset_on_fork:1;
 	unsigned			sched_contributes_to_load:1;
 	unsigned			sched_migrated:1;
-	unsigned			sched_remote_wakeup:1;
 #ifdef CONFIG_PSI
 	unsigned			sched_psi_wake_requeue:1;
 #endif
@@ -785,6 +784,18 @@ struct task_struct {
 
 	/* Unserialized, strictly 'current' */
 
+	/*
+	 * p->in_iowait = 1;		ttwu()
+	 * schedule()			  if (p->on_rq && ..) // false
+	 *   smp_mb__after_spinlock();	  if (smp_load_acquire(&p->on_cpu) && // true
+	 *     deactivate_task()	      ttwu_queue_wakelist())
+	 *       p->on_rq = 0;		        p->sched_remote_wakeup = X;
+	 *
+	 * Guarantees all stores of 'current' are visible before
+	 * ->sched_remote_wakeup gets used.
+	 */
+	unsigned			sched_remote_wakeup:1;
+
 	/* Bit to tell LSMs we're in execve(): */
 	unsigned			in_execve:1;
 	unsigned			in_iowait:1;