Message-ID: <20200720142105.GR10769@hirez.programming.kicks-ass.net>
Date: Mon, 20 Jul 2020 16:21:05 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Jiri Slaby <jirislaby@...nel.org>,
Christian Brauner <christian.brauner@...ntu.com>,
christian@...uner.io, "Eric W. Biederman" <ebiederm@...ssion.com>,
Linux kernel mailing list <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...e.de>,
Dave Jones <davej@...emonkey.org.uk>,
Paul Gortmaker <paul.gortmaker@...driver.com>
Subject: Re: 5.8-rc*: kernel BUG at kernel/signal.c:1917
On Mon, Jul 20, 2020 at 04:02:24PM +0200, Oleg Nesterov wrote:
> I have to admit, I do not understand the usage of prev_state in schedule(),
> it looks really, really subtle...
Right, so commit dbfb089d360 solved a problem where schedule() re-read
prev->state vs prev->on_rq = 0. That is, schedule()'s dequeue and
ttwu()'s enqueue disagreed over sched_contributes_to_load, and as a
result load-accounting went wobbly.
Now, looking at that commit again, I might've solved the problem twice
:-P
So on the one hand, I provided ordering:
	LOAD p->state			LOAD-ACQUIRE p->on_rq == 0
	MB
	STORE p->on_rq, 0		STORE p->state, TASK_WAKING
such that ttwu() will only change p->state after on_rq==0, which is
after loading p->state in schedule().
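
For illustration only, a userspace analogue of that ordering (hypothetical
code, not the kernel's: C11 atomics and a seq_cst fence stand in for the
kernel barriers, and the thread/variable names are made up) could look like:

	#include <stdatomic.h>
	#include <pthread.h>
	#include <stdio.h>

	#define TASK_UNINTERRUPTIBLE	0x0002
	#define TASK_WAKING		0x0200

	static _Atomic unsigned int state = TASK_UNINTERRUPTIBLE;	/* p->state */
	static _Atomic unsigned int on_rq = 1;				/* p->on_rq */
	static unsigned int seen_state;

	/* schedule() side of the diagram: LOAD p->state; MB; STORE p->on_rq, 0 */
	static void *schedule_side(void *arg)
	{
		seen_state = atomic_load_explicit(&state, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);		/* MB */
		atomic_store_explicit(&on_rq, 0, memory_order_relaxed);
		return NULL;
	}

	/* ttwu() side: LOAD-ACQUIRE p->on_rq == 0; then STORE p->state, TASK_WAKING */
	static void *ttwu_side(void *arg)
	{
		while (atomic_load_explicit(&on_rq, memory_order_acquire))
			;	/* wait for on_rq == 0 */
		atomic_store_explicit(&state, TASK_WAKING, memory_order_relaxed);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, schedule_side, NULL);
		pthread_create(&b, NULL, ttwu_side, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		/* The schedule() side can never have observed TASK_WAKING here. */
		printf("schedule() loaded p->state = %#x\n", seen_state);
		return 0;
	}

The waker only stores to ->state after its acquire-load has seen on_rq == 0,
which the full barrier orders after the ->state load on the schedule() side.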
At the same time, I also had schedule() set
p->sched_contributes_to_load once, and then consistently used that value
throughout, without ever looking at p->state again, which also makes it
much harder to mess load-avg up.
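
A toy sketch of that second part (illustrative only: the struct, the
deactivate() helper and the simplified flag computation below are made-up
stand-ins for the real kernel code, which also checks TASK_NOLOAD etc.):

	#include <stdio.h>

	#define TASK_UNINTERRUPTIBLE	0x0002

	struct task {
		unsigned int		state;
		unsigned int		sched_contributes_to_load:1;
	};

	static void deactivate(struct task *p, unsigned int prev_state)
	{
		/* Computed exactly once, from the one snapshot of ->state. */
		p->sched_contributes_to_load = !!(prev_state & TASK_UNINTERRUPTIBLE);

		/*
		 * From here on, both the dequeue side and the later wakeup
		 * consult only the flag; p->state is never read again for
		 * load accounting, so a concurrent wakeup changing ->state
		 * cannot make the two sides disagree.
		 */
	}

	int main(void)
	{
		struct task t = { .state = TASK_UNINTERRUPTIBLE };

		deactivate(&t, t.state);
		printf("contributes_to_load=%u\n", t.sched_contributes_to_load);
		return 0;
	}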
Now, the ordering in schedule() relies on doing the p->state load
before:

	spin_lock(rq->lock)
	smp_mb__after_spinlock();

and doing a re-load check after, with the assumption that if the reload
is different, it will not block.
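
Roughly, as a userspace sketch (hypothetical: a pthread mutex stands in for
rq->lock + smp_mb__after_spinlock(), and should_block() is a made-up helper):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
	static _Atomic unsigned int task_state = 2;	/* stand-in for prev->state */

	static bool should_block(bool preempt)
	{
		/* The p->state load done before taking the lock ... */
		unsigned int prev_state = atomic_load(&task_state);
		bool block;

		pthread_mutex_lock(&rq_lock);

		/*
		 * ... and the re-load check done after: if a wakeup changed
		 * ->state in between, the two values differ and we must not
		 * block (dequeue).
		 */
		block = !preempt && prev_state &&
			prev_state == atomic_load(&task_state);

		pthread_mutex_unlock(&rq_lock);
		return block;
	}

	int main(void)
	{
		printf("block=%d\n", should_block(false));
		return 0;
	}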
That said, in a crossed email, I just proposed we could simplify all
this like so.. but now I need to go ask people to re-validate that
loadavg muck again :-/
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a2a244af9a53..437fc3b241f2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4193,9 +4193,6 @@ static void __sched notrace __schedule(bool preempt)
 	local_irq_disable();
 	rcu_note_context_switch(preempt);
 
-	/* See deactivate_task() below. */
-	prev_state = prev->state;
-
 	/*
 	 * Make sure that signal_pending_state()->signal_pending() below
 	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
@@ -4223,7 +4220,8 @@ static void __sched notrace __schedule(bool preempt)
 	 * We must re-load prev->state in case ttwu_remote() changed it
 	 * before we acquired rq->lock.
 	 */
-	if (!preempt && prev_state && prev_state == prev->state) {
+	prev_state = prev->state;
+	if (!preempt && prev_state) {
 		if (signal_pending_state(prev_state, prev)) {
 			prev->state = TASK_RUNNING;
 		} else {
@@ -4237,10 +4235,12 @@ static void __sched notrace __schedule(bool preempt)
 
 	/*
 	 * __schedule()			ttwu()
-	 * prev_state = prev->state;	if (READ_ONCE(p->on_rq) && ...)
-	 * LOCK rq->lock		  goto out;
-	 * smp_mb__after_spinlock();	smp_acquire__after_ctrl_dep();
-	 * p->on_rq = 0;		p->state = TASK_WAKING;
+	 * if (prev_state)		if (p->on_rq && ...)
+	 *   p->on_rq = 0;		  goto out;
+	 *				smp_acquire__after_ctrl_dep();
+	 *				p->state = TASK_WAKING
+	 *
+	 * Where __schedule() and ttwu() have matching control dependencies.
 	 *
 	 * After this, schedule() must not care about p->state any more.
 	 */