Message-ID: <20151120100230.GA17308@twins.programming.kicks-ass.net>
Date: Fri, 20 Nov 2015 11:02:30 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Paul Turner <pjt@...gle.com>
Cc: Ingo Molnar <mingo@...nel.org>, Oleg Nesterov <oleg@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
boqun.feng@...il.com, Jonathan Corbet <corbet@....net>,
mhocko@...nel.org, dhowells@...hat.com,
Linus Torvalds <torvalds@...ux-foundation.org>,
will.deacon@....com
Subject: Re: [PATCH 2/4] sched: Document Program-Order guarantees
On Mon, Nov 02, 2015 at 12:27:05PM -0800, Paul Turner wrote:
> I suspect this part might be more explicitly expressed by specifying
> the requirements that migration satisfies; then providing an example.
> This makes it easier for others to reason about the locks and saves
> worrying about whether the examples hit our 3 million sub-cases.
Something like so then? Note that this patch (obviously) comes after
introducing smp_cond_acquire(), simplifying the RELEASE/ACQUIRE on
p->on_cpu.
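
(For anyone who wants to poke at that ordering outside the kernel: below is a
rough userspace C11 analogue of the RELEASE/ACQUIRE pairing on ->on_cpu. It is
only a sketch; the names on_cpu, task_state, old_cpu() and waker() are invented
for the example and this is not the kernel code. It just shows why the
store-release / load-acquire pair makes the sleeping task's last stores visible
to the waker. A similar sketch of the migration lock chain is appended after
the patch.)

/*
 * Illustrative userspace analogue only, not the kernel implementation.
 * Build with: cc -std=c11 -pthread example.c
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int on_cpu = 1;	/* task still running on its old CPU */
static int task_state;		/* payload that must be visible to the waker */

static void *old_cpu(void *arg)
{
	task_state = 42;	/* the task's last activity on the old CPU */
	/* 1) RELEASE: publish task_state before clearing on_cpu */
	atomic_store_explicit(&on_cpu, 0, memory_order_release);
	return NULL;
}

static void *waker(void *arg)
{
	/* 2) ACQUIRE: spin until !on_cpu; models smp_cond_acquire(!p->on_cpu) */
	while (atomic_load_explicit(&on_cpu, memory_order_acquire))
		;
	/* everything ordered before the release above is visible here */
	printf("task_state = %d\n", task_state);	/* prints 42 */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, old_cpu, NULL);
	pthread_create(&b, NULL, waker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}
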
---
Subject: sched: Document Program-Order guarantees
From: Peter Zijlstra <peterz@...radead.org>
Date: Tue Nov 17 19:01:11 CET 2015
These are some notes on the scheduler locking and how it provides
program order guarantees on SMP systems.
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Jonathan Corbet <corbet@....net>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: David Howells <dhowells@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Will Deacon <will.deacon@....com>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Boqun Feng <boqun.feng@...il.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/sched/core.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 91 insertions(+)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1916,6 +1916,97 @@ static void ttwu_queue(struct task_struc
raw_spin_unlock(&rq->lock);
}
+/*
+ * Notes on Program-Order guarantees on SMP systems.
+ *
+ * MIGRATION
+ *
+ * The basic program-order guarantee on SMP systems is that when a task [t]
+ * migrates, all its activity on its old cpu [c0] happens-before any subsequent
+ * execution on its new cpu [c1].
+ *
+ * For migration (of runnable tasks) this is provided by the following means:
+ *
+ * A) UNLOCK of the rq(c0)->lock scheduling out task t
+ * B) migration for t is required to synchronize *both* rq(c0)->lock and
+ * rq(c1)->lock (if not at the same time, then in that order).
+ * C) LOCK of the rq(c1)->lock scheduling in task t
+ *
+ * Transitivity guarantees that B happens after A and C after B.
+ * Note: we only require RCpc transitivity.
+ * Note: the cpu doing B need not be c0 or c1
+ *
+ * Example:
+ *
+ *   CPU0            CPU1            CPU2
+ *
+ *   LOCK rq(0)->lock
+ *   sched-out X
+ *   sched-in Y
+ *   UNLOCK rq(0)->lock
+ *
+ *                                   LOCK rq(0)->lock // orders against CPU0
+ *                                   dequeue X
+ *                                   UNLOCK rq(0)->lock
+ *
+ *                                   LOCK rq(1)->lock
+ *                                   enqueue X
+ *                                   UNLOCK rq(1)->lock
+ *
+ *                   LOCK rq(1)->lock // orders against CPU2
+ *                   sched-out Z
+ *                   sched-in X
+ *                   UNLOCK rq(1)->lock
+ *
+ *
+ * BLOCKING -- aka. SLEEP + WAKEUP
+ *
+ * For blocking we (obviously) need to provide the same guarantee as for
+ * migration. However, the means are completely different as there is no lock
+ * chain to provide order. Instead we do:
+ *
+ * 1) smp_store_release(X->on_cpu, 0)
+ * 2) smp_cond_acquire(!X->on_cpu)
+ *
+ * Example:
+ *
+ *   CPU0 (schedule)  CPU1 (try_to_wake_up) CPU2 (schedule)
+ *
+ *   LOCK rq(0)->lock LOCK X->pi_lock
+ *   dequeue X
+ *   sched-out X
+ *   smp_store_release(X->on_cpu, 0);
+ *
+ *                    smp_cond_acquire(!X->on_cpu);
+ *                    X->state = WAKING
+ *                    set_task_cpu(X,2)
+ *
+ *                    LOCK rq(2)->lock
+ *                    enqueue X
+ *                    X->state = RUNNING
+ *                    UNLOCK rq(2)->lock
+ *
+ *                                          LOCK rq(2)->lock // orders against CPU1
+ *                                          sched-out Z
+ *                                          sched-in X
+ *                                          UNLOCK rq(2)->lock
+ *
+ *                    UNLOCK X->pi_lock
+ *   UNLOCK rq(0)->lock
+ *
+ *
+ * However, for wakeups there is a second guarantee we must provide, namely we
+ * must observe the state that led to our wakeup. That is, not only must our
+ * task observe its own prior state, it must also observe the stores prior to
+ * its wakeup.
+ *
+ * This means that any means of doing remote wakeups must order the CPU doing
+ * the wakeup against the CPU the task is going to end up running on. This,
+ * however, is already required for the regular Program-Order guarantee above,
+ * since the waking CPU is the one issuing the ACQUIRE (2).
+ *
+ */
+
/**
* try_to_wake_up - wake up a thread
* @p: the thread to be awakened
--
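
And the promised sketch for the migration lock chain (the A/B/C steps in the
comment): again a rough userspace pthread analogue, with each runqueue reduced
to a flag protected by its lock. All names here (cpu0(), migrator(), cpu1(),
X_data, X_on_rq0, X_on_rq1) are invented for illustration and none of it is
the kernel code; it only shows that the UNLOCK/LOCK chain makes CPU0's writes
visible to the thread scheduling X in.

/*
 * Illustrative userspace analogue only, not the kernel implementation.
 * Build with: cc -std=c11 -pthread example.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rq0_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t rq1_lock = PTHREAD_MUTEX_INITIALIZER;
static int X_on_rq0;	/* protected by rq0_lock */
static int X_on_rq1;	/* protected by rq1_lock */
static int X_data;	/* plain data written while X ran on CPU0 */

static void *cpu0(void *arg)	/* A) UNLOCK of rq(0)->lock scheduling out X */
{
	pthread_mutex_lock(&rq0_lock);
	X_data = 42;		/* X's last activity on its old CPU */
	X_on_rq0 = 1;		/* X is now on rq0, no longer running */
	pthread_mutex_unlock(&rq0_lock);
	return NULL;
}

static void *migrator(void *arg)	/* B) takes *both* rq0_lock and rq1_lock */
{
	int moved = 0;

	while (!moved) {
		pthread_mutex_lock(&rq0_lock);
		if (X_on_rq0) {		/* X has been scheduled out */
			X_on_rq0 = 0;	/* dequeue X */
			moved = 1;
		}
		pthread_mutex_unlock(&rq0_lock);
	}
	pthread_mutex_lock(&rq1_lock);
	X_on_rq1 = 1;			/* enqueue X */
	pthread_mutex_unlock(&rq1_lock);
	return NULL;
}

static void *cpu1(void *arg)	/* C) LOCK of rq(1)->lock scheduling in X */
{
	int seen = 0;

	while (!seen) {
		pthread_mutex_lock(&rq1_lock);
		if (X_on_rq1) {
			/* By the A -> B -> C lock chain, X_data is visible. */
			printf("X_data = %d\n", X_data);	/* prints 42 */
			seen = 1;
		}
		pthread_mutex_unlock(&rq1_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t0, tm, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&tm, NULL, migrator, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(tm, NULL);
	pthread_join(t1, NULL);
	return 0;
}

The only point being made: once cpu1() observes X on its runqueue under
rq1_lock, the unlock/lock chain through the migrator guarantees it also
observes X_data = 42, which is exactly the happens-before the comment claims
for migration.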