Message-ID: <20170217162049.765efd4b@gandalf.local.home>
Date:   Fri, 17 Feb 2017 16:20:49 -0500
From:   "Steven Rostedt (VMware)" <rostedt@...dmis.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Clark Williams <williams@...hat.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: [PATCH] sched/rt: Add comments describing the RT IPI pull method


From: "Steven Rostedt (VMware)" <rostedt@...dmis.org>

While looking into optimizations for the RT scheduler IPI logic, I realized
that the comments describing it are lacking. It deserves a lengthy
description of its design.

Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
---
 kernel/sched/rt.c | 63 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 4101f9d..cef9579 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1928,6 +1928,69 @@ static int find_next_push_cpu(struct rq *rq)
 #define RT_PUSH_IPI_EXECUTING		1
 #define RT_PUSH_IPI_RESTART		2
 
+/*
+ * When a high priority task schedules out from a CPU and a lower priority
+ * task is scheduled in, a check is made to see if there are any RT tasks
+ * on other CPUs that are waiting to run because a higher priority RT task
+ * is currently running on its CPU. In this case, the CPU with multiple RT
+ * tasks queued on it (overloaded) needs to be notified that a CPU has opened
+ * up that may be able to run one of its waiting RT tasks.
+ *
+ * Instead of taking the rq lock of the overloaded CPU, which could cause
+ * huge contention on boxes with several cores if lots of CPUs have just
+ * scheduled a lower priority task and there's only one overloaded CPU,
+ * each CPU that is now running a lower priority task simply sends an IPI
+ * to the overloaded CPU with multiple RT tasks queued on it.
+ *
+ * Instead of sending an IPI to all CPUs that have overloaded RT tasks,
+ * only a single IPI is sent to one CPU. That one will try to push off its
+ * overloaded task and then send an IPI to the next CPU that has overloaded RT
+ * tasks. This stops when all CPUs with overloaded RT tasks have completed.
+ * Just because a CPU may have pushed off its own overloaded RT task does
+ * not mean it should stop sending the IPI around to other overloaded CPUs.
+ * There may be another RT task waiting to run on one of those CPUs that is
+ * of higher priority than the one that was just pushed.
+ *
+ * An optimization that could possibly be made is to build a CPU array similar
+ * to the cpupri array mask of all running RT tasks, but for the overloaded
+ * case. The IPI could then be sent to only the CPU with the highest priority
+ * RT task waiting, and that CPU could send off further IPIs to the CPU with
+ * the next highest waiting task. Since the overloaded case is much less likely
+ * to happen, the complexity of this implementation may not be worth it.
+ * Instead, just send an IPI around to all overloaded CPUs.
+ *
+ * The rq->rt.push_flags holds the status of the IPI that is going around.
+ * A run queue can only send out a single IPI at a time. The possible flags
+ * for rq->rt.push_flags are:
+ *
+ *    (None or zero):		No IPI is going around for the current rq
+ *    RT_PUSH_IPI_EXECUTING:	An IPI for the rq is being passed around
+ *    RT_PUSH_IPI_RESTART:	The priority of the running task for the rq
+ *				  has changed, and the IPI should restart
+ *				  circulating the overloaded CPUs again.
+ *
+ * rq->rt.push_cpu contains the CPU that is being sent the IPI. It is updated
+ * before sending to the next CPU.
+ *
+ * When an rq schedules a lower priority task than what was currently
+ * running, the next CPU with overloaded RT tasks is examined first.
+ * That is, if CPUs 1 and 5 are overloaded, and CPU 3 schedules a lower
+ * priority task, it will send an IPI first to CPU 5, then CPU 5 will
+ * send to CPU 1 if it is still overloaded. CPU 1 will clear the
+ * rq->rt.push_flags if RT_PUSH_IPI_RESTART is not set.
+ *
+ * The first CPU to notice IPI_RESTART is set will clear that flag and then
+ * send an IPI to the next overloaded CPU after the rq->cpu and not the next
+ * CPU after push_cpu. That is, if CPUs 1, 4 and 5 are overloaded when CPU 3
+ * schedules a lower priority task, and the IPI_RESTART gets set while the
+ * handling is being done on CPU 5, it will clear the flag and send it back to
+ * CPU 4 instead of CPU 1.
+ *
+ * Note, the above logic can be disabled by turning off the sched_feature
+ * RT_PUSH_IPI. Then the rq lock of the overloaded CPU will simply be
+ * taken by the CPU requesting a pull and the waiting RT task will be pulled
+ * by that CPU. This may be fine for machines with few CPUs.
+ */
 static void tell_cpu_to_push(struct rq *rq)
 {
 	int cpu;
-- 
2.9.3
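
Not part of the patch, but for readers following along: below is a minimal
userspace sketch of the traversal order the new comment describes, using the
same example CPU numbers (source CPU 3, overloaded CPUs 1 and 5). The
find_next_push_cpu() here is a simplified stand-in for the kernel helper of
the same name, not its actual implementation, and the real code keeps its
state in rq->rt.push_cpu and rq->rt.push_flags rather than in local variables.

/*
 * Illustrative userspace sketch only: circulate a "push IPI" among the
 * overloaded CPUs, starting from the CPU that just scheduled a lower
 * priority task, wrapping around the CPU numbers, and stopping once the
 * scan would come back to the source CPU.
 */
#include <stdio.h>

#define NR_CPUS 8

static int rt_overloaded[NR_CPUS];	/* 1 if the CPU has waiting RT tasks queued */

/*
 * Next overloaded CPU after @cpu, scanning upward and wrapping around;
 * returns NR_CPUS once the scan reaches the source CPU again.
 */
static int find_next_push_cpu(int src_cpu, int cpu)
{
	int i, next;

	for (i = 1; i <= NR_CPUS; i++) {
		next = (cpu + i) % NR_CPUS;
		if (next == src_cpu)
			return NR_CPUS;		/* made a full lap: stop */
		if (rt_overloaded[next])
			return next;
	}
	return NR_CPUS;
}

int main(void)
{
	int src_cpu = 3;	/* CPU that just scheduled a lower priority task */
	int cpu;

	/* CPUs 1 and 5 have more queued RT tasks than they can run. */
	rt_overloaded[1] = 1;
	rt_overloaded[5] = 1;

	/* Circulate the "IPI": 3 -> 5 -> 1 -> stop, matching the comment. */
	for (cpu = find_next_push_cpu(src_cpu, src_cpu);
	     cpu < NR_CPUS;
	     cpu = find_next_push_cpu(src_cpu, cpu))
		printf("IPI to overloaded CPU %d, which tries to push a task toward CPU %d\n",
		       cpu, src_cpu);

	return 0;
}

When RT_PUSH_IPI_RESTART is set, the real code re-anchors the scan at the
next overloaded CPU after rq->cpu instead of after push_cpu, which in this
sketch would correspond to restarting the loop with
find_next_push_cpu(src_cpu, src_cpu).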
