Message-ID: <157675913272.349305.8936736338884044103.stgit@localhost.localdomain>
Date:   Thu, 19 Dec 2019 15:39:14 +0300
From:   Kirill Tkhai <ktkhai@...tuozzo.com>
To:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        ktkhai@...tuozzo.com
Cc:     linux-kernel@...r.kernel.org
Subject: [PATCH RFC] sched: Micro optimization in pick_next_task() and in
 check_preempt_curr()

In kernel/sched/Makefile, the object files describing the different sched classes
already go in order from the lowest-priority class to the highest-priority class:

idle.o fair.o rt.o deadline.o stop_task.o

The GNU linker documentation says that sections appear in the order in which
they are seen during the link (see [1]):

>Normally, the linker will place files and sections matched by wildcards
>in the order in which they are seen during the link. You can change this
>by using the SORT keyword, which appears before a wildcard pattern
>in parentheses (e.g., SORT(.text*)).

So we may expect the const variables from idle.o to be placed before the
read-only variables from fair.o in the RO_DATA section, the read-only
variables from fair.o before those from rt.o, and so on.

(Also, it looks like link order is already relied upon elsewhere in the kernel,
 e.g. in drivers/md/Makefile.)

Thus, we may introduce an optimization based on the xxx_sched_class addresses
in two hot scheduler functions: pick_next_task() and check_preempt_curr().
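
To illustrate the idea outside the kernel, here is a minimal userspace sketch
(not part of the patch; the array layout and the preempts() helper are only
illustrative). The class descriptors are laid out in memory in ascending
priority order (forced with an array here, since a single translation unit
cannot reproduce the Makefile link order), so a single pointer comparison can
stand in for the old for_each_class() walk:

/* Userspace sketch of the address-ordering idea (not kernel code). */
#include <assert.h>
#include <stdio.h>

struct sched_class { const char *name; };

/* Lowest to highest priority, mirroring the link order
 * idle.o fair.o rt.o deadline.o stop_task.o in kernel/sched/Makefile. */
static const struct sched_class classes[] = {
        { "idle" }, { "fair" }, { "rt" }, { "deadline" }, { "stop" },
};

static const struct sched_class *idle_class = &classes[0];
static const struct sched_class *fair_class = &classes[1];
static const struct sched_class *rt_class   = &classes[2];

/* One pointer comparison decides whether @p preempts @curr,
 * instead of walking the classes to see which one comes first. */
static int preempts(const struct sched_class *p,
                    const struct sched_class *curr)
{
        return p > curr;
}

int main(void)
{
        assert(preempts(rt_class, fair_class));    /* rt preempts fair */
        assert(!preempts(idle_class, fair_class)); /* idle does not    */
        printf("address order == priority order\n");
        return 0;
}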

One more result of the patch is that the object file becomes a little smaller
(excluding the added BUG_ON(), which goes into the __init section):

$ size kernel/sched/core.o
            text    data   bss    dec    hex  filename
before:    66446   18957   676  86079  1503f  kernel/sched/core.o
after:     66398   18957   676  86031  1500f  kernel/sched/core.o

[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Using_ld_the_GNU_Linker/sections.html

Signed-off-by: Kirill Tkhai <ktkhai@...tuozzo.com>
---
 kernel/sched/Makefile |    2 ++
 kernel/sched/core.c   |   24 +++++++++---------------
 2 files changed, 11 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 5fc9c9b70862..f78f177c660a 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -23,6 +23,8 @@ CFLAGS_core.o := $(PROFILING) -fno-omit-frame-pointer
 endif
 
 obj-y += core.o loadavg.o clock.o cputime.o
+# Order is significant: a higher-priority class xxx is described by a
+# variable xxx_sched_class with a higher address. See BUG_ON() in sched_init().
 obj-y += idle.o fair.o rt.o deadline.o
 obj-y += wait.o wait_bit.o swait.o completion.o
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 15508c202bf5..befdd7158b27 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1416,20 +1416,10 @@ static inline void check_class_changed(struct rq *rq, struct task_struct *p,
 
 void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
 {
-	const struct sched_class *class;
-
-	if (p->sched_class == rq->curr->sched_class) {
+	if (p->sched_class == rq->curr->sched_class)
 		rq->curr->sched_class->check_preempt_curr(rq, p, flags);
-	} else {
-		for_each_class(class) {
-			if (class == rq->curr->sched_class)
-				break;
-			if (class == p->sched_class) {
-				resched_curr(rq);
-				break;
-			}
-		}
-	}
+	else if (p->sched_class > rq->curr->sched_class)
+		resched_curr(rq);
 
 	/*
 	 * A queue event has occurred, and we're going to schedule.  In
@@ -3914,8 +3904,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	 * higher scheduling class, because otherwise those loose the
 	 * opportunity to pull in more work from other CPUs.
 	 */
-	if (likely((prev->sched_class == &idle_sched_class ||
-		    prev->sched_class == &fair_sched_class) &&
+	if (likely(prev->sched_class <= &fair_sched_class &&
 		   rq->nr_running == rq->cfs.h_nr_running)) {
 
 		p = pick_next_task_fair(rq, prev, rf);
@@ -6569,6 +6558,11 @@ void __init sched_init(void)
 	unsigned long ptr = 0;
 	int i;
 
+	BUG_ON(&idle_sched_class > &fair_sched_class ||
+		&fair_sched_class > &rt_sched_class ||
+		&rt_sched_class > &dl_sched_class ||
+		&dl_sched_class > &stop_sched_class);
+
 	wait_bit_init();
 
 #ifdef CONFIG_FAIR_GROUP_SCHED

