Date:   Mon, 30 Jan 2017 03:54:24 -0800
From:   tip-bot for Peter Zijlstra <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     mingo@...nel.org, rostedt@...dmis.org,
        linux-kernel@...r.kernel.org, efault@....de,
        torvalds@...ux-foundation.org, peterz@...radead.org,
        akpm@...ux-foundation.org, tglx@...utronix.de, hpa@...or.com
Subject: [tip:sched/core] sched/core: Optimize pick_next_task() for
 idle_sched_class

Commit-ID:  49ee576809d837442624ac18804b07943267cd57
Gitweb:     http://git.kernel.org/tip/49ee576809d837442624ac18804b07943267cd57
Author:     Peter Zijlstra <peterz@...radead.org>
AuthorDate: Thu, 19 Jan 2017 18:44:08 +0100
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 30 Jan 2017 11:46:34 +0100

sched/core: Optimize pick_next_task() for idle_sched_class

Steve noticed that when we switch from IDLE to SCHED_OTHER we fail to
take the shortcut, even though all runnable tasks are of the fair
class, because prev->sched_class != &fair_sched_class.

Since the put_prev_task() rework, we no longer care about
prev->sched_class here, so removing that condition allows this case to
take the shortcut as well.

This raises the hit rate of the likely() fast path from 78% to 98% for
Steve's workload.
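
For illustration only (editorial sketch, not part of the original
message; the identifiers are those used in the hunk below), the
IDLE -> SCHED_OTHER case looks roughly like this:

	/* prev is the idle task, but every runnable task is in the fair class. */
	bool all_fair = (rq->nr_running == rq->cfs.h_nr_running);	/* true */

	/*
	 * Old shortcut condition: false, because prev->sched_class is
	 * &idle_sched_class, so the slow path is taken instead.
	 */
	bool old_shortcut = (prev->sched_class == &fair_sched_class) && all_fair;

	/*
	 * New shortcut condition: true, so fair_sched_class.pick_next_task()
	 * is called directly.
	 */
	bool new_shortcut = all_fair;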

Reported-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Tested-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/20170119174408.GN6485@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/core.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 49ce1cb..51ca21e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3321,15 +3321,14 @@ static inline void schedule_debug(struct task_struct *prev)
 static inline struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
-	const struct sched_class *class = &fair_sched_class;
+	const struct sched_class *class;
 	struct task_struct *p;
 
 	/*
 	 * Optimization: we know that if all tasks are in
 	 * the fair class we can call that function directly:
 	 */
-	if (likely(prev->sched_class == class &&
-		   rq->nr_running == rq->cfs.h_nr_running)) {
+	if (likely(rq->nr_running == rq->cfs.h_nr_running)) {
 		p = fair_sched_class.pick_next_task(rq, prev, rf);
 		if (unlikely(p == RETRY_TASK))
 			goto again;

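For context (editorial sketch, not part of the patch): the hunk above
shows only the fast path. Reconstructed from the surrounding
kernel/sched/core.c of that era, the function after this change reads
roughly as follows; everything below the shown hunk, in particular the
for_each_class() slow path, is an assumption and may differ in detail
from the tree at this commit.

	static inline struct task_struct *
	pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
	{
		const struct sched_class *class;
		struct task_struct *p;

		/*
		 * Optimization: we know that if all tasks are in
		 * the fair class we can call that function directly:
		 */
		if (likely(rq->nr_running == rq->cfs.h_nr_running)) {
			p = fair_sched_class.pick_next_task(rq, prev, rf);
			if (unlikely(p == RETRY_TASK))
				goto again;

			/* Assumed fallback: an all-fair rq that went idle picks the idle task. */
			if (unlikely(!p))
				p = idle_sched_class.pick_next_task(rq, prev, rf);

			return p;
		}

	again:
		/* Slow path (assumed): walk every scheduling class, highest priority first. */
		for_each_class(class) {
			p = class->pick_next_task(rq, prev, rf);
			if (p) {
				if (unlikely(p == RETRY_TASK))
					goto again;
				return p;
			}
		}

		/* The idle class should always have a runnable task. */
		BUG();
	}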