Date:	Mon, 13 Jun 2016 19:36:37 +0100
From:	Ben Hutchings <ben@...adent.org.uk>
To:	linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC:	akpm@...ux-foundation.org, rostedt@...dmis.org,
	ktkhai@...allels.com, wanpeng.li@...ux.intel.com, oleg@...hat.com,
	"Thomas Gleixner" <tglx@...utronix.de>,
	"Byungchul Park" <byungchul.park@....com>,
	umgwanakikbuti@...il.com, juri.lelli@...il.com,
	pang.xunlei@...aro.org, "Peter Zijlstra" <peterz@...radead.org>
Subject: [PATCH 3.16 113/114] sched,dl: Remove return value from 
 pull_dl_task()

3.16.36-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@...radead.org>

commit 0ea60c2054fc3b0c3eb68ac4f6884f3ee78d9925 upstream.

In order to be able to use pull_dl_task() from a callback, we need to
do away with the return value.

Since the return value indicates whether we should reschedule, do this
inside the function instead. Since not all callers currently act on the
return value, this can increase the number of reschedules due to dl
balancing.

Too many reschedules are not a correctness issue; too few are.
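
As a rough illustration of the pattern (user-space sketch only, not
kernel code; the real change is in the diff below, and fake_rq,
fake_resched and the two pull_* helpers are invented names, not kernel
APIs), the conversion amounts to moving the reschedule decision inside
the pull function:

#include <stdbool.h>
#include <stdio.h>

struct fake_rq { int cpu; bool overloaded; };

static void fake_resched(struct fake_rq *rq)
{
	printf("reschedule requested on cpu %d\n", rq->cpu);
}

/* Before: the caller must check the return value and reschedule. */
static int pull_with_return_value(struct fake_rq *rq)
{
	return rq->overloaded ? 1 : 0;
}

/*
 * After: the function reschedules itself, so it can be invoked from a
 * callback that has no way to act on a return value.
 */
static void pull_without_return_value(struct fake_rq *rq)
{
	bool resched = false;

	if (rq->overloaded)
		resched = true;

	if (resched)
		fake_resched(rq);
}

int main(void)
{
	struct fake_rq rq = { .cpu = 0, .overloaded = true };

	if (pull_with_return_value(&rq))	/* old convention */
		fake_resched(&rq);

	pull_without_return_value(&rq);		/* new convention */
	return 0;
}

With the decision made inside, every call site that pulls a task now
requests a reschedule, which is why the changelog notes the count of
reschedules can only go up, never down.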

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: ktkhai@...allels.com
Cc: rostedt@...dmis.org
Cc: juri.lelli@...il.com
Cc: pang.xunlei@...aro.org
Cc: oleg@...hat.com
Cc: wanpeng.li@...ux.intel.com
Cc: umgwanakikbuti@...il.com
Link: http://lkml.kernel.org/r/20150611124742.859398977@infradead.org
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
[Conflicts: kernel/sched/deadline.c]
Signed-off-by: Byungchul Park <byungchul.park@....com>
[bwh: Backported to 3.16: use resched_task() instead of resched_curr()]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
 kernel/sched/deadline.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -252,9 +252,8 @@ static inline bool need_pull_dl_task(str
 	return false;
 }
 
-static inline int pull_dl_task(struct rq *rq)
+static inline void pull_dl_task(struct rq *rq)
 {
-	return 0;
 }
 
 static inline void queue_push_tasks(struct rq *rq)
@@ -957,7 +956,7 @@ static void check_preempt_equal_dl(struc
 	resched_task(rq->curr);
 }
 
-static int pull_dl_task(struct rq *this_rq);
+static void pull_dl_task(struct rq *this_rq);
 
 #endif /* CONFIG_SMP */
 
@@ -1380,15 +1379,16 @@ static void push_dl_tasks(struct rq *rq)
 		;
 }
 
-static int pull_dl_task(struct rq *this_rq)
+static void pull_dl_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
 	struct task_struct *p;
+	bool resched = false;
 	struct rq *src_rq;
 	u64 dmin = LONG_MAX;
 
 	if (likely(!dl_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
 	 * Match the barrier from dl_set_overloaded; this guarantees that if we
@@ -1443,7 +1443,7 @@ static int pull_dl_task(struct rq *this_
 					   src_rq->curr->dl.deadline))
 				goto skip;
 
-			ret = 1;
+			resched = true;
 
 			deactivate_task(src_rq, p, 0);
 			set_task_cpu(p, this_cpu);
@@ -1456,7 +1456,8 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_task(this_rq->curr);
 }
 
 /*
