Date: Tue, 01 Mar 2016 23:51:05 +0000
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: <linux-kernel@...r.kernel.org>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, <stable@...r.kernel.org>,
	"Peter Zijlstra (Intel)" <peterz@...radead.org>, <ktkhai@...allels.com>,
	<rostedt@...dmis.org>, <juri.lelli@...il.com>, <pang.xunlei@...aro.org>,
	<oleg@...hat.com>, <wanpeng.li@...ux.intel.com>, <umgwanakikbuti@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>, Byungchul Park <byungchul.park@....com>
Subject: [PATCH 3.14 047/130] sched,dl: Remove return value from pull_dl_task()

3.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <peterz@...radead.org>

commit 0ea60c2054fc3b0c3eb68ac4f6884f3ee78d9925 upstream.

In order to be able to use pull_dl_task() from a callback, we need to do
away with the return value. Since the return value indicates whether we
should reschedule, do this inside the function. Since not all callers
currently do this, this can increase the number of reschedules due to rt
balancing.

Too many reschedules are not a correctness issue; too few are.
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: ktkhai@...allels.com
Cc: rostedt@...dmis.org
Cc: juri.lelli@...il.com
Cc: pang.xunlei@...aro.org
Cc: oleg@...hat.com
Cc: wanpeng.li@...ux.intel.com
Cc: umgwanakikbuti@...il.com
Link: http://lkml.kernel.org/r/20150611124742.859398977@infradead.org
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Byungchul Park <byungchul.park@....com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 kernel/sched/deadline.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1351,15 +1351,16 @@ static void push_dl_tasks(struct rq *rq)
 		;
 }
 
-static int pull_dl_task(struct rq *this_rq)
+static void pull_dl_task(struct rq *this_rq)
 {
-	int this_cpu = this_rq->cpu, ret = 0, cpu;
+	int this_cpu = this_rq->cpu, cpu;
 	struct task_struct *p;
+	bool resched = false;
 	struct rq *src_rq;
 	u64 dmin = LONG_MAX;
 
 	if (likely(!dl_overloaded(this_rq)))
-		return 0;
+		return;
 
 	/*
 	 * Match the barrier from dl_set_overloaded; this guarantees that if we
@@ -1414,7 +1415,7 @@ static int pull_dl_task(struct rq *this_
 			    src_rq->curr->dl.deadline))
 			goto skip;
 
-		ret = 1;
+		resched = true;
 
 		deactivate_task(src_rq, p, 0);
 		set_task_cpu(p, this_cpu);
@@ -1427,7 +1428,8 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
-	return ret;
+	if (resched)
+		resched_task(this_rq->curr);
 }
 
 static void pre_schedule_dl(struct rq *rq, struct task_struct *prev)