Message-ID: <20110728175625.1595581wucy2hvr4@webmail.oregonstate.edu>
Date: Thu, 28 Jul 2011 17:56:25 -0700
From: uberj@...d.orst.edu
To: Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH] Fix excess pre-schedule task migration during real-time
 overload on multiple CPUs
From cf87d70969357f46e5c28804ac39c2139652af8f Mon Sep 17 00:00:00 2001
From: Jacques Uber <uberj@...d.orst.edu>
Date: Thu, 28 Jul 2011 17:49:38 -0700
Subject: [PATCH] Fix excess pre-schedule task migration during real-time
overload on multiple CPUs
If multiple processors are under real-time overload, there are situations
where a processor will pull multiple tasks from multiple CPUs while trying to
schedule the highest-priority task within its root domain. For example, if
four CPUs are all under overload and the highest-priority task on CPU3 yields
its time slice, CPU3 will try to pull the next highest-priority task onto its
runqueue. Without this patch, if the tasks were ordered in ascending priority
on CPU[0-2], CPU3 would pull all of them, ruining any possibility of using
cached data.
The simple solution to this scenario is to maintain a "best choice" for CPU3
to pull to its runqueue. Only after checking all the queued tasks and choosing
the one that should be run does it deactivate, set_task_cpu, and reactivate
that task onto its runqueue.
This patch was inspired by a Linux Journal article about the Real Time
Scheduler.
http://www.linuxjournal.com/magazine/real-time-linux-kernel-scheduler?page=0,3
Signed-off-by: Jacques Uber <uberj@...d.orst.edu>
Signed-off-by: Kevin Strasser <strassek@...d.orst.edu>
---
 kernel/sched_rt.c |   13 ++++++++++---
 1 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 97540f0..067f159 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1482,6 +1482,7 @@ static int pull_rt_task(struct rq *this_rq)
 {
 	int this_cpu = this_rq->cpu, ret = 0, cpu;
 	struct task_struct *p;
+	struct task_struct *best = NULL;
 	struct rq *src_rq;
 
 	if (likely(!rt_overloaded(this_rq)))
@@ -1540,9 +1541,10 @@ static int pull_rt_task(struct rq *this_rq)
 			ret = 1;
 
-			deactivate_task(src_rq, p, 0);
-			set_task_cpu(p, this_cpu);
-			activate_task(this_rq, p, 0);
+			if (!best)
+				best = p;
+			else if (p->prio < best->prio)
+				best = p;
 			/*
 			 * We continue with the search, just in
 			 * case there's an even higher prio task
 			 * in another runqueue. (low likelihood
@@ -1550,6 +1552,11 @@ static int pull_rt_task(struct rq *this_rq)
 			 * but possible)
 			 */
 		}
+		if (best) {
+			deactivate_task(src_rq, best, 0);
+			set_task_cpu(best, this_cpu);
+			activate_task(this_rq, best, 0);
+		}
 skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
--
1.7.5.4