Message-Id: <1364477865-1777-1-git-send-email-muming.wq@gmail.com>
Date: Thu, 28 Mar 2013 21:37:45 +0800
From: Charles Wang <muming.wq@...il.com>
To: mingo@...nel.org, gaoyang.zyh@...bao.com
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH] sched: Precise load checking in get_rr_interval_fair
From: Charles Wang <muming.wq@...bao.com>
A positive load weight on rq->cfs does not imply a positive load weight
on se's cfs_rq, and when se's cfs_rq has a load of 0, the slice
calculated by sched_slice() is not meaningful.

Check the load of se's cfs_rq instead of rq->cfs, and correct the
comment accordingly.
Cc: Ingo Molnar <mingo@...e.hu>
Cc: Zhu Yanhai <gaoyang.zyh@...bao.com>
Signed-off-by: Charles Wang <muming.wq@...bao.com>
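---
Note: the distinction matters only with CONFIG_FAIR_GROUP_SCHED, where
cfs_rq_of(se) resolves to the entity's group cfs_rq rather than the root
rq->cfs, so rq->cfs.load.weight can be non-zero while the task's own
cfs_rq is idle. A rough sketch of that helper, paraphrased from memory
of kernel/sched/fair.c at the time (see the tree for the exact
definition):

#ifdef CONFIG_FAIR_GROUP_SCHED
/* With group scheduling, an entity is queued on its group's cfs_rq. */
static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
{
	return se->cfs_rq;
}
#else
/* Without group scheduling, every entity is queued on the root rq->cfs. */
static inline struct cfs_rq *cfs_rq_of(struct sched_entity *se)
{
	struct task_struct *p = task_of(se);
	struct rq *rq = task_rq(p);

	return &rq->cfs;
}
#endif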
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 539760e..5d58ac9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6086,14 +6086,15 @@ void unregister_fair_sched_group(struct task_group *tg, int cpu) { }
 static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task)
 {
 	struct sched_entity *se = &task->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	unsigned int rr_interval = 0;
 
 	/*
 	 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise
-	 * idle runqueue:
+	 * idle cfs_rq:
 	 */
-	if (rq->cfs.load.weight)
-		rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq_of(se), se));
+	if (cfs_rq->load.weight)
+		rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq, se));
 
 	return rr_interval;
 }
--
1.7.9.5