Message-ID: <20080218082021.GD14419@linux.vnet.ibm.com>
Date: Mon, 18 Feb 2008 13:50:21 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Mike Galbraith <efault@....de>
Cc: Ingo Molnar <mingo@...e.hu>,
Lukas Hejtmanek <xhejtman@....muni.cz>, mingo@...hat.com,
linux-kernel@...r.kernel.org, a.p.zijlstra@...llo.nl,
dhaval@...ux.vnet.ibm.com, aneesh.kumar@...ibm.com
Subject: Re: 2.6.24-git4+ regression
On Mon, Feb 18, 2008 at 08:38:24AM +0100, Mike Galbraith wrote:
> Here, it does not. It seems fine without CONFIG_FAIR_GROUP_SCHED.
My hunch is it's because of the vruntime-driven preemption, which shoots up
latencies (and perhaps also the fact that Peter hasn't focused much on the
SMP case yet!).

Out of curiosity, do you observe the same results in UP mode (maxcpus=1)?
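To make the hunch concrete, here is a rough user-space sketch of the
vruntime-driven preemption decision I mean. It is a simplification of the
wakeup-preemption check in kernel/sched_fair.c (the real check also scales
the granularity by task weight); the names and numbers below are only
illustrative:

#include <stdio.h>

/*
 * Simplified model of CFS wakeup preemption: a waking task preempts the
 * current one only when its vruntime lags far enough behind, and
 * sched_wakeup_granularity_ns defines "far enough". The smaller the
 * granularity, the sooner a sleeper preempts (lower wakeup latency).
 */
typedef unsigned long long u64;

static u64 sched_wakeup_granularity_ns = 10000000ULL;	/* 10ms */

/* Return 1 if the woken entity should preempt the running one. */
static int should_preempt(u64 curr_vruntime, u64 woken_vruntime)
{
	long long vdiff = (long long)(curr_vruntime - woken_vruntime);

	return vdiff > (long long)sched_wakeup_granularity_ns;
}

int main(void)
{
	/* Woken task only 5ms behind curr: no preemption, it has to wait. */
	printf("%d\n", should_preempt(100000000ULL, 95000000ULL));
	/* Woken task 50ms behind curr: preempts right away. */
	printf("%d\n", should_preempt(100000000ULL, 50000000ULL));
	return 0;
}

This is also why lowering sched_wakeup_granularity_ns (as suggested below)
can mask the regression.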
> Oddity: mainline git with Srivatsa's test patch improves markedly, and
> using sched_latency_ns and sched_wakeup_granularity_ns, I can tweak the
Lukas,
Does tweaking these make any difference for you?
# echo 10000000 > /proc/sys/kernel/sched_latency_ns
# echo 10000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
[or some other lower value]
> regression into submission. With sched-devel, I cannot tweak it away
> with or without the test patch. Dunno how useful that info is.
FWIW, my earlier test patch didn't address the needs of UP, as Peter
pointed out to me. In that direction, I experimented some more with the
patch below, which seemed to improve UP latencies as well. Note that I
don't particularly like the first hunk below; perhaps it needs to be
surrounded by an if (something) check.
---
kernel/sched_fair.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
Index: current/kernel/sched_fair.c
===================================================================
--- current.orig/kernel/sched_fair.c
+++ current/kernel/sched_fair.c
@@ -523,8 +523,6 @@ place_entity(struct cfs_rq *cfs_rq, stru
if (sched_feat(NEW_FAIR_SLEEPERS))
vruntime -= sysctl_sched_latency;
- /* ensure we never gain time by being placed backwards. */
- vruntime = max_vruntime(se->vruntime, vruntime);
}
se->vruntime = vruntime;
@@ -816,6 +814,13 @@ hrtick_start_fair(struct rq *rq, struct
}
#endif
+static inline void dequeue_stack(struct sched_entity *se)
+{
+ for_each_sched_entity(se)
+ if (se->on_rq)
+ dequeue_entity(cfs_rq_of(se), se, 0);
+}
+
/*
* The enqueue_task method is called before nr_running is
* increased. Here we update the fair scheduling stats and
@@ -828,6 +833,9 @@ static void enqueue_task_fair(struct rq
*topse = NULL; /* Highest schedulable entity */
int incload = 1;
+ if (wakeup)
+ dequeue_stack(se);
+
for_each_sched_entity(se) {
topse = se;
if (se->on_rq) {
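To illustrate what the first hunk changes: with the max_vruntime() clamp
removed, every waking sleeper is placed at min_vruntime - sched_latency,
even if its own vruntime was larger, so it gains credit and gets picked
(and preempts) sooner. A small user-space sketch of that placement logic
(simplified; the real place_entity() also handles initial placement and
weight scaling):

#include <stdio.h>

typedef unsigned long long u64;

static u64 sysctl_sched_latency = 20000000ULL;	/* 20ms */

static u64 max_vruntime(u64 a, u64 b)
{
	return ((long long)(a - b) > 0) ? a : b;
}

/*
 * Placement of a waking sleeper, modelled on place_entity(). With
 * 'clamp' set, a task that slept only briefly keeps its own (larger)
 * vruntime; without it, every wakeup is pulled back to
 * min_vruntime - sched_latency, gaining credit and hence running
 * (and preempting) sooner.
 */
static u64 place(u64 min_vruntime, u64 se_vruntime, int clamp)
{
	u64 vruntime = min_vruntime - sysctl_sched_latency;

	if (clamp)
		vruntime = max_vruntime(se_vruntime, vruntime);

	return vruntime;
}

int main(void)
{
	u64 min_vr = 100000000ULL;	/* cfs_rq->min_vruntime */
	u64 se_vr  = 95000000ULL;	/* task that slept only briefly */

	printf("with clamp:    %llu\n", place(min_vr, se_vr, 1));
	printf("without clamp: %llu\n", place(min_vr, se_vr, 0));
	return 0;
}

With the numbers above, the clamped placement keeps the briefly-sleeping
task at 95ms of vruntime, while the unclamped one pulls it back to 80ms.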
P.S.: Sorry about the slow responses; I have since moved to a different
project. :(
--
Regards,
vatsa