Message-ID: <20070428152327.GE14716@in.ibm.com>
Date: Sat, 28 Apr 2007 20:53:27 +0530
From: Srivatsa Vaddagiri <vatsa@...ibm.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Peter Williams <pwil3058@...pond.net.au>,
Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
Willy Tarreau <w@....eu>,
Gene Heskett <gene.heskett@...il.com>, Mark Lord <lkml@....ca>,
Zach Carter <linux@...hcarter.com>,
buddabrod <buddabrod@...il.com>
Subject: Re: [patch] CFS scheduler, -v6
On Sat, Apr 28, 2007 at 03:53:38PM +0200, Ingo Molnar wrote:
> > Won't it help if you update rq->rb_leftmost above from the value
> > returned by rb_first(), so that subsequent calls to first_fair will be
> > sped up?
>
> yeah, indeed. Would you like to do a patch for that?
My pleasure :)
With the patch below applied, I ran a "time -p make -s -j10 bzImage"
test.
2.6.20 + cfs-v6 -> 186.45 (real)
2.6.20 + cfs-v6 + this_patch -> 184.55 (real)
or about a 1% improvement in real wall-clock time. This was with the default
sched_granularity_ns of 6000000. I suspect that the larger the value of
sched_granularity_ns and the greater the number of (SCHED_NORMAL) tasks in the
system, the bigger the benefit from this caching.
Cache value returned by rb_first(), for faster subsequent lookups.
Signed-off-by: Srivatsa Vaddagiri <vatsa@...ibm.com>
---
diff -puN kernel/sched_fair.c~speedup kernel/sched_fair.c
--- linux-2.6.21/kernel/sched_fair.c~speedup 2007-04-28 19:28:08.000000000 +0530
+++ linux-2.6.21-vatsa/kernel/sched_fair.c 2007-04-28 19:34:55.000000000 +0530
@@ -86,7 +86,9 @@ static inline struct rb_node * first_fai
{
if (rq->rb_leftmost)
return rq->rb_leftmost;
- return rb_first(&rq->tasks_timeline);
+ /* Cache the value returned by rb_first() */
+ rq->rb_leftmost = rb_first(&rq->tasks_timeline);
+ return rq->rb_leftmost;
}
static struct task_struct * __pick_next_task_fair(struct rq *rq)
_
--
Regards,
vatsa
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/