Message-ID: <20070428152802.GA10179@in.ibm.com>
Date: Sat, 28 Apr 2007 20:58:02 +0530
From: Srivatsa Vaddagiri <vatsa@...ibm.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Con Kolivas <kernel@...ivas.org>,
Nick Piggin <npiggin@...e.de>, Mike Galbraith <efault@....de>,
Arjan van de Ven <arjan@...radead.org>,
Peter Williams <pwil3058@...pond.net.au>,
Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
Willy Tarreau <w@....eu>,
Gene Heskett <gene.heskett@...il.com>, Mark Lord <lkml@....ca>,
Zach Carter <linux@...hcarter.com>,
buddabrod <buddabrod@...il.com>
Subject: Re: [patch] CFS scheduler, -v6
On Sat, Apr 28, 2007 at 08:53:27PM +0530, Srivatsa Vaddagiri wrote:
> With the patch below applied, I ran a "time -p make -s -j10 bzImage"
> test on a 4CPU (counting HT) Intel Xeon 3.6GHz box.
>
> 2.6.20 + cfs-v6 -> 186.45 (real)
> 2.6.20 + cfs-v6 + this_patch -> 184.55 (real)
>
> or ~1% improvement in real wall-clock time. This was with the default
> sched_granularity_ns of 6000000. I suspect that the larger the value of
> sched_granularity_ns and the number of (SCHED_NORMAL) tasks in the system,
> the better the benefit from this caching.
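For reference, a quick shell sketch of the comparison above — the two wall-clock figures are taken from the email; the benchmark invocation mirrors the quoted "time -p make -s -j10 bzImage" run, and the relative-improvement arithmetic is just illustrative:

```shell
#!/bin/sh
# Wall-clock results quoted above (seconds, "real" from `time -p`):
base=186.45      # 2.6.20 + cfs-v6
patched=184.55   # 2.6.20 + cfs-v6 + this patch

# The benchmark itself would be run as (from a configured kernel tree):
#   time -p make -s -j10 bzImage

# Relative improvement: (base - patched) / base * 100
awk -v b="$base" -v p="$patched" \
    'BEGIN { printf "%.2f%%\n", (b - p) / b * 100 }'
```

which works out to roughly 1%, matching the figure stated above.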
--
Regards,
vatsa
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/