Message-ID: <20070424082305.GA6332@elte.hu>
Date: Tue, 24 Apr 2007 10:23:06 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Michael Gerdau <mgd@...hnosis.de>
Cc: linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Nick Piggin <npiggin@...e.de>,
Gene Heskett <gene.heskett@...il.com>,
Juliusz Chroboczek <jch@....jussieu.fr>,
Mike Galbraith <efault@....de>,
Peter Williams <pwil3058@...pond.net.au>,
ck list <ck@....kolivas.org>,
Thomas Gleixner <tglx@...utronix.de>,
William Lee Irwin III <wli@...omorphy.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Bill Davidsen <davidsen@....com>, Willy Tarreau <w@....eu>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [REPORT] cfs-v5 vs sd-0.46

* Michael Gerdau <mgd@...hnosis.de> wrote:

> > so to be totally 'fair' and get the same rescheduling 'granularity'
> > you should probably lower CFS's sched_granularity_ns to 2 msecs.
>
> I'll change default nice in cfs to -10.
>
> I'm also happy to adjust /proc/sys/kernel/sched_granularity_ns to
> 2msec. However checking /proc/sys/kernel/rr_interval reveals it is 16
> (msec) on my system.

ah, yeah - that's due to the SMP rule in SD:

  rr_interval *= 1 + ilog2(num_online_cpus());

and you have a 2-CPU system, so you get 8 msecs * 2 == 16 msecs default
interval. I find this a neat solution; i have already talked to Con about
it and i'll adopt the idea in CFS too. Nevertheless, despite these
settings, SD seems to be rescheduling every 6-7 msecs, while CFS
reschedules only every 13 msecs.
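
(For reference, a minimal userspace sketch of how that scaling rule works
out - the ilog2() below is just a stand-in for the kernel helper and the
program itself is only an illustration i put together, not actual SD code:)

  #include <stdio.h>

  /* userspace stand-in for the kernel's ilog2() helper */
  static int ilog2(unsigned int n)
  {
          int log = -1;

          while (n) {
                  n >>= 1;
                  log++;
          }
          return log;
  }

  int main(void)
  {
          unsigned int base = 8;  /* 8 msecs uniprocessor default, as above */
          unsigned int cpus;

          for (cpus = 1; cpus <= 8; cpus *= 2)
                  printf("%u CPUs -> rr_interval = %u msecs\n",
                         cpus, base * (1 + ilog2(cpus)));
          return 0;
  }
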
Here i'm assuming that the vmstat numbers are directly comparable: that
your number-crunchers behave the same way during the full runtime - is
that correct? (If not, then the vmstat measurement should be taken at
roughly the same "stage" of the workload under each scheduler.)
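
(As a back-of-the-envelope cross-check - and the mapping below is only my
assumption about how such intervals relate to vmstat's 'cs' column, with
made-up sample rates rather than your actual numbers - the average
reschedule interval on a saturated box is roughly nr_cpus / cs_per_sec:)

  #include <stdio.h>

  /*
   * Rough estimate of the average reschedule interval from a vmstat
   * 'cs' (context switches per second) reading, assuming the switches
   * are spread evenly across the CPUs.  The sample rates below are
   * illustrative only, not taken from the actual report.
   */
  static double resched_interval_msecs(double cs_per_sec, unsigned int nr_cpus)
  {
          return 1000.0 * nr_cpus / cs_per_sec;
  }

  int main(void)
  {
          printf("~300 cs/sec on 2 CPUs -> ~%.1f msecs between reschedules\n",
                 resched_interval_msecs(300.0, 2));
          printf("~150 cs/sec on 2 CPUs -> ~%.1f msecs between reschedules\n",
                 resched_interval_msecs(150.0, 2));
          return 0;
  }
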
Ingo