Message-ID: <20070424075319.GA30909@elte.hu>
Date:	Tue, 24 Apr 2007 09:53:20 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Michael Gerdau <mgd@...hnosis.de>
Cc:	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nick Piggin <npiggin@...e.de>,
	Gene Heskett <gene.heskett@...il.com>,
	Juliusz Chroboczek <jch@....jussieu.fr>,
	Mike Galbraith <efault@....de>,
	Peter Williams <pwil3058@...pond.net.au>,
	ck list <ck@....kolivas.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	William Lee Irwin III <wli@...omorphy.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Bill Davidsen <davidsen@....com>, Willy Tarreau <w@....eu>,
	Arjan van de Ven <arjan@...radead.org>
Subject: Re: [REPORT] cfs-v5 vs sd-0.46


* Michael Gerdau <mgd@...hnosis.de> wrote:

> I'm running three single threaded perl scripts that do double 
> precision floating point math with little i/o after initially loading 
> the data.

thanks for the testing!

> What I also don't understand is the difference in load average: sd 
> constantly had higher values; the above figures are representative of 
> the whole log. I don't know which is better though.

hm, that's hard to tell from here. What load average does the vanilla 
kernel report? I'd take that as the reference.
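
a quick way to log it alongside vmstat, if you want to compare apples 
to apples across the three kernels (rough sketch, the log file name is 
arbitrary):

	# sample the load average every 3 seconds, in parallel with 'vmstat 3 200'
	while true; do
		cat /proc/loadavg >> loadavg.log
		sleep 3
	done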

> Here are excerpts from a concurrently run vmstat 3 200:
> 
> sd-0.46
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  5  0      0 1702928  63664 827876    0    0     0    67  458 1350 100  0  0  0
>  3  0      0 1702928  63684 827876    0    0     0    89  468 1362 100  0  0  0
>  5  0      0 1702680  63696 827876    0    0     0   132  461 1598 99  1  0  0
>  8  0      0 1702680  63712 827892    0    0     0    80  465 1180 99  1  0  0

> cfs-v5
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  6  0      0 2157728  31816 545236    0    0     0   103  543  748 100  0  0  0
>  4  0      0 2157780  31828 545256    0    0     0    63  435  752 100  0  0  0
>  4  0      0 2157928  31852 545256    0    0     0   105  424  770 100  0  0  0
>  4  0      0 2157928  31868 545268    0    0     0   261  457  763 100  0  0  0

interesting - CFS has roughly half the context-switch rate of SD (the 
'cs' column above: ~1200-1600/sec under SD vs. ~750/sec under CFS). That 
is probably because on your workload CFS defaults to longer 'timeslices' 
than SD. You can influence the 'timeslice length' under SD via 
/proc/sys/kernel/rr_interval (in milliseconds) and under CFS via 
/proc/sys/kernel/sched_granularity_ns (in nanoseconds). On CFS the value 
is not necessarily the timeslice length you will observe - for example 
in your workload above the granularity is set to 5 msecs, but your 
rescheduling rate is 13 msecs. SD defaults to an rr_interval value of 
8 msecs, which in your workload produces a timeslice length of 6-7 msecs.

so to be totally 'fair' and get the same rescheduling 'granularity', you 
should probably lower CFS's sched_granularity_ns to 2 msecs.
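
for reference, setting these from a shell looks roughly like this (as 
root; the CFS file takes nanoseconds, the SD one milliseconds):

	# CFS: lower the scheduling granularity to 2 msecs (2000000 ns)
	echo 2000000 > /proc/sys/kernel/sched_granularity_ns

	# SD: check rr_interval (in milliseconds, default 8)
	cat /proc/sys/kernel/rr_interval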

> Last but not least I'd like to add that at least on my system having X 
> niced to -19 does result in kind of "erratic" (for lack of a better 
> word) desktop behavior. I will reevaluate this with -v6, but for now 
> IMO nicing X to -19 is a regression at least on my machine, despite the 
> claim that cfs doesn't suffer from it.

indeed, with -19 the rescheduling limit under CFS is so high that it does 
not throttle X's scheduling rate enough, so CFS ends up behaving as badly 
as other schedulers.

I retested this with -10 and it should work better with that. In -v6 i 
changed the default to -10 too.
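
if you want to try -10 before -v6, something like this should do it (a 
rough sketch - it assumes the server process shows up as 'X' in pidof, 
on some setups it is 'Xorg'):

	# move the running X server from nice -19 to nice -10
	renice -n -10 -p $(pidof X)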

> PS: Only learning how to test these things I'm happy to get pointed 
> out the shortcomings of what I tested above. Of course suggestions for 
> improvements are welcome.

your report was perfectly fine and useful. "no visible regressions" is 
valuable feedback too. [ In fact, that type of feedback is the easiest 
for me to resolve ;-) ]

Since you are running number-crunchers you might be able to give 
performance feedback too: do you have any reliable 'performance metric' 
available for your number-cruncher jobs (ops per minute, runtime, etc.), 
so that it would be possible to compare the number-crunching performance 
of mainline to SD and to CFS as well? Ideally that value is easy to get 
and reliable/stable enough to be meaningful. (It would also be nice to 
establish some ballpark figure for how much noise there is in the 
performance metric, so that we can see whether any differences between 
schedulers are systematic or not.)
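
[ a rough sketch of one way to get such a noise figure, assuming GNU 
  time is installed and 'cruncher.pl' stands in for one of your perl 
  jobs: ]

	# run the job 5 times, appending each wall-clock runtime (in seconds)
	# to runtimes.txt; cruncher.pl is just a placeholder name
	for i in $(seq 1 5); do
		/usr/bin/time -f "%e" -a -o runtimes.txt ./cruncher.pl
	done

	# print mean and standard deviation of the collected runtimes
	awk '{ s += $1; ss += $1*$1; n++ }
	     END { m = s/n; printf "mean %.2fs  stddev %.2fs\n", m, sqrt(ss/n - m*m) }' runtimes.txt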

	Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
