Date:	Tue, 24 Apr 2007 10:16:58 +0200
From:	Michael Gerdau <mgd@...hnosis.de>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nick Piggin <npiggin@...e.de>,
	Gene Heskett <gene.heskett@...il.com>,
	Juliusz Chroboczek <jch@....jussieu.fr>,
	Mike Galbraith <efault@....de>,
	Peter Williams <pwil3058@...pond.net.au>,
	ck list <ck@....kolivas.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	William Lee Irwin III <wli@...omorphy.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Bill Davidsen <davidsen@....com>, Willy Tarreau <w@....eu>,
	Arjan van de Ven <arjan@...radead.org>
Subject: Re: [REPORT] cfs-v5 vs sd-0.46

> > What I also don't understand is the difference in load average: SD
> > consistently had higher values, and the figures above are
> > representative of the whole log. I don't know which is better, though.
> 
> hm, it's hard to tell from here. What load average does the vanilla
> kernel report? I'd take that as a reference.

I will redo this test with sd-0.46, cfs-v5 and mainline later today.
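
To make the comparison meaningful I'll log load average and the
context-switch rate the same way on all three kernels, roughly with a
small sampler like the sketch below (it only reads the standard
/proc/loadavg fields and the "ctxt" line of /proc/stat; the 1-second
interval is an arbitrary choice):

#!/usr/bin/env python3
import time

def read_loadavg():
    # /proc/loadavg looks like "0.52 0.41 0.33 1/123 4567"; the first
    # three fields are the 1/5/15-minute load averages.
    with open("/proc/loadavg") as f:
        return f.read().split()[:3]

def read_ctxt():
    # /proc/stat has a "ctxt <N>" line: total context switches since boot.
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt"):
                return int(line.split()[1])
    return 0

prev = read_ctxt()
while True:   # stop with Ctrl-C
    time.sleep(1)
    cur = read_ctxt()
    one, five, fifteen = read_loadavg()
    # the per-second delta is the rescheduling rate across all CPUs
    print("load %s %s %s  ctxt/s %d" % (one, five, fifteen, cur - prev))
    prev = cur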

> interesting - CFS has half the context-switch rate of SD. That is
> probably because on your workload CFS defaults to longer 'timeslices'
> than SD. You can influence the 'timeslice length' under SD via
> /proc/sys/kernel/rr_interval (in milliseconds) and under CFS via
> /proc/sys/kernel/sched_granularity_ns. On CFS the value is not
> necessarily the timeslice length you will observe - for example, in
> your workload above the granularity is set to 5 msec, but your observed
> rescheduling interval is 13 msecs. SD defaults to an rr_interval value
> of 8 msecs, which in your workload produces a timeslice length of 6-7
> msecs.
> 
> so to be totally 'fair' and get the same rescheduling 'granularity',
> you should probably lower CFS's sched_granularity_ns to 2 msecs.

I'll change the default nice in CFS to -10.

I'm also happy to adjust /proc/sys/kernel/sched_granularity_ns to 2 msec.
However, checking /proc/sys/kernel/rr_interval reveals it is 16 msec
on my system.
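
For reference, a rough sketch of how I check and set both knobs (the
paths are the ones from your mail; sched_granularity_ns takes
nanoseconds, so 2 msec = 2000000; writing needs root, and each file
only exists under the respective patched kernel):

#!/usr/bin/env python3
GRANULARITY = "/proc/sys/kernel/sched_granularity_ns"  # CFS, nanoseconds
RR_INTERVAL = "/proc/sys/kernel/rr_interval"           # SD, milliseconds

def show(path):
    # print the current value, or note that the knob is absent
    try:
        with open(path) as f:
            print(path, "=", f.read().strip())
    except IOError:
        print(path, "not present on this kernel")

show(GRANULARITY)
show(RR_INTERVAL)

# 2 msec expressed in nanoseconds, as the CFS file expects
try:
    with open(GRANULARITY, "w") as f:
        f.write("2000000\n")
except IOError:
    pass  # not running a CFS kernel, or not root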

Anyway, I have to do some other urgent work and won't be able to do
much testing until tonight (but then I will).

Best,
Michael
-- 
 Technosis GmbH, Managing Directors: Michael Gerdau, Tobias Dittmar
 Registered office Hamburg; HRB 89145, Amtsgericht Hamburg
 Vote against SPAM - see http://www.politik-digital.de/spam/
 Michael Gerdau       email: mgd@...hnosis.de
 GPG-keys available on request or at public keyserver
