Message-Id: <200704301005.33884.mgd@technosis.de>
Date:	Mon, 30 Apr 2007 10:05:07 +0200
From:	Michael Gerdau <mgd@...hnosis.de>
To:	linux-kernel@...r.kernel.org
Cc:	Ingo Molnar <mingo@...e.hu>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nick Piggin <npiggin@...e.de>,
	Gene Heskett <gene.heskett@...il.com>,
	Juliusz Chroboczek <jch@....jussieu.fr>,
	Mike Galbraith <efault@....de>,
	Peter Williams <pwil3058@...pond.net.au>,
	ck list <ck@....kolivas.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	William Lee Irwin III <wli@...omorphy.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Bill Davidsen <davidsen@....com>, Willy Tarreau <w@....eu>,
	Arjan van de Ven <arjan@...radead.org>
Subject: [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

Hi list,

Meanwhile I've redone my number-crunching tests with the following kernels:
    2.6.21.1 (mainline)
    2.6.21-sd046
    2.6.21-cfs-v6
running on a dual-core x86_64.
[I will run the same test with 2.6.21.1-cfs-v7 over the next days,
likely tonight]

The tests consist of 3 tasks (named LTMM, LTMB and LTBM). The only
I/O they do is during init and for logging the results; the rest
is just floating-point math.

There are 3 scenarios:
    j1 - all 3 tasks run sequentially
	 /proc/sys/kernel/sched_granularity_ns=4000000
	 /proc/sys/kernel/rr_interval=16
    j3 - all 3 tasks run in parallel
	 /proc/sys/kernel/sched_granularity_ns=4000000
	 /proc/sys/kernel/rr_interval=16
    j3big - all 3 tasks run in parallel with the timeslice extended
            by two orders of magnitude (not run for mainline)
            /proc/sys/kernel/sched_granularity_ns=400000000
            /proc/sys/kernel/rr_interval=400
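
(For reference, the tunables above live in /proc and are simply written
there before a run. The following is only a sketch of how that could be
scripted, not the exact commands used; it assumes root and skips whichever
tunable the running kernel doesn't provide:)

    #!/usr/bin/env python
    # Sketch: apply the j1/j3 settings from the list above before a run.
    # Only the tunable of the scheduler that is actually running exists,
    # so a missing file is simply skipped.
    import os

    TUNABLES = {
        "/proc/sys/kernel/sched_granularity_ns": "4000000",  # cfs kernels
        "/proc/sys/kernel/rr_interval": "16",                # sd kernels
    }

    for path, value in TUNABLES.items():
        if not os.path.exists(path):
            continue  # not provided by this kernel
        with open(path, "w") as f:
            f.write(value)
        with open(path) as f:
            print("%s = %s" % (path, f.read().strip()))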

All 3 tasks are run while the system does nothing else except for
the "normal" (KDE) daemons. The system had not been used for
interactive work during the tests.

I'm giving user time as reported by the "time" command, followed by wallclock
time (all values in seconds).
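
(The two numbers per cell come from timing each task individually; a minimal
sketch of such a wrapper, with a hypothetical ./LTMM binary standing in for
the real task:)

    # Sketch: report user time and wallclock time for one task, i.e. the
    # same two numbers as "time ./LTMM" (the binary name is hypothetical).
    import os, subprocess, time

    start = time.time()
    proc = subprocess.Popen(["./LTMM"])
    pid, status, rusage = os.wait4(proc.pid, 0)   # rusage carries CPU times
    wallclock = time.time() - start

    print("user %.2f / wallclock %.0f" % (rusage.ru_utime, wallclock))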

                LTMM
                j1              j3              j3big
2.6.21-cfs-v6    5655.07/ 5682   5437.84/ 5531   5434.04/ 8072
2.6.21-sd-046    5556.44/ 5583   5446.86/ 8037   5407.50/ 8274
2.6.21.1         5417.62/ 5439   5422.37/ 7132   na     /na

                LTMB
                j1              j3              j3big
2.6.21-cfs-v6    7729.81/ 7755   7470.10/10244   7449.16/10186
2.6.21-sd-046    7611.00/ 7626   7573.16/10109   7458.10/10316
2.6.21.1         7438.31/ 7461   7620.72/11087   na     /na

                LTBM
                j1              j3              j3big
2.6.21-cfs-v6    7720.70/ 7746   7567.09/10362   7464.17/10335
2.6.21-sd-046    7431.06/ 7452   7539.83/10600   7474.20/10222
2.6.21.1         7452.80/ 7481   7484.19/ 9570   na     /na

                LTMM+LTMB+LTBM
                j1              j3              j3big
2.6.21-cfs-v6   21105.58/21183  20475.03/26137  20347.37/28593
2.6.21-sd-046   20598.50/20661  20559.85/28746  20339.80/28812
2.6.21.1        20308.73/20381  20527.28/27789  na      /na

User time apparently is subject to some variance. I'm particularly surprised
by the wallclock times of scenarios j1 and j3 for case LTMM with 2.6.21-cfs-v6.
I'm not sure what to make of this, i.e. whether something else was happening
on my machine during j1 of LTMM -- that has always been the first test
I ran, and it might be that there were still some other jobs running after
the initial boot.

Assuming scenario j1 does constitute the "true" time each task requires, and
also assuming each scheduler makes maximum use of the available CPUs (the
tests involve very little I/O), one could compute the expected wallclock time.
However, since I suspect the j1 figures of LTMM to be somewhat "dirty", I'll
refrain from doing so.
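
(For reference, the computation itself is simple; a sketch only, using the
possibly "dirty" cfs-v6 j1 user times from above merely as an example and
assuming both cores are kept busy the whole time:)

    # Sketch: expected wallclock for a parallel scenario, taking the j1
    # user times as the "true" CPU demand of each task.
    def expected_wallclock(user_times_j1, n_cores=2):
        return sum(user_times_j1) / float(n_cores)

    # j1 user times of LTMM, LTMB, LTBM for 2.6.21-cfs-v6 (tables above)
    print(expected_wallclock([5655.07, 7729.81, 7720.70]))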

Still, from these figures it seems as if sd provides the fairest
(as in equal share for all) scheduling among the 3 schedulers tested.

Best,
Michael
-- 
 Technosis GmbH, Managing Directors: Michael Gerdau, Tobias Dittmar
 Registered office Hamburg; HRB 89145, Amtsgericht Hamburg
 Vote against SPAM - see http://www.politik-digital.de/spam/
 Michael Gerdau       email: mgd@...hnosis.de
 GPG-keys available on request or at public keyserver

