Date:	Mon, 16 Apr 2007 10:47:25 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Satoru Takeuchi <takeuchi_satoru@...fujitsu.com>
Cc:	surya.prabhakar@...ro.com, kernel@...ivas.org,
	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	akpm@...ux-foundation.org, npiggin@...e.de, efault@....de,
	arjan@...radead.org, tglx@...utronix.de, wli@...omorphy.com
Subject: Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40


* Satoru Takeuchi <takeuchi_satoru@...fujitsu.com> wrote:

> > btw., other schedulers might work better with some more test-time: 
> > i'd suggest running for 60 seconds (./massive_intr 10 60) [or maybe 
> > longer, with more threads] to see long-term fairness effects.
> 
> I tested CFS with massive_intr, covering the long-term, many-CPU and 
> many-process cases.
> 
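side note for others reading along: each massive_intr process basically
burns CPU for ~8 msecs, then sleeps for ~1 msec, and counts how many
such iterations it completes before the runtime expires. With 300 secs,
4 CPUs and 800 processes that predicts about 300*4/800/0.008 =~ 187
loops per process, which matches the table below. A minimal sketch of
that structure, written from memory here, so the real massive_intr.c
certainly differs in detail:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define WORK_USECS	8000	/* CPU-burn slice per iteration */
#define SLEEP_USECS	1000	/* sleep slice per iteration */

static long long now_usecs(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000000LL + tv.tv_usec;
}

static void worker(int runtime_secs)
{
	long long end = now_usecs() + runtime_secs * 1000000LL;
	unsigned long loops = 0;

	while (now_usecs() < end) {
		long long slice = now_usecs() + WORK_USECS;

		while (now_usecs() < slice)
			;			/* burn CPU */
		usleep(SLEEP_USECS);		/* then sleep "interactively" */
		loops++;
	}
	printf("%d: %lu loops\n", (int)getpid(), loops);
}

int main(int argc, char **argv)
{
	int nproc, secs, i;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <nproc> <secs>\n", argv[0]);
		return 1;
	}
	nproc = atoi(argv[1]);		/* e.g. ./massive_intr 200 300 */
	secs = atoi(argv[2]);

	for (i = 0; i < nproc; i++)
		if (fork() == 0) {
			worker(secs);
			exit(0);
		}
	while (wait(NULL) > 0)
		;			/* reap all workers */
	return 0;
}
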
> Test environment
> ================
> 
>  - kernel:         2.6.21-rc6-CFS
>  - run time:       300 secs
>  - # of CPUs:      1 or 4
>  - # of processes: 200 or 800
> 
> Result
> ======
> 
>   +---------+-----------+-------+------+------+--------+
>   |   # of  |   # of    | avg   | max  | min  |  stdev |
>   |   CPUs  | processes | (*1)  | (*2) | (*3) |  (*4)  |
>   +---------+-----------+-------+------+------+--------+
>   | 1(i386) |    200    | 117.9 |  123 |  115 |    1.2 |
>   +---------+-----------+-------+------+------+--------+
>   | 4(ia64) |    200    | 750.2 |  767 |  735 |   10.6 |
>   +---------+-----------+-------+------+------+--------+
>   | 4(ia64) |  800(*5)  | 187.3 |  189 |  186 |    0.8 |
>   +---------+-----------+-------+------+------+--------+
> 
>   *1) average number of loops among all processes
>   *2) maximum number of loops among all processes
>   *3) minimum number of loops among all processes
>   *4) standard deviation
>   *5) the # of processes per CPU is the same as in the first test case
> 
> Pretty good! CFS seems to be fair in every case tested.

thanks for testing this! Indeed the min/max values and the standard 
deviation all look pretty healthy. (In fact they seem to be better than 
with the other patch of mine against upstream that you tested, correct?)
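
btw., the stdev column is the one that matters most for long-term
fairness: with perfect fairness every process completes the same number
of loops and the stdev goes to zero. For anyone who wants to reproduce
the columns, here is a quick sketch that post-processes the per-process
loop counts (an assumed pipeline, not necessarily how the numbers above
were computed):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double x, sum = 0.0, sumsq = 0.0, max = 0.0, min = 1e18;
	long n = 0;

	/* expects one "<pid>: <count> loops" line per process on stdin */
	while (scanf("%*d: %lf loops", &x) == 1) {
		sum += x;
		sumsq += x * x;
		if (x > max)
			max = x;
		if (x < min)
			min = x;
		n++;
	}
	if (!n)
		return 1;
	/* population standard deviation */
	printf("avg %.1f max %.0f min %.0f stdev %.1f\n",
	       sum / n, max, min,
	       sqrt(sumsq / n - (sum / n) * (sum / n)));
	return 0;
}

(link with -lm and pipe the massive_intr output into it.)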

[ And there's also another nice little detail in your feedback: CFS
  actually builds, boots and works fine on ia64 too ;-) ]

	Ingo
