Message-ID: <004701cd4929$200d4600$6027d200$@net>
Date:	Tue, 12 Jun 2012 22:55:14 -0700
From:	"Doug Smythies" <dsmythies@...us.net>
To:	"'Peter Zijlstra'" <peterz@...radead.org>,
	"'Charles Wang'" <muming.wq@...il.com>
Cc:	<linux-kernel@...r.kernel.org>, "'Ingo Molnar'" <mingo@...hat.com>,
	"'Charles Wang'" <muming.wq@...bao.com>, "'Tao Ma'" <tm@....ma>,
	'含黛' <handai.szj@...bao.com>,
	"'Doug Smythies'" <dsmythies@...us.net>
Subject: RE: [PATCH] sched: Folding nohz load accounting more accurate

> On 2012.06.12 02:56 -0800 (I think), Peter Zijlstra wrote:

>Also added Doug to CC, hopefully we now have everybody who pokes at this
>stuff.

Thanks.

On my computer, and continuing a different thread from yesterday, I
let the multiple processes test of the proposed "wang" patch run for
another 24 hours. The png file showing the results is attached, and
is also available at [1].

Conclusion: The proposed "wang" patch is worse under lower load
conditions, giving larger reported load average errors for the same
conditions. The proposed "wang" patch tends toward a reported load
equal to the number of processes, independent of the actual load of
those processes.
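
For reference, the reported averages themselves are the kernel's
standard fixed point exponential decay over the sampled number of
active (runnable + uninterruptible) tasks, updated once per LOAD_FREQ
(5 second) interval. Roughly, paraphrased from calc_load() and the
constants in include/linux/sched.h (simplified; this is not the nohz
folding code that the patch changes):

/* Fixed point arithmetic: FSHIFT fractional bits. */
#define FSHIFT	11
#define FIXED_1	(1 << FSHIFT)
#define EXP_1	1884	/* decay factor for 1 min, in FIXED_1 units */
#define EXP_5	2014	/* decay factor for 5 min */
#define EXP_15	2037	/* decay factor for 15 min */

/* One step: load = load*exp + active*(FIXED_1 - exp), fixed point. */
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	return load >> FSHIFT;
}

The "active" input is an instantaneous sample, so a set of processes
that happens to be runnable at each sample point reads as fully busy
regardless of its true duty cycle, which would be consistent with the
tendency toward the process count described above.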

Interestingly, with the "wang" patch I was able to remove the 10
tick grace period without bad side effects (very minimally tested).
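
For anyone else reading: the grace period I mean is the "+ 10" in
calc_global_load(). A simplified sketch of that function, from memory
of kernel/sched/core.c around this version (again omitting the nohz
folding that this thread is actually about):

void calc_global_load(unsigned long ticks)
{
	long active;

	/* the 10 tick grace period after calc_load_update expires */
	if (time_before(jiffies, calc_load_update + 10))
		return;

	active = atomic_long_read(&calc_load_tasks);
	active = active > 0 ? active * FIXED_1 : 0;

	avenrun[0] = calc_load(avenrun[0], EXP_1, active);
	avenrun[1] = calc_load(avenrun[1], EXP_5, active);
	avenrun[2] = calc_load(avenrun[2], EXP_15, active);

	calc_load_update += LOAD_FREQ;
}

Removing the grace period just means the avenrun[] update happens
right at calc_load_update instead of up to 10 ticks later.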

@ Charles or Tao: If I could ask: what is your expected load for your
16 process case? Because you used to get a reported load average of
< 1, we know that the processes enter and exit idle (sleep) at a high
frequency (as that was the only possible way for the older
under-reporting issue to occur, at least as far as I know). You said
it now reports a load average of 8 to 10, but that is too low. How
many CPUs do you have? I have been unable to re-create your situation
on my test computer (an i7 CPU).
When I run 16 processes, each of which would use 0.95 of a CPU if the
system did not become resource limited, I get a reported load average
of about 15 to 16. Kernel = 3.5-rc2. The sleep frequency of each
process was about 80 Hertz.
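
In case anyone wants to try to reproduce that: each of my test
processes is essentially a busy/sleep loop with a fixed duty cycle.
A minimal sketch (not my actual test source; the duty cycle and
period are the 0.95 load at ~80 Hertz case from above):

/* Fractional load test process: busy for BUSY_NS of each period,
 * asleep for the rest. At 80 Hz with a 0.95 duty cycle each copy
 * presents a load of 0.95, so 16 copies should report a load
 * average near 16 * 0.95 = 15.2 on an otherwise idle machine.
 */
#include <time.h>

#define PERIOD_NS 12500000L			/* 80 Hz -> 12.5 ms */
#define BUSY_NS   ((long)(PERIOD_NS * 0.95))

static long elapsed_ns(const struct timespec *a, const struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) * 1000000000L
	     + (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
	struct timespec start, now;
	struct timespec idle = { 0, PERIOD_NS - BUSY_NS };

	for (;;) {
		clock_gettime(CLOCK_MONOTONIC, &start);
		do {				/* busy phase */
			clock_gettime(CLOCK_MONOTONIC, &now);
		} while (elapsed_ns(&start, &now) < BUSY_NS);
		nanosleep(&idle, NULL);		/* sleep phase */
	}
}

Run 16 copies (for i in $(seq 1 16); do ./busy & done) and watch
/proc/loadavg.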

[1]
http://www.smythies.com/~doug/network/load_average/load_processes_wang.html

Doug Smythies


Attachment: "load_processes_wang.png" (image/png, 38927 bytes)
