Date:	Mon, 04 Jun 2012 08:39:35 -0700
From:	Arjan van de Ven <arjan@...ux.intel.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC:	Vladimir Davydov <vdavydov@...allels.com>,
	Ingo Molnar <mingo@...e.hu>, Len Brown <lenb@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cpuidle: menu: use nr_running instead of cpuload for
 calculating perf mult

On 6/4/2012 8:26 AM, Peter Zijlstra wrote:
> On Mon, 2012-06-04 at 08:14 -0700, Arjan van de Ven wrote:
>> On 6/4/2012 8:08 AM, Peter Zijlstra wrote:
>>> On Mon, 2012-06-04 at 06:48 -0700, Arjan van de Ven wrote:
>>>> It's not about being busy, it's about being performance sensitive.
>>>> It's not a super nice proxy, no argument, but it's one of the few
>>>> long-term ones we have.
>>>>
>>> I'm still not seeing how it makes any sense at all. Is there an actual
>>> workload where this matters?
>>
>> yes there are, mostly server ones.
> 
> OK, so pick one that cares, and try creating a heuristic based on wakeup
> history or whatever.

Sounds easy. It is not.

> 
>> the problem isn't an individual idle, it's that the 100us-200us
>> latencies add up if you go in and out repeatedly, when the system is in
>> a situation where it is sensitive to performance (which is not an
>> instant thing, this is an "over the long run we're busy" thing)...
>> ... they become a real factor.
> 
> Right, but since you're inflating idle time, the work will be displaced
> and will complete later. This should result in your idle time estimate
> shrinking.

Hmm, I think you're missing the whole point.


> 
> I'm just not buying that load actually matters or works: if there's lots of
> idle time, load history should be low; if there's not a lot of idle time,
> you're busy (by definition) and again load isn't important.

If there is a lot of idle time, load can be low or high; load is more than
just CPU usage, it also includes waiting for resources, mutexes, etc.

If load is low, you are idle, sure (in that direction it works), and in that
case the heuristic used here will not hinder a deep C-state choice.

If there is not a lot of idle time, sure, load is high. But because idle
time tends to be bursty, we can still be idle for, say, a millisecond every
10 milliseconds. In this scenario the load average is used to decide whether
the roughly 200 microsecond cost of exiting idle is acceptable.
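
To make that concrete, the effect in the governor is roughly the sketch
below. This is a simplified illustration of a load-based performance
multiplier, not verbatim menu.c; the function and parameter names are mine:

/*
 * Simplified sketch: a load-derived multiplier makes the governor more
 * reluctant to pick a deep C state when the system has recently been busy.
 */
static int perf_multiplier(unsigned long loadavg, unsigned int iowaiters)
{
	int mult = 1;

	mult += 2 * loadavg;	/* busier system -> bigger penalty     */
	mult += 10 * iowaiters;	/* tasks waiting on IO weigh even more */

	return mult;
}

/*
 * A C state is then skipped when its exit latency, scaled by the
 * multiplier, no longer fits in the predicted idle period:
 *
 *	if (state->exit_latency * mult > predicted_us)
 *		continue;	// try a shallower state instead
 */

So in the bursty case above, a high load average inflates the effective cost
of the 200 usec exit latency and steers selection toward shallower states,
even though each individual idle period looks long enough on its own.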


One other way of doing this would be to track cumulative accrued latency as
a percentage of CPU busy time... but that is also a pretty approximate
measure.
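
Purely as an illustration of that alternative, it could look something like
the following; this is hypothetical accounting, not existing kernel code,
and all the names are made up:

/*
 * Hypothetical sketch: accumulate the exit latency we actually paid and
 * compare it with CPU busy time, giving a "latency overhead" percentage.
 */
struct idle_latency_acct {
	unsigned long long accrued_exit_latency_ns;	/* total wakeup latency paid  */
	unsigned long long busy_time_ns;		/* time spent doing real work */
};

static unsigned int latency_overhead_percent(const struct idle_latency_acct *a)
{
	if (!a->busy_time_ns)
		return 0;
	return (a->accrued_exit_latency_ns * 100) / a->busy_time_ns;
}

The governor could then refuse deep states once that percentage crosses some
threshold, but as said, it is still only an approximate measure.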