Date:	Tue, 18 Mar 2014 10:17:40 +0800
From:	Alex Shi <alex.shi@...aro.org>
To:	mingo@...hat.com, peterz@...radead.org, morten.rasmussen@....com
CC:	vincent.guittot@...aro.org, daniel.lezcano@...aro.org,
	fweisbec@...il.com, linux@....linux.org.uk, tony.luck@...el.com,
	fenghua.yu@...el.com, james.hogan@...tec.com, alex.shi@...aro.org,
	jason.low2@...com, viresh.kumar@...aro.org, hanjun.guo@...aro.org,
	linux-kernel@...r.kernel.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, arjan@...ux.intel.com, pjt@...gle.com,
	fengguang.wu@...el.com, linaro-kernel@...ts.linaro.org,
	wangyun@...ux.vnet.ibm.com, mgorman@...e.de
Subject: Re: [PATCH 0/8] sched: remove cpu_load array

On 03/17/2014 11:24 AM, Alex Shi wrote:
> On 03/13/2014 01:57 PM, Alex Shi wrote:
>> In the cpu_load decay usage, we mix the long-term and short-term load with
>> the balance bias, arbitrarily picking a bigger or smaller value from them
>> according to whether the cpu is the balance destination or source. This mix
>> is wrong: the balance bias should be based on the cost of moving tasks
>> between cpu groups, not on arbitrary history or instant load. The history
>> load may diverge a lot from the real load, which leads to an incorrect bias.
>>
>> In fact, the cpu_load decay can be replaced by the sched_avg decay, which
>> also decays load over time. The balance bias part can rely entirely on the
>> fixed bias imbalance_pct, which is already used in the newly-idle, wake,
>> fork/exec and numa balancing scenarios.
>>
>> Currently the only working indexes are busy_idx and idle_idx.
>> As to busy_idx:
>> We mix the history load decay and the bias together. The ridiculous thing
>> is that when all cpu loads are continuously stable, the long-term and
>> short-term loads are the same, so the bias loses its meaning and any
>> minimal imbalance may cause unnecessary task moving. To prevent this from
>> happening, we have to reuse imbalance_pct again in find_busiest_group(),
>> but that clearly over-biases in normal conditions, and it is even worse
>> when there is bursty load in the system.
>>
> 
> Any comments?
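For context, the cpu_load[idx] decay that the series removes works roughly
like this; a simplified sketch of the per-tick update, not the actual
kernel source:

```c
#include <assert.h>

/* One scheduler-tick update of cpu_load[idx] (idx >= 1): the old
 * sample decays toward the instant load, with a half-life that grows
 * with idx, roughly
 *   new = (old * (2^idx - 1) + instant) / 2^idx
 */
unsigned long decay_cpu_load(unsigned long old_load,
                             unsigned long instant_load,
                             unsigned int idx)
{
	unsigned long scale = 1UL << idx;

	return (old_load * (scale - 1) + instant_load) / scale;
}
```

A stable load is a fixed point of this update, which is exactly why the
long-term and short-term loads converge to the same value on stable
systems.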

IMHO, the task moving resistance, the bias, is only related to the task
migration cost; it has no relationship with the history load.

Another issue with mixing the history load into the bias is that we want
the history load to have an impact, but it is often ignored whenever the
instant load fits the bias requirement better -- bigger on the destination
or smaller on the source.

Any comments on this? :)
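The mixing described above can be sketched like this; a simplified version
of the pre-patch source_load()/target_load() pair (signatures simplified
for illustration, not the actual kernel source):

```c
#include <assert.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

/* On the balance source the smaller of instant and decayed history
 * load wins, so history is ignored whenever the instant load is
 * smaller... */
unsigned long source_load(unsigned long instant, unsigned long decayed)
{
	return min_ul(decayed, instant);
}

/* ...and on the destination the bigger one wins, so history is
 * ignored whenever the instant load is bigger. */
unsigned long target_load(unsigned long instant, unsigned long decayed)
{
	return max_ul(decayed, instant);
}
```

Note that when loads are stable (instant == decayed) the two functions
return the same value and the min/max bias vanishes entirely, which is
the busy_idx problem described in the cover letter.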

> 
>> As to idle_idx:
>> Though I have some concern about its usage correctness,
>> https://lkml.org/lkml/2014/3/12/247, since we are working on moving cpu
>> idle handling into the scheduler, that problem will be reconsidered then.
>> We don't need to care about it now.
>>
>> This patch set removes the cpu_load idx decay, since it can be replaced
>> by the sched_avg feature, and leaves the imbalance_pct bias untouched;
>> only idle_idx misses it, but that is fine and will be reconsidered soon.
>>
>>
>> V5:
>> 1, remove the unified bias patch and the biased_load() function. Thanks
>> for PeterZ's comments!
>> 2, remove get_sd_load_idx() in the 1st patch, per SrikarD's suggestion.
>> 3, remove the LB_BIAS feature; it is not needed now.
> 
> 
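The fixed bias the series keeps can be sketched like this; a simplified
version of the imbalance_pct threshold applied in find_busiest_group()
(the helper name is mine, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Balance only when the busiest group carries more than imbalance_pct
 * percent of the local group's load (e.g. 125 means "25% busier"):
 * a fixed migration-cost threshold, independent of load history. */
bool worth_balancing(unsigned long busiest_load,
                     unsigned long local_load,
                     unsigned int imbalance_pct)
{
	return 100UL * busiest_load > (unsigned long)imbalance_pct * local_load;
}
```

Because the threshold is a constant percentage, it behaves the same under
stable and bursty loads, unlike the history/bias mix criticized above.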


-- 
Thanks
    Alex
