Date:	Mon, 08 Apr 2013 08:55:19 +0530
From:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To:	Alex Shi <alex.shi@...el.com>
CC:	mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
	akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
	pjt@...gle.com, namhyung@...nel.org, efault@....de,
	morten.rasmussen@....com, vincent.guittot@...aro.org,
	gregkh@...uxfoundation.org, viresh.kumar@...aro.org,
	linux-kernel@...r.kernel.org, len.brown@...el.com,
	rafael.j.wysocki@...el.com, jkosina@...e.cz,
	clark.williams@...il.com, tony.luck@...el.com,
	keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [patch v7 20/21] sched: don't do power balance on share cpu power
 domain

Hi Alex,

I am sorry, I overlooked the changes you have made to the power
scheduling policies. Now you have just two: performance and powersave.

Hence you can ignore my comments below. But if you use group->capacity
instead of group->weight as the threshold, like you did for the balance
policy in version 5 of this patchset, don't you think the patch below
can be avoided? With group->capacity as the threshold, you would
automatically avoid packing onto domains that share cpu power.
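
To make that concrete, here is a minimal standalone sketch of the
idea. The struct and helper names are illustrative only, not the
actual fields and helpers in kernel/sched/fair.c:

/*
 * Illustrative sketch -- "group_stats", "weight" and "capacity"
 * stand in for the real per-group statistics in kernel/sched/fair.c.
 */
struct group_stats {
	unsigned int weight;	/* logical CPUs in the group */
	unsigned int capacity;	/* full-CPU equivalents of compute power */
};

/*
 * With capacity as the packing threshold, an SMT group (where
 * SD_SHARE_CPUPOWER is set and capacity < weight) stops accepting
 * tasks before siblings get stacked, so no explicit
 * SD_SHARE_CPUPOWER check is needed.
 */
static inline int group_has_room(const struct group_stats *g,
				 unsigned int nr_running)
{
	return nr_running < g->capacity;
}

Roughly speaking, with group->weight as the threshold an SMT-2 group
would accept two tasks, while with group->capacity it would accept
only one, which is the same behaviour the patch below enforces with an
explicit flag check.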

Regards
Preeti U Murthy

On 04/08/2013 08:47 AM, Preeti U Murthy wrote:
> Hi Alex,
> 
> On 04/04/2013 07:31 AM, Alex Shi wrote:
>> Packing tasks onto such a domain can't save power; it only costs
>> performance. So don't do power balance on them.
> 
> As far as my understanding goes, the powersave policy is the one that
> tries to pack tasks onto a SIBLING domain (a domain where
> SD_SHARE_CPUPOWER is set). The balance policy does not do that: it
> does not pack on the domain that shares CPU power, but packs across
> all other domains. So the change you are making below results in
> nothing but the default behaviour of the balance policy.
> 
> Correct me if I am wrong, but my point is that this patchset
> introduces the powersave policy, and the patch below removes its
> characteristic behaviour of packing onto domains that share cpu
> power, thus making it default to the balance policy. Now there are
> two policies which behave the same way: balance and powersave.
> 
>>
>> Signed-off-by: Alex Shi <alex.shi@...el.com>
>> ---
>>  kernel/sched/fair.c | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 047a1b3..3a0284b 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3503,7 +3503,7 @@ static int get_cpu_for_power_policy(struct sched_domain *sd, int cpu,
>>
>>  	policy = get_sd_sched_balance_policy(sd, cpu, p, sds);
>>  	if (policy != SCHED_POLICY_PERFORMANCE && sds->group_leader) {
>> -		if (wakeup)
>> +		if (wakeup && !(sd->flags & SD_SHARE_CPUPOWER))
>>  			new_cpu = find_leader_cpu(sds->group_leader,
>>  							p, cpu, policy);
>>  		/* for fork balancing and a little busy task */
>> @@ -4410,8 +4410,9 @@ static unsigned long task_h_load(struct task_struct *p)
>>  static inline void init_sd_lb_power_stats(struct lb_env *env,
>>  						struct sd_lb_stats *sds)
>>  {
>> -	if (sched_balance_policy == SCHED_POLICY_PERFORMANCE ||
>> -				env->idle == CPU_NOT_IDLE) {
>> +	if (sched_balance_policy == SCHED_POLICY_PERFORMANCE
>> +			|| env->sd->flags & SD_SHARE_CPUPOWER
>> +			|| env->idle == CPU_NOT_IDLE) {
>>  		env->flags &= ~LBF_POWER_BAL;
>>  		env->flags |= LBF_PERF_BAL;
>>  		return;
>>
> 
> Regards
> Preeti U Murthy
> 

