Date:	Wed, 12 Dec 2012 13:52:48 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>, pjt@...gle.com
CC:	rob@...dley.net, mingo@...hat.com, peterz@...radead.org,
	gregkh@...uxfoundation.org, andre.przywara@....com, rjw@...k.pl,
	paul.gortmaker@...driver.com, akpm@...ux-foundation.org,
	paulmck@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	vincent.guittot@...aro.org
Subject: Re: [PATCH 07/18] sched: compute runnable load avg in cpu_load and
 cpu_avg_load_per_task

On 12/12/2012 11:57 AM, Preeti U Murthy wrote:
> Hi Alex,
> On 12/10/2012 01:52 PM, Alex Shi wrote:
>> They are the base values in load balance, update them with rq runnable
>> load average, then the load balance will consider runnable load avg
>> naturally.
>>
>> Signed-off-by: Alex Shi <alex.shi@...el.com>
>> ---
>>  kernel/sched/core.c |    4 ++--
>>  kernel/sched/fair.c |    4 ++--
>>  2 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 96fa5f1..0ecb907 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2487,7 +2487,7 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>>  void update_idle_cpu_load(struct rq *this_rq)
>>  {
>>  	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
>> -	unsigned long load = this_rq->load.weight;
>> +	unsigned long load = (unsigned long)this_rq->cfs.runnable_load_avg;
>>  	unsigned long pending_updates;
>>  
>>  	/*
>> @@ -2537,7 +2537,7 @@ static void update_cpu_load_active(struct rq *this_rq)
>>  	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
>>  	 */
>>  	this_rq->last_load_update_tick = jiffies;
>> -	__update_cpu_load(this_rq, this_rq->load.weight, 1);
>> +	__update_cpu_load(this_rq, this_rq->cfs.runnable_load_avg, 1);
>>  
>>  	calc_load_account_active(this_rq);
>>  }
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 61c8d24..6d893a6 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -2680,7 +2680,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>>  /* Used instead of source_load when we know the type == 0 */
>>  static unsigned long weighted_cpuload(const int cpu)
>>  {
>> -	return cpu_rq(cpu)->load.weight;
>> +	return (unsigned long)cpu_rq(cpu)->cfs.runnable_load_avg;
> 
> I was wondering why you have typecast cfs.runnable_load_avg to
> unsigned long. Have you looked into why it was declared as u64 in the
> first place?

PJT:
Could we change cfs.runnable_load_avg to unsigned long? Since it is an
unsigned long value multiplied by a factor less than 1, the result
still fits in unsigned long.
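
To make the bound concrete, here is a rough standalone sketch (my
reading of the math, not the exact kernel code): the average is
load.weight scaled by a runnable fraction <= 1, so the u64 value never
exceeds load.weight, which is itself an unsigned long:

	#include <stdint.h>

	/*
	 * Illustrative only. Assumes weight * runnable_sum fits in a
	 * u64, which holds in the scheduler since both are bounded.
	 * With runnable_sum <= period + 1 we get avg <= weight, so
	 * the cast back to unsigned long is lossless.
	 */
	static unsigned long scaled_load(unsigned long weight,
					 uint64_t runnable_sum,
					 uint64_t period)
	{
		uint64_t avg = (uint64_t)weight * runnable_sum / (period + 1);
		return (unsigned long)avg;
	}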

> 
>>  }
>>  
>>  /*
>> @@ -2727,7 +2727,7 @@ static unsigned long cpu_avg_load_per_task(int cpu)
>>  	unsigned long nr_running = ACCESS_ONCE(rq->nr_running);
>>  
>>  	if (nr_running)
>> -		return rq->load.weight / nr_running;
>> +		return rq->cfs.runnable_load_avg / nr_running;
> 
> rq->cfs.runnable_load_avg is of type u64. You will need to typecast
> it here as well, right? How does this division work, given that the
> return type is unsigned long?

Yes, an explicit cast is better.
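Something like the below (just a sketch; div_u64() from linux/math64.h
also keeps the 64-bit division cheap on 32-bit, assuming nr_running
fits in 32 bits, which it does in practice):

	if (nr_running)
		return (unsigned long)div_u64(rq->cfs.runnable_load_avg,
					      (u32)nr_running);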
>>  
>>  	return 0;
>>  }
>>
> 
> Regards
> Preeti U Murthy
> 
