Date:	Wed, 12 Dec 2012 10:11:03 +0530
From:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To:	Alex Shi <alex.shi@...el.com>
CC:	rob@...dley.net, mingo@...hat.com, peterz@...radead.org,
	gregkh@...uxfoundation.org, andre.przywara@....com, rjw@...k.pl,
	paul.gortmaker@...driver.com, akpm@...ux-foundation.org,
	paulmck@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	pjt@...gle.com, vincent.guittot@...aro.org
Subject: Re: [PATCH 08/18] sched: consider runnable load average in move_tasks

Hi Alex,
On 12/10/2012 01:52 PM, Alex Shi wrote:
> Besides using the runnable load average in the background, move_tasks is
> also a key function in load balancing. We need to consider the runnable
> load average in it so that the load comparison is apples to apples.
> 
> Signed-off-by: Alex Shi <alex.shi@...el.com>
> ---
>  kernel/sched/fair.c |   11 ++++++++++-
>  1 files changed, 10 insertions(+), 1 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6d893a6..bbb069c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3741,6 +3741,15 @@ static unsigned long task_h_load(struct task_struct *p);
>  
>  static const unsigned int sched_nr_migrate_break = 32;
>  
> +static unsigned long task_h_load_avg(struct task_struct *p)
> +{
> +	u32 period = p->se.avg.runnable_avg_period;
> +	if (!period)
> +		return 0;
> +
> +	return task_h_load(p) * p->se.avg.runnable_avg_sum / period;
                        ^^^^^^^^^^^^
This might result in an overflow, considering you are multiplying two
32-bit integers. Below is how this is handled in
__update_task_entity_contrib() in kernel/sched/fair.c:

u32 contrib;
/* avoid overflowing a 32-bit type w/ SCHED_LOAD_SCALE */
contrib = se->avg.runnable_avg_sum * scale_load_down(se->load.weight);
contrib /= (se->avg.runnable_avg_period + 1);
se->avg.load_avg_contrib = scale_load(contrib);
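
One untested way to sidestep the overflow on 32-bit might be to widen to
64 bits before multiplying, something along these lines (just a sketch to
illustrate the idea, using div_u64() from <linux/math64.h>):

static unsigned long task_h_load_avg(struct task_struct *p)
{
	u32 period = p->se.avg.runnable_avg_period;

	if (!period)
		return 0;

	/* widen before multiplying so the product cannot wrap on 32-bit */
	return div_u64((u64)task_h_load(p) * p->se.avg.runnable_avg_sum,
		       period);
}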

Also, why can't p->se.avg.load_avg_contrib be used directly as the return
value of task_h_load_avg(), since it is already updated in
__update_task_entity_contrib() and __update_group_entity_contrib()?
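
Untested, but something like the below is what I have in mind (assuming
load_avg_contrib is kept current for the tasks being migrated):

static unsigned long task_h_load_avg(struct task_struct *p)
{
	/* already maintained by __update_task_entity_contrib() */
	return p->se.avg.load_avg_contrib;
}

It may still need to be combined with the group h_load, but it would avoid
recomputing the ratio here.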
> +}
> +
>  /*
>   * move_tasks tries to move up to imbalance weighted load from busiest to
>   * this_rq, as part of a balancing operation within domain "sd".
> @@ -3776,7 +3785,7 @@ static int move_tasks(struct lb_env *env)
>  		if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
>  			goto next;
>  
> -		load = task_h_load(p);
> +		load = task_h_load_avg(p);
>  
>  		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
>  			goto next;
> 

Regards
Preeti U Murthy

