Message-ID: <51650185.9060905@linux.vnet.ibm.com>
Date:	Wed, 10 Apr 2013 14:07:01 +0800
From:	Michael Wang <wangyun@...ux.vnet.ibm.com>
To:	Vincent Guittot <vincent.guittot@...aro.org>
CC:	Alex Shi <alex.shi@...el.com>,
	"mingo@...hat.com" <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arjan van de Ven <arjan@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>, Paul Turner <pjt@...gle.com>,
	Namhyung Kim <namhyung@...nel.org>,
	Mike Galbraith <efault@....de>,
	Morten Rasmussen <morten.rasmussen@....com>,
	gregkh@...uxfoundation.org,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Len Brown <len.brown@...el.com>, rafael.j.wysocki@...el.com,
	jkosina@...e.cz, clark.williams@...il.com,
	"tony.luck@...el.com" <tony.luck@...el.com>, keescook@...omium.org,
	mgorman@...e.de, riel@...hat.com
Subject: Re: [patch v3 6/8] sched: consider runnable load average in move_tasks

On 04/09/2013 03:08 PM, Vincent Guittot wrote:
> On 2 April 2013 05:23, Alex Shi <alex.shi@...el.com> wrote:
>> Besides using the runnable load average in the background, move_tasks() is
>> also a key function in load balancing. We need to consider the runnable load
>> average in it as well, so that the load comparison is apples to apples.
>>
>> Signed-off-by: Alex Shi <alex.shi@...el.com>
>> ---
>>  kernel/sched/fair.c | 11 ++++++++++-
>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 1f9026e..bf4e0d4 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3966,6 +3966,15 @@ static unsigned long task_h_load(struct task_struct *p);
>>
>>  static const unsigned int sched_nr_migrate_break = 32;
>>
>> +static unsigned long task_h_load_avg(struct task_struct *p)
>> +{
>> +       u32 period = p->se.avg.runnable_avg_period;
>> +       if (!period)
>> +               return 0;
>> +
>> +       return task_h_load(p) * p->se.avg.runnable_avg_sum / period;
> 
> How do you ensure that runnable_avg_period and runnable_avg_sum are
> coherent? An update of the statistics can occur in the middle of your
> sequence.

Hi, Vincent

Don't we have 'rq->lock' to protect it?

move_tasks() is invoked with both runqueue locks held, so for all the se on
the src and dst rq, no update should happen, should it?
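
To illustrate, here is a minimal sketch of the call site, loosely based on
load_balance() in kernel/sched/fair.c of this era (simplified, not the exact
code; 'busiest' is the src rq):

	/*
	 * Both rq locks are taken before any task is moved, so the
	 * per-entity load statistics should not change under move_tasks().
	 */
	local_irq_save(flags);
	double_rq_lock(env.dst_rq, busiest);

	cur_ld_moved = move_tasks(&env);	/* reads runnable_avg_* here */
	ld_moved += cur_ld_moved;

	double_rq_unlock(env.dst_rq, busiest);
	local_irq_restore(flags);

And as far as I can tell, runnable_avg_sum and runnable_avg_period are only
advanced with the owning rq's lock held (tick, enqueue/dequeue paths), so
with both locks held the pair we read here should stay consistent.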

Regards,
Michael Wang

> 
> Vincent
> 
>> +}
>> +
>>  /*
>>   * move_tasks tries to move up to imbalance weighted load from busiest to
>>   * this_rq, as part of a balancing operation within domain "sd".
>> @@ -4001,7 +4010,7 @@ static int move_tasks(struct lb_env *env)
>>                 if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
>>                         goto next;
>>
>> -               load = task_h_load(p);
>> +               load = task_h_load_avg(p);
>>
>>                 if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
>>                         goto next;
>> --
>> 1.7.12
>>

