Message-ID: <20170802132405.z5gvut7ecaygbhvy@hirez.programming.kicks-ass.net>
Date:   Wed, 2 Aug 2017 15:24:05 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Brendan Jackman <brendan.jackman@....com>
Cc:     Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
        Joel Fernandes <joelaf@...gle.com>,
        Andres Oportus <andresoportus@...gle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Josef Bacik <josef@...icpanda.com>,
        Morten Rasmussen <morten.rasmussen@....com>
Subject: Re: [PATCH] sched/fair: Sync task util before slow-path wakeup

On Wed, Aug 02, 2017 at 02:10:02PM +0100, Brendan Jackman wrote:
> We use task_util in find_idlest_group via capacity_spare_wake. This
> task_util is updated in wake_cap. However, wake_cap is not the only
> reason for ending up in find_idlest_group - we could have been sent
> there by wake_wide. So explicitly sync the task util with prev_cpu
> when we are about to head to find_idlest_group.
> 
> We could simply do this at the beginning of
> select_task_rq_fair (i.e. irrespective of whether we're heading to
> select_idle_sibling or find_idlest_group & co), but I didn't want to
> slow down the select_idle_sibling path more than necessary.
> 
> Don't do this during fork balancing, we won't need the task_util and
> we'd just clobber the last_update_time, which is supposed to be 0.

So I remember Morten explicitly not aging the util of tasks on wakeup,
because the old (un-decayed) util was higher and more representative of
what the new util would be, or something along those lines.

Morten?
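
To illustrate that aging concern (rough standalone sketch, made-up
numbers, not actual kernel code): PELT decays a sleeping task's util
geometrically with a ~32ms half-life, so syncing it on wakeup can hand
find_idlest_group() a much smaller value than the task had when it went
to sleep:

	/* Illustrative only, not kernel code; y is the per-ms decay, y^32 == 0.5 */
	#include <math.h>
	#include <stdio.h>

	int main(void)
	{
		double util = 800.0;            /* hypothetical util_avg before sleeping */
		double y = pow(0.5, 1.0 / 32);  /* half-life of ~32ms */
		int slept_ms[] = { 0, 8, 32, 100 };

		for (int i = 0; i < 4; i++)
			printf("slept %3dms -> util ~%.0f\n",
			       slept_ms[i], util * pow(y, slept_ms[i]));
		return 0;
	}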

> Signed-off-by: Brendan Jackman <brendan.jackman@....com>
> Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: Vincent Guittot <vincent.guittot@...aro.org>
> Cc: Josef Bacik <josef@...icpanda.com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Morten Rasmussen <morten.rasmussen@....com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> ---
>  kernel/sched/fair.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c95880e216f6..62869ff252b4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5913,6 +5913,14 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>  			new_cpu = cpu;
>  	}
>  
> +	if (sd && !(sd_flag & SD_BALANCE_FORK))
> +		/*
> +		 * We're going to need the task's util for capacity_spare_wake
> +		 * in find_idlest_group. Sync it up to prev_cpu's
> +		 * last_update_time.
> +		 */
> +		sync_entity_load_avg(&p->se);
> +

That's missing {} (single statement, but with the comment the body
spans multiple lines).
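
I.e. something like this (the same hunk, just braced; a sketch, not a
new patch):

	if (sd && !(sd_flag & SD_BALANCE_FORK)) {
		/*
		 * We're going to need the task's util for
		 * capacity_spare_wake() in find_idlest_group(). Sync it up
		 * to prev_cpu's last_update_time.
		 */
		sync_entity_load_avg(&p->se);
	}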


>  	if (!sd) {
>   pick_cpu:

And if this patch lives, can you please fix up that broken label indent?
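
Presumably just pulling the label back to column 0, i.e. (sketch only;
checkpatch also warns that labels should not be indented):

	if (!sd) {
pick_cpu:
		if (sd_flag & SD_BALANCE_WAKE)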

>  		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
