Date:	Fri, 17 Jun 2016 15:51:21 +0200
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Yuyang Du <yuyang.du@...el.com>, Ingo Molnar <mingo@...nel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Benjamin Segall <bsegall@...gle.com>,
	Paul Turner <pjt@...gle.com>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Matt Fleming <matt@...eblueprint.co.uk>
Subject: Re: [PATCH 2/4] sched/fair: Fix PELT integrity for new groups

On 17 June 2016 at 14:01, Peter Zijlstra <peterz@...radead.org> wrote:
> Vincent reported that when a new task is moved into a new cgroup it

The task doesn't have to be new; only the cgroup is.

> gets attached twice to the load tracking.
>
>   sched_move_task()
>     task_move_group_fair()
>       detach_task_cfs_rq()
>       set_task_rq()
>       attach_task_cfs_rq()
>         attach_entity_load_avg()
>           se->avg.last_load_update = cfs_rq->avg.last_load_update // == 0
>
>   enqueue_entity()
>     enqueue_entity_load_avg()
>       update_cfs_rq_load_avg()
>         now = clock()
>         __update_load_avg(&cfs_rq->avg)
>           cfs_rq->avg.last_load_update = now
>           // ages load/util for: now - 0, load/util -> 0
>       if (migrated)
>         attach_entity_load_avg()
>           se->avg.last_load_update = cfs_rq->avg.last_load_update; // now != 0
>
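
For readers following the trace: __update_load_avg() decays the
averages geometrically (a contribution roughly halves every 32ms,
since y^32 = 1/2), so aging over "now - 0" wipes out the load the
first attach just added, and because se->avg.last_load_update is
still 0 the migrated test fires and attaches the entity a second
time. A minimal sketch of that decay step -- not the actual kernel
code, the names and constants are simplified stand-ins:

	#include <stdio.h>

	/* PELT-style geometric decay: contribution halves every ~32ms */
	static unsigned long decay(unsigned long load, unsigned long ms)
	{
		while (ms >= 32) {
			load >>= 1;
			ms -= 32;
		}
		return load;
	}

	int main(void)
	{
		/* aging from last_load_update == 0: delta ~= whole uptime */
		printf("%lu\n", decay(1024, 100000));	/* prints 0 */
		return 0;
	}
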
> The problem is that we don't update cfs_rq load_avg before all
> entity attach/detach operations. Only enqueue and migrate_task do
> this.
>
> By fixing this, the above will not happen, because the
> sched_move_task() attach will have updated cfs_rq's last_load_update
> time before attach, and in turn the attach will have set the entity's
> last_load_update stamp.
>
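
With the fix the ordering is inverted: the cfs_rq clock is brought up
to date first, so the attach copies a nonzero stamp into the entity
and the enqueue-time migrated test no longer fires. A simplified
sketch of that invariant, using hypothetical names rather than the
kernel API:

	/* illustrative type only -- not the kernel's struct sched_avg */
	struct avg { unsigned long long last_load_update; };

	static void attach(struct avg *cfs_rq, struct avg *se,
			   unsigned long long now)
	{
		cfs_rq->last_load_update = now;	/* update_cfs_rq_load_avg() */
		/* stamp is nonzero: no second attach at enqueue time */
		se->last_load_update = cfs_rq->last_load_update;
	}
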
> Note that there is a further problem with sched_move_task() calling
> detach on a task that hasn't yet been attached; this will be taken
> care of in a subsequent patch.

This patch fixes the double attach.
Tested-by: Vincent Guittot <vincent.guittot@...aro.org>

>
> Cc: Yuyang Du <yuyang.du@...el.com>
> Reported-by: Vincent Guittot <vincent.guittot@...aro.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  kernel/sched/fair.c |    4 ++++
>  1 file changed, 4 insertions(+)
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8366,6 +8366,7 @@ static void detach_task_cfs_rq(struct ta
>  {
>         struct sched_entity *se = &p->se;
>         struct cfs_rq *cfs_rq = cfs_rq_of(se);
> +       u64 now = cfs_rq_clock_task(cfs_rq);
>
>         if (!vruntime_normalized(p)) {
>                 /*
> @@ -8377,6 +8378,7 @@ static void detach_task_cfs_rq(struct ta
>         }
>
>         /* Catch up with the cfs_rq and remove our load when we leave */
> +       update_cfs_rq_load_avg(now, cfs_rq, false);
>         detach_entity_load_avg(cfs_rq, se);
>  }
>
> @@ -8384,6 +8386,7 @@ static void attach_task_cfs_rq(struct ta
>  {
>         struct sched_entity *se = &p->se;
>         struct cfs_rq *cfs_rq = cfs_rq_of(se);
> +       u64 now = cfs_rq_clock_task(cfs_rq);
>
>  #ifdef CONFIG_FAIR_GROUP_SCHED
>         /*
> @@ -8394,6 +8397,7 @@ static void attach_task_cfs_rq(struct ta
>  #endif
>
>         /* Synchronize task with its cfs_rq */
> +       update_cfs_rq_load_avg(now, cfs_rq, false);
>         attach_entity_load_avg(cfs_rq, se);
>
>         if (!vruntime_normalized(p))
>
>
