Date:   Fri, 5 May 2017 18:51:25 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Tejun Heo <tj@...nel.org>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
        Chris Mason <clm@...com>, kernel-team@...com
Subject: Re: [PATCH 3/3] sched/fair: Propagate runnable_load_avg independently
 from load_avg

On 4 May 2017 at 22:30, Tejun Heo <tj@...nel.org> wrote:

[snip]

>  /* Take into account change of load of a child task group */
>  static inline void
>  update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> @@ -3120,17 +3144,6 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq
>         /* Update parent cfs_rq load */
>         add_positive(&cfs_rq->avg.load_avg, delta);
>         cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * LOAD_AVG_MAX;
> -
> -       /*
> -        * If the sched_entity is already enqueued, we also have to update the
> -        * runnable load avg.
> -        */
> -       if (se->on_rq) {
> -               /* Update parent cfs_rq runnable_load_avg */
> -               add_positive(&cfs_rq->avg.runnable_load_avg, delta);
> -               cfs_rq->avg.runnable_load_sum =
> -                       cfs_rq->avg.runnable_load_avg * LOAD_AVG_MAX;
> -       }
>  }
>
>  static inline void set_tg_cfs_propagate(struct cfs_rq *cfs_rq)
> @@ -3152,16 +3165,16 @@ static inline int test_and_clear_tg_cfs_
>  /* Update task and its cfs_rq load average */
>  static inline int propagate_entity_load_avg(struct sched_entity *se)
>  {
> -       struct cfs_rq *cfs_rq;
> +       struct cfs_rq *cfs_rq = cfs_rq_of(se);
>
>         if (entity_is_task(se))
>                 return 0;
>
> +       update_tg_cfs_runnable(cfs_rq, se);

I wonder if update_tg_cfs_runnable() should really be called in
propagate_entity_load_avg().
propagate_entity_load_avg() is called on every update_load_avg()
because an entity can be asynchronously detached, so we must catch
these asynchronous events each time we update the load.
But enqueue/dequeue of an entity is fully synchronous, and we know
when runnable_load_avg is updated and should be propagated. So
instead of calling update_tg_cfs_runnable() on every
update_load_avg(), we could call it only in the enqueue/dequeue path
and save a lot of useless calls.
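
Something like this, as a rough illustration only (untested; it
reuses update_tg_cfs_runnable() from this patch and picks the
existing enqueue_entity() as one possible place for the call):

	static void enqueue_entity(struct cfs_rq *cfs_rq,
				   struct sched_entity *se, int flags)
	{
		/* ... existing enqueue work ... */

		/*
		 * For a group entity, the runnable_load_avg of its
		 * child cfs_rq has just changed synchronously on this
		 * enqueue, so propagate it to the parent here rather
		 * than on every update_load_avg().
		 */
		if (!entity_is_task(se))
			update_tg_cfs_runnable(cfs_rq, se);

		/* ... */
	}

with the symmetric call in the dequeue path, and the call removed
from propagate_entity_load_avg().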

> +
