Message-ID: <20160602092358.GC9340@e106622-lin>
Date: Thu, 2 Jun 2016 10:23:58 +0100
From: Juri Lelli <juri.lelli@....com>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
Vincent Guittot <vincent.guittot@...aro.org>,
Ben Segall <bsegall@...gle.com>,
Morten Rasmussen <morten.rasmussen@....com>,
Yuyang Du <yuyang.du@...el.com>
Subject: Re: [RFC PATCH 1/3] sched/fair: Aggregate task utilization only on
root cfs_rq
Hi,

minor comment below.

On 01/06/16 20:39, Dietmar Eggemann wrote:
> cpu utilization (cpu_util()) is defined as the root cfs_rq's
> cfs_rq->avg.util_avg signal, capped by the cpu's (original) capacity.
>
> With the current PELT implementation, the utilization of a task enqueued
> on or dequeued from a cfs_rq representing a task group other than the
> root task group on a cpu is not immediately propagated down to the root
> cfs_rq.
>
> This makes scheduling and cpu frequency decisions based on cpu_util()
> less accurate whenever tasks run in task groups.
>
> This patch aggregates task utilization only on the root cfs_rq, thereby
> avoiding the need to maintain utilization for a se/cfs_rq representing
> a task group other than the root task group (!entity_is_task(se) and
> &rq_of(cfs_rq)->cfs != cfs_rq).
>
> The additional if/else condition to set @update_util in
> __update_load_avg() is replaced in 'sched/fair: Change @running of
> __update_load_avg() to @update_util', which instead passes whether
> utilization has to be maintained as an argument to this function.
>
> The additional requirements for the alignment of the last_update_time of a
> se and the root cfs_rq are handled by the patch 'sched/fair: Sync se with
> root cfs_rq'.
>
> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
> ---
> kernel/sched/fair.c | 48 ++++++++++++++++++++++++++++++++++++------------
> 1 file changed, 36 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 218f8e83db73..212becd3708f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2705,6 +2705,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> u32 contrib;
> unsigned int delta_w, scaled_delta_w, decayed = 0;
> unsigned long scale_freq, scale_cpu;
> + int update_util = 0;
>
> delta = now - sa->last_update_time;
> /*
> @@ -2725,6 +2726,12 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> return 0;
> sa->last_update_time = now;
>
> + if (cfs_rq) {
> + if (&rq_of(cfs_rq)->cfs == cfs_rq)
Maybe we can wrap this sort of check in a static inline helper to improve
readability?
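
Something like the below, perhaps (a minimal sketch; the helper name is
just a suggestion, not taken from the patch set, while rq_of() already
exists in kernel/sched/sched.h):

/* True if @cfs_rq is the root cfs_rq of its cpu's runqueue. */
static inline bool cfs_rq_is_root(struct cfs_rq *cfs_rq)
{
	return &rq_of(cfs_rq)->cfs == cfs_rq;
}

The check above would then read if (cfs_rq_is_root(cfs_rq)), and its
negation covers the &rq_of(cfs_rq)->cfs != cfs_rq test from the
changelog.
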
Best,
- Juri