Message-ID: <c1dba90d-d464-9286-94e3-e399f0b71281@arm.com>
Date: Fri, 27 Apr 2018 10:30:39 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Viresh Kumar <viresh.kumar@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] sched/fair: Avoid calling sync_entity_load_avg() unnecessarily
Hi Viresh,
On 04/26/2018 12:30 PM, Viresh Kumar wrote:
> Call sync_entity_load_avg() directly from find_idlest_cpu() instead of
> select_task_rq_fair(), as that's where we need to use the task's
> utilization value. And call sync_entity_load_avg() only after making
> sure the sched domain spans at least one of the task's allowed CPUs.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@...aro.org>
The patch looks correct to me, but we also want the waking task synced
against its previous rq for EAS, i.e. for find_energy_efficient_cpu(),
which will sit next to find_idlest_cpu():

https://marc.info/?l=linux-kernel&m=152302907327168&w=2

The comment on top of the if condition would have to be changed then,
though. I would suggest we leave the call to sync_entity_load_avg() in
the slow path of select_task_rq_fair() so that we're not forced to call
it in find_energy_efficient_cpu().
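
Roughly what I have in mind (only a sketch; energy_aware() stands in
for whatever EAS enable check that series ends up using, and the exact
parameters of find_energy_efficient_cpu() may differ from what is shown
here):

	if (unlikely(sd)) {
		/* Slow path */

		/*
		 * Sync the task's util to prev_cpu's last_update_time once
		 * here, so that both find_idlest_cpu() and
		 * find_energy_efficient_cpu() can rely on it.
		 */
		if (!(sd_flag & SD_BALANCE_FORK))
			sync_entity_load_avg(&p->se);

		if (energy_aware())
			new_cpu = find_energy_efficient_cpu(sd, p, prev_cpu);
		else
			new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
	}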
> ---
> kernel/sched/fair.c | 16 +++++++---------
> 1 file changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 84fc74ddbd4b..5b1b4f91f132 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6199,6 +6199,13 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
> return prev_cpu;
>
> + /*
> + * We need task's util for capacity_spare_wake, sync it up to prev_cpu's
> + * last_update_time.
> + */
> + if (!(sd_flag & SD_BALANCE_FORK))
> + sync_entity_load_avg(&p->se);
> +
> while (sd) {
> struct sched_group *group;
> struct sched_domain *tmp;
> @@ -6651,15 +6658,6 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>
> if (unlikely(sd)) {
> /* Slow path */
> -
> - /*
> - * We're going to need the task's util for capacity_spare_wake
> - * in find_idlest_group. Sync it up to prev_cpu's
> - * last_update_time.
> - */
> - if (!(sd_flag & SD_BALANCE_FORK))
> - sync_entity_load_avg(&p->se);
> -
> new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> /* Fast path */
>