Message-ID: <CAEU1=P=EWa78hM0Wha=38qC7AqqVzhWahmhHNTrAco=nG=Ou9w@mail.gmail.com>
Date: Tue, 29 Aug 2017 10:15:58 +0530
From: Pavan Kondeti <pkondeti@...eaurora.org>
To: Patrick Bellasi <patrick.bellasi@....com>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-pm@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Paul Turner <pjt@...gle.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
John Stultz <john.stultz@...aro.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Juri Lelli <juri.lelli@....com>,
Tim Murray <timmurray@...gle.com>,
Todd Kjos <tkjos@...roid.com>,
Andres Oportus <andresoportus@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
Viresh Kumar <viresh.kumar@...aro.org>
Subject: Re: [RFC 2/3] sched/fair: use util_est in LB
On Fri, Aug 25, 2017 at 3:50 PM, Patrick Bellasi
<patrick.bellasi@....com> wrote:
> When the scheduler looks at the CPU utilization, the current PELT value
> for a CPU is returned straight away. In certain scenarios this can have
> undesired side effects on task placement.
>
<snip>
> +/**
> + * cpu_util_est: estimated utilization for the specified CPU
> + * @cpu: the CPU to get the estimated utilization for
> + *
> + * The estimated utilization of a CPU is defined to be the maximum between its
> + * PELT's utilization and the sum of the estimated utilization of the tasks
> + * currently RUNNABLE on that CPU.
> + *
> + * This allows us to properly represent the expected utilization of a CPU which
> + * has just got a big task running after a long sleep period. At the same time
> + * however it preserves the benefits of the "blocked load" in describing the
> + * potential for other tasks waking up on the same CPU.
> + *
> + * Return: the estimated utilization for the specified CPU
> + */
> +static inline unsigned long cpu_util_est(int cpu)
> +{
> +	struct sched_avg *sa = &cpu_rq(cpu)->cfs.avg;
> +	unsigned long util = cpu_util(cpu);
> +
> +	if (!sched_feat(UTIL_EST))
> +		return util;
> +
> +	return max(util, util_est(sa, UTIL_EST_LAST));
> +}
> +
> static inline int task_util(struct task_struct *p)
> {
> 	return p->se.avg.util_avg;
> @@ -6007,11 +6033,19 @@ static int cpu_util_wake(int cpu, struct task_struct *p)
>
> 	/* Task has no contribution or is new */
> 	if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
> -		return cpu_util(cpu);
> +		return cpu_util_est(cpu);
>
> 	capacity = capacity_orig_of(cpu);
> 	util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
>
> +	/*
> +	 * Estimated utilization tracks only tasks already enqueued, but still
> +	 * sometimes can return a bigger value than PELT, for example when the
> +	 * blocked load is negligible wrt the estimated utilization of the
> +	 * already enqueued tasks.
> +	 */
> +	util = max_t(long, util, cpu_util_est(cpu));
> +
We are supposed to discount the task's util from its CPU, but
cpu_util_est() can potentially return cpu_util(), which still includes
the task's utilization, so the discount is lost.
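
To make that concrete, here is a rough, untested sketch (not from the
patch) of one way to keep the discount effective: clamp against the
util_est sum of the enqueued tasks directly, rather than against
cpu_util_est(), so the cpu_util() leg does not fold the waking task's
blocked contribution back in. util_est() and UTIL_EST_LAST are taken
from your patch; the rest mirrors the current cpu_util_wake() and is
only illustrative:

static int cpu_util_wake(int cpu, struct task_struct *p)
{
	unsigned long capacity, util;

	/* Task has no contribution or is new */
	if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
		return cpu_util_est(cpu);

	capacity = capacity_orig_of(cpu);

	/* Discount the waking task from the PELT utilization */
	util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);

	/*
	 * Clamp against the estimated utilization of the tasks that are
	 * actually enqueued (the waking task is not among them), instead
	 * of against cpu_util_est(), whose cpu_util() leg would re-add
	 * the waking task's blocked contribution.
	 */
	if (sched_feat(UTIL_EST))
		util = max(util,
			   util_est(&cpu_rq(cpu)->cfs.avg, UTIL_EST_LAST));

	return (util >= capacity) ? capacity : util;
}

A sleeping task that is waking up is not in the rq's util_est sum, so
clamping against that sum alone should not re-introduce the value we
just subtracted.
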
Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project