Message-ID: <20180124113342.GD30677@codeaurora.org>
Date: Wed, 24 Jan 2018 17:03:42 +0530
From: Pavan Kondeti <pkondeti@...eaurora.org>
To: Patrick Bellasi <patrick.bellasi@....com>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Paul Turner <pjt@...gle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Todd Kjos <tkjos@...roid.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>, pkondeti@...eaurora.org
Subject: Re: [PATCH v3 2/3] sched/fair: use util_est in LB and WU paths
Hi Patrick,
On Tue, Jan 23, 2018 at 06:08:46PM +0000, Patrick Bellasi wrote:
> static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
> {
> - unsigned long util, capacity;
> + long util, util_est;
>
> /* Task has no contribution or is new */
> if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
> - return cpu_util(cpu);
> + return cpu_util_est(cpu);
>
> - capacity = capacity_orig_of(cpu);
> - util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
> + /* Discount task's blocked util from CPU's util */
> + util = cpu_util(cpu) - task_util(p);
> + util = max(util, 0L);
>
> - return (util >= capacity) ? capacity : util;
> + if (!sched_feat(UTIL_EST))
> + return util;
At first, it is not clear to me why you are not clamping the utilization to
the CPU's original capacity. It looks like the clamping is no longer needed
after commit f453ae2200b0 ("sched/fair: Consider RT/IRQ pressure in
capacity_spare_wake()"). Maybe the removal of the clamping part should go in
a separate patch?
Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.