Message-ID: <f8a75f47-0f7e-14cc-adf4-2854e235b26e@arm.com>
Date: Mon, 13 Feb 2023 13:44:20 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ricardo Neri <ricardo.neri@...el.com>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
Ben Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Steven Rostedt <rostedt@...dmis.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Valentin Schneider <vschneid@...hat.com>,
Ionela Voinescu <ionela.voinescu@....com>, x86@...nel.org,
linux-kernel@...r.kernel.org, "Tim C . Chen" <tim.c.chen@...el.com>
Subject: Re: [PATCH v3 07/10] sched/fair: Do not even the number of busy CPUs
via asym_packing
On 07/02/2023 05:58, Ricardo Neri wrote:
[...]
> @@ -9269,33 +9264,11 @@ static bool asym_smt_can_pull_tasks(int dst_cpu, struct sd_lb_stats *sds,
>  				    struct sched_group *sg)
>  {
>  #ifdef CONFIG_SCHED_SMT
> -	bool local_is_smt;
>  	int sg_busy_cpus;
> 
> -	local_is_smt = sds->local->flags & SD_SHARE_CPUCAPACITY;
>  	sg_busy_cpus = sgs->group_weight - sgs->idle_cpus;
> 
> -	if (!local_is_smt) {
> -		/*
> -		 * If we are here, @dst_cpu is idle and does not have SMT
> -		 * siblings. Pull tasks if candidate group has two or more
> -		 * busy CPUs.
> -		 */
> -		if (sg_busy_cpus >= 2) /* implies sg_is_smt */
> -			return true;
> -
> -		/*
> -		 * @dst_cpu does not have SMT siblings. @sg may have SMT
> -		 * siblings and only one is busy. In such case, @dst_cpu
> -		 * can help if it has higher priority and is idle (i.e.,
> -		 * it has no running tasks).
> -		 */
> -		return sched_asym_prefer(dst_cpu, sg->asym_prefer_cpu);
> -	}
> -
>  	/*
> -	 * @dst_cpu has SMT siblings and are also idle.
> -	 *
>  	 * If the difference in the number of busy CPUs is two or more, let
>  	 * find_busiest_group() take care of it. We only care if @sg has
>  	 * exactly one busy CPU. This covers SMT and non-SMT sched groups.

Can't this be made lighter by removing asym_smt_can_pull_tasks() and
moving the logic that gates the call to sched_asym_prefer() directly
into sched_asym()?

Not sure if we need the #ifdef CONFIG_SCHED_SMT here, since it's all
guarded by `flags & SD_SHARE_CPUCAPACITY` already, and that flag is only
set under CONFIG_SCHED_SMT.
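
AFAICS the only place SD_SHARE_CPUCAPACITY gets attached to a topology
level is via the SMT flags callbacks, e.g. cpu_smt_flags() in
include/linux/sched/topology.h, which lives under the same #ifdef
(quoting from memory, so worth double-checking):

#ifdef CONFIG_SCHED_SMT
static inline int cpu_smt_flags(void)
{
	return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
}
#endif

So sched_asym() could then look something like this:
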
static inline bool
sched_asym(struct lb_env *env, struct sd_lb_stats *sds,
	   struct sg_lb_stats *sgs, struct sched_group *group)
{
	bool local_is_smt = sds->local->flags & SD_SHARE_CPUCAPACITY;

	if (local_is_smt && !is_core_idle(env->dst_cpu))
		return false;

	if (local_is_smt || (group->flags & SD_SHARE_CPUCAPACITY)) {
		int sg_busy_cpus = sgs->group_weight - sgs->idle_cpus;

		if (sg_busy_cpus != 1)
			return false;
	}

	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
}
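
The existing call site in update_sg_lb_stats() should then work
unchanged, i.e. roughly this (again from memory, the exact condition
may differ slightly):

	/* Check if dst CPU is idle and preferred to this group */
	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
	    sched_asym(env, sds, sgs, group)) {
		sgs->group_asym_packing = 1;
	}
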
[...]