Message-ID: <xhsmhsfghcb20.mognet@vschneid.remote.csb>
Date: Wed, 11 Jan 2023 16:04:23 +0000
From: Valentin Schneider <vschneid@...hat.com>
To: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ricardo Neri <ricardo.neri@...el.com>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
Ben Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Steven Rostedt <rostedt@...dmis.org>,
Tim Chen <tim.c.chen@...ux.intel.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, "Tim C . Chen" <tim.c.chen@...el.com>
Subject: Re: [PATCH v2 1/7] sched/fair: Generalize asym_packing logic for
SMT local sched group
On 28/12/22 20:00, Ricardo Neri wrote:
> On Thu, Dec 22, 2022 at 04:55:58PM +0000, Valentin Schneider wrote:
>> Some of this is new to me - I had missed the original series introducing
>> this. However ISTM that this is conflating two concepts: SMT occupancy
>> balancing, and asym packing.
>>
>> Take the !local_is_smt :: sg_busy_cpus >= 2 :: return true; path. It does
>> not involve asym packing priorities at all. This can end up in an
>> ASYM_PACKING load balance,
>
> Yes, this is the desired result: an idle low-priority CPU should help a high-
> priority core with more than one busy SMT sibling. But yes, it does not
> relate to priorities and can be implemented differently.
>
>> which per calculate_imbalance() tries to move
>> *all* tasks to the higher priority CPU - in the case of SMT balancing,
>> we don't want to totally empty the core, just its siblings.
>
> But it will not empty the core, only one of its SMT siblings. A single
> sibling will be selected in find_busiest_queue(). The other siblings will
> be unaffected.
>
Right
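(To spell it out for my own benefit: find_busiest_queue() returns a single
rq out of the busiest group, and detach_tasks() then only pulls from that
one rq, so at most one sibling gets emptied per balance attempt.)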
>>
>> Is there an ITMT/big.LITTLE (or however x86 calls it) case that invalidates
>> the above?
>
> Please see below.
>
>>
>> Say, what's not sufficient with the below? AFAICT the only thing that isn't
>> covered is the sg_busy_cpus >= 2 thing, but IMO that's SMT balancing, not
>> asym packing - if the current calculate_imbalance() doesn't do it, it
>> should be fixed to do it.
>
> Agreed.
>
>> Looking at the
>>
>> local->group_type == group_has_spare
>>
>> case, it looks like it should DTRT.
>
> I had tried (and failed) to have find_busiest_group() handle the
> !local_is_smt :: sg_busy_cpus >= 2 case. Now I think I made it work.
>
> When the busiest group is not overloaded and
> local->group_type == group_has_spare, find_busiest_group() during an idle
> load balance evens out the number of *idle* CPUs. However, this does not
> work if the local and busiest groups have different weights. In SMT2, for
> instance, if busiest has 2 busy CPUs (i.e., 0 idle CPUs) and the
> destination CPU is idle, the difference in the number of idle CPUs is 1.
> find_busiest_group() will incorrectly goto out_balanced.
>
> This issue is very visible in Intel hybrid processors because the big cores
> have SMT but the small cores do not. It can, however, be reproduced in non-
> hybrid processors by offlining the SMT siblings of some cores.
>
I think I follow. If we're comparing two groups each spanning an SMT2 core,
then
  busiest->group_weight > 1 && local->idle_cpus <= (busiest->idle_cpus + 1)
is false if local is fully idle and busiest fully busy, but if local
becomes a non-SMT core, then that's true and we goto out_balanced.
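FWIW here's a quick userspace replay of that condition which convinced me
(just a sketch, with 'struct grp' standing in for the relevant sg_lb_stats
fields):

#include <stdio.h>
#include <stdbool.h>

struct grp { unsigned int group_weight, idle_cpus; };

/* The current idle-CPU test that sends find_busiest_group() to out_balanced */
static bool out_balanced(const struct grp *local, const struct grp *busiest)
{
	return busiest->group_weight > 1 &&
	       local->idle_cpus <= busiest->idle_cpus + 1;
}

int main(void)
{
	struct grp busiest     = { .group_weight = 2, .idle_cpus = 0 };
	struct grp smt_local   = { .group_weight = 2, .idle_cpus = 2 };
	struct grp nosmt_local = { .group_weight = 1, .idle_cpus = 1 };

	/* SMT2 local: 2 <= 0 + 1 is false -> we carry on and balance */
	printf("SMT2 local:    %d\n", out_balanced(&smt_local, &busiest));
	/* non-SMT local: 1 <= 0 + 1 is true -> we wrongly bail out */
	printf("non-SMT local: %d\n", out_balanced(&nosmt_local, &busiest));
	return 0;
}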
With that said, shouldn't SD_PREFER_SIBLING help here? cf.
	if (sds.prefer_sibling && local->group_type == group_has_spare &&
	    busiest->sum_nr_running > local->sum_nr_running + 1)
It should be set on any topology level below the NUMA ones. We do remove it
on SD_ASYM_CPUCAPACITY levels because it used to interfere with misfit
balancing (it would override the group_type); things are a bit different
since Vincent's rewrite of load_balance(), but I think we still want it off
there. I would expect it to be set on your system, though whether it plays
nice with the asymmetry is another matter :-)
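(Quick check against your hybrid scenario: busiest->sum_nr_running == 2 and
local->sum_nr_running == 0, so 2 > 0 + 1 holds and we'd force the balance.
IIRC that check also sits before the idle-CPU comparison in
find_busiest_group(), so we'd never even reach out_balanced.)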
> The problem can be fixed by instead balancing the number of *busy* CPUs,
> which is what in general we want, IMO. (When two groups have the same
> weight, it is equivalent to balancing the number of idle CPUs).
>
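(For the record, the equivalence is just algebra: with busy = weight - idle
on both sides,
  busiest_busy - local_busy =
    (busiest_weight - local_weight) + (local_idle - busiest_idle),
which collapses to the idle-CPU difference whenever the two weights match.)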
> This patch worked for me:
>
> @@ -9787,14 +9787,18 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  			lsub_positive(&nr_diff, local->sum_nr_running);
>  			env->imbalance = nr_diff;
>  		} else {
> +			unsigned int busiest_busy_cpus, local_busy_cpus;
> +
> +			busiest_busy_cpus = busiest->group_weight - busiest->idle_cpus;
> +			local_busy_cpus = local->group_weight - local->idle_cpus;
>  
>  			/*
>  			 * If there is no overload, we just want to even the number of
> -			 * idle cpus.
> +			 * busy cpus.
>  			 */
>  			env->migration_type = migrate_task;
> -			env->imbalance = max_t(long, 0,
> -					       (local->idle_cpus - busiest->idle_cpus));
> +			env->imbalance = max_t(long, 0,
> +					       (busiest_busy_cpus - local_busy_cpus));
>  		}
>  
>  #ifdef CONFIG_NUMA
>
> #ifdef CONFIG_NUMA
> @@ -9981,18 +9985,24 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>  			 */
>  			goto out_balanced;
>  
> -		if (busiest->group_weight > 1 &&
> -		    local->idle_cpus <= (busiest->idle_cpus + 1))
> -			/*
> -			 * If the busiest group is not overloaded
> -			 * and there is no imbalance between this and busiest
> -			 * group wrt idle CPUs, it is balanced. The imbalance
> -			 * becomes significant if the diff is greater than 1
> -			 * otherwise we might end up to just move the imbalance
> -			 * on another group. Of course this applies only if
> -			 * there is more than 1 CPU per group.
> -			 */
> -			goto out_balanced;
> +		if (busiest->group_weight > 1) {
> +			unsigned int local_busy_cpus, busiest_busy_cpus;
> +
> +			local_busy_cpus = local->group_weight - local->idle_cpus;
> +			busiest_busy_cpus = busiest->group_weight - busiest->idle_cpus;
> +
> +			if (busiest_busy_cpus <= local_busy_cpus + 1)
> +				/*
> +				 * If the busiest group is not overloaded
> +				 * and there is no imbalance between this and busiest
> +				 * group wrt busy CPUs, it is balanced. The imbalance
> +				 * becomes significant if the diff is greater than 1,
> +				 * otherwise we might end up just moving the imbalance
> +				 * to another group. Of course this applies only if
> +				 * there is more than 1 CPU per group.
> +				 */
> +				goto out_balanced;
> +		}
>  
>  		if (busiest->sum_h_nr_running == 1)
>  			/*
>
> With this I can remove the sg_busy_cpus >= 2 thing from asym_smt_can_pull_tasks().
>
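For completeness, the same toy replay against your busy-CPU variant (again
just a sketch mirroring the second hunk above):

#include <stdio.h>
#include <stdbool.h>

struct grp { unsigned int group_weight, idle_cpus; };

/* The proposed busy-CPU test from the second hunk */
static bool out_balanced_busy(const struct grp *local, const struct grp *busiest)
{
	unsigned int local_busy = local->group_weight - local->idle_cpus;
	unsigned int busiest_busy = busiest->group_weight - busiest->idle_cpus;

	return busiest->group_weight > 1 && busiest_busy <= local_busy + 1;
}

int main(void)
{
	/* Fully busy SMT2 busiest core vs an idle non-SMT local core */
	struct grp busiest     = { .group_weight = 2, .idle_cpus = 0 };
	struct grp nosmt_local = { .group_weight = 1, .idle_cpus = 1 };

	/* 2 busy vs 0 busy: 2 <= 0 + 1 is false -> we now do balance */
	printf("2 busy siblings: %d\n", out_balanced_busy(&nosmt_local, &busiest));

	/* 1 busy vs 0 busy: 1 <= 0 + 1 is true -> correctly left alone */
	busiest.idle_cpus = 1;
	printf("1 busy sibling:  %d\n", out_balanced_busy(&nosmt_local, &busiest));
	return 0;
}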
>>
>> ---
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 224107278471f..15eb2d3cff186 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -9176,12 +9176,15 @@ static inline bool
>>  sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
>>  	   struct sched_group *group)
>>  {
>> -	/* Only do SMT checks if either local or candidate have SMT siblings */
>> -	if ((sds->local->flags & SD_SHARE_CPUCAPACITY) ||
>> -	    (group->flags & SD_SHARE_CPUCAPACITY))
>> -		return asym_smt_can_pull_tasks(env->dst_cpu, sds, sgs, group);
>> +	/*
>> +	 * For SMT, env->idle != CPU_NOT_IDLE isn't sufficient, we need to make
>> +	 * sure the whole core is idle.
>> +	 */
>> +	if (((sds->local->flags & SD_SHARE_CPUCAPACITY) ||
>> +	     (group->flags & SD_SHARE_CPUCAPACITY)) &&
>> +	    !test_idle_cores(env->dst_cpu))
>
> But test_idle_cores() tests for *any* idle core in the highest domain with the
> SD_SHARE_PKG_RESOURCES flag. Here we are only interested in the SMT siblings
> of env->dst_cpu. If is_core_idle(env->dst_cpu) is used, then I agree with the
> change.
>
Right, I had gotten confused by test_idle_cores().
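For the archive, the amended version I had in mind would then be (a sketch
only; it assumes is_core_idle() is visible from here, which IIRC means
moving it out of its CONFIG_NUMA section, and that the tail of sched_asym()
remains the sched_asym_prefer() check):

static inline bool
sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
	   struct sched_group *group)
{
	/*
	 * For SMT, env->idle != CPU_NOT_IDLE isn't sufficient, we need to
	 * make sure every SMT sibling of env->dst_cpu is idle too.
	 */
	if (((sds->local->flags & SD_SHARE_CPUCAPACITY) ||
	     (group->flags & SD_SHARE_CPUCAPACITY)) &&
	    !is_core_idle(env->dst_cpu))
		return false;

	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
}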
> Thanks and BR,
> Ricardo