Message-ID: <ZuW_0fMfPSix4qqX@yury-ThinkPad>
Date: Sat, 14 Sep 2024 09:54:41 -0700
From: Yury Norov <yury.norov@...il.com>
To: linux-kernel@...r.kernel.org,
Christophe JAILLET <christophe.jaillet@...adoo.fr>
Cc: Chen Yu <yu.c.chen@...el.com>, Leonardo Bras <leobras@...hat.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH v3 0/3] sched/topology: optimize topology_span_sane()
Ping?
On Mon, Sep 02, 2024 at 11:36:04AM -0700, Yury Norov wrote:
> The function may call cpumask_equal() with tl->mask(cpu) == tl->mask(i),
> even when cpu != i. In such a case, cpumask_equal() always returns
> true, and we can proceed to the next iteration immediately.
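>
> For illustration, here is a minimal sketch of the idea; the surrounding
> loop is shown as in the current topology_span_sane(), and the actual
> patches differ in details:
>
>	for_each_cpu(i, cpu_map) {
>		if (i == cpu)
>			continue;
>
>		/* Same pointer implies same mask: cpumask_equal() is trivially true */
>		if (tl->mask(i) == tl->mask(cpu))
>			continue;
>
>		if (!cpumask_equal(tl->mask(cpu), tl->mask(i)) &&
>		    cpumask_intersects(tl->mask(cpu), tl->mask(i)))
>			return false;
>	}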
>
> Valentin Schneider commented on this:
>
> PKG can potentially hit that condition, and so can any
> sched_domain_mask_f that relies on the node masks...
>
> I'm thinking ideally we should have checks in place to
> ensure all node_to_cpumask_map[] masks are disjoint,
> then we could entirely skip the levels that use these
> masks in topology_span_sane(), but there's unfortunately
> no nice way to flag them... Also there would be cases
> where there's no real difference between PKG and NODE
> other than NODE is still based on a per-cpu cpumask and
> PKG isn't, so I don't see a nicer way to go about this.
>
> v1: https://lore.kernel.org/lkml/ZrJk00cmVaUIAr4G@yury-ThinkPad/T/
> v2: https://lkml.org/lkml/2024/8/7/1299
> v3:
> - add topology_cpumask_equal() helper in #3;
> - re-use 'cpu' as an iterator in the for_each_cpu() loop;
> - add proper versioning for all patches.
>
> Yury Norov (3):
> sched/topology: pre-compute topology_span_sane() loop params
> sched/topology: optimize topology_span_sane()
> sched/topology: reorganize topology_span_sane() checking order
>
> kernel/sched/topology.c | 29 +++++++++++++++++++++++++----
> 1 file changed, 25 insertions(+), 4 deletions(-)
>
> --
> 2.43.0