Message-ID: <20241106180613.GQ10375@noisy.programming.kicks-ass.net>
Date: Wed, 6 Nov 2024 19:06:13 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Yury Norov <yury.norov@...il.com>
Cc: linux-kernel@...r.kernel.org,
Christophe JAILLET <christophe.jaillet@...adoo.fr>,
Chen Yu <yu.c.chen@...el.com>, Leonardo Bras <leobras@...hat.com>,
Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH v3 0/3] sched/topology: optimize topology_span_sane()

On Mon, Sep 02, 2024 at 11:36:04AM -0700, Yury Norov wrote:
> The function may call cpumask_equal() with tl->mask(cpu) == tl->mask(i),
> even when cpu != i. In such a case, cpumask_equal() would always return
> true, and we can proceed to the next iteration immediately.
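>
> [Not part of the original cover letter: a minimal userspace sketch of
> the shortcut described above, using a simplified stand-in for the
> kernel's struct cpumask and cpumask_equal(). The names mirror the
> kernel API but the implementation here is illustrative only.]
>
> ```c
> #include <stdbool.h>
> #include <stdio.h>
> #include <string.h>
>
> /* Simplified stand-in for the kernel's struct cpumask. */
> struct cpumask { unsigned long bits[2]; };
>
> static bool cpumask_equal(const struct cpumask *a, const struct cpumask *b)
> {
> 	return memcmp(a->bits, b->bits, sizeof(a->bits)) == 0;
> }
>
> /*
>  * The shortcut: when tl->mask(cpu) and tl->mask(i) return the very
>  * same pointer, the masks are trivially equal, so the bit-by-bit
>  * comparison can be skipped entirely.
>  */
> static bool masks_equal(const struct cpumask *a, const struct cpumask *b)
> {
> 	if (a == b)		/* same object: always equal */
> 		return true;
> 	return cpumask_equal(a, b);
> }
>
> int main(void)
> {
> 	struct cpumask m = { { 0xf, 0 } };
>
> 	/* Pointer-equal path: no memcmp() needed. */
> 	printf("%d\n", masks_equal(&m, &m));
> 	return 0;
> }
> ```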
>
> Valentin Schneider shares on it:
>
> PKG can potentially hit that condition, and so can any
> sched_domain_mask_f that relies on the node masks...
>
> I'm thinking ideally we should have checks in place to
> ensure all node_to_cpumask_map[] masks are disjoint,
> then we could entirely skip the levels that use these
> masks in topology_span_sane(), but there's unfortunately
> no nice way to flag them... Also there would be cases
> where there's no real difference between PKG and NODE
> other than NODE is still based on a per-cpu cpumask and
> PKG isn't, so I don't see a nicer way to go about this.
>
> v1: https://lore.kernel.org/lkml/ZrJk00cmVaUIAr4G@yury-ThinkPad/T/
> v2: https://lkml.org/lkml/2024/8/7/1299
> v3:
> - add topology_cpumask_equal() helper in #3;
> - re-use 'cpu' as an iterator in the for_each_cpu() loop;
> - add proper versioning for all patches.
>
> Yury Norov (3):
> sched/topology: pre-compute topology_span_sane() loop params
> sched/topology: optimize topology_span_sane()
> sched/topology: reorganize topology_span_sane() checking order

Why are we doing this? Subject says optimize, but I see no performance
numbers?