Message-ID: <32e0b995-eb81-42bf-904b-225a3b7c0e87@linux.ibm.com>
Date: Wed, 23 Oct 2024 20:46:53 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: Steve Wahl <steve.wahl@....com>, Valentin Schneider <vschneid@...hat.com>
Cc: Russ Anderson <rja@....com>, Dimitri Sivanich <sivanich@....com>,
Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/topology: improve topology_span_sane speed
On 10/10/24 21:21, Steve Wahl wrote:
> Use a different approach to topology_span_sane(), which checks for the
> same constraint of no partial overlaps between any two CPU sets for
> non-NUMA topology levels, but does so in a way that is O(N) rather
> than O(N^2).
>
> Instead of comparing with all other masks to detect collisions, keep
> one mask that includes all CPUs seen so far and detect collisions with
> a single cpumask_intersects test.
>
> If the current mask has no collisions with previously seen masks, it
> should be a new mask, which can be uniquely identified by the lowest
> bit set in this mask. Keep a pointer to this mask for future
> reference (in an array indexed by the lowest bit set), and add the
> CPUs in this mask to the list of those seen.
>
> If the current mask does collide with previously seen masks, it should
> be exactly equal to a mask seen before, looked up in the same array
> indexed by the lowest bit set in the mask, a single comparison.
>
> Move the topology_span_sane() check out of the existing topology level
> loop and give it its own loop, so that the array allocation can be done
> only once and shared across levels.
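
(As I read it, the new check works roughly like the sketch below. This
is a simplified sketch of my understanding, not the actual patch code:
the span_sane_sketch() name is made up, the allocations happen per call
here rather than being hoisted out and shared across levels as the
patch describes, and allocation-failure policy is glossed over.)

static bool span_sane_sketch(struct sched_domain_topology_level *tl,
                             const struct cpumask *cpu_map)
{
        const struct cpumask **masks;
        cpumask_var_t covered;
        bool ret = true;
        int cpu, id;

        /* masks[] remembers each distinct mask, keyed by its lowest set
         * bit; covered accumulates every CPU seen so far. */
        masks = kcalloc(nr_cpu_ids, sizeof(*masks), GFP_KERNEL);
        if (!masks)
                return true;
        if (!zalloc_cpumask_var(&covered, GFP_KERNEL)) {
                kfree(masks);
                return true;
        }

        for_each_cpu(cpu, cpu_map) {
                id = cpumask_first(tl->mask(cpu));

                if (!cpumask_intersects(tl->mask(cpu), covered)) {
                        /* No collision: a brand-new mask. Remember it
                         * and mark its CPUs as seen. */
                        masks[id] = tl->mask(cpu);
                        cpumask_or(covered, covered, tl->mask(cpu));
                } else if (!masks[id] ||
                           !cpumask_equal(masks[id], tl->mask(cpu))) {
                        /* Collides with earlier masks but is not an
                         * exact match for the mask recorded under the
                         * same lowest bit: a partial overlap. */
                        ret = false;
                        break;
                }
        }

        free_cpumask_var(covered);
        kfree(masks);
        return ret;
}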
>
> On a system with 1920 processors (16 sockets, 60 cores, 2 threads),
> the average time to take one processor offline is reduced from 2.18
> seconds to 1.01 seconds. (Off-lining 959 of 1920 processors took
> 34m49.765s without this change, 16m10.038s with this change in place.)
>
> Signed-off-by: Steve Wahl <steve.wahl@....com>
I was going through this issue and observed the following. It looks
like the computations are repeated in the manner below. Assume an SMT4
system:
[[0 2 4 6] [1 3 5 7] ] [ [8 10 12 14] [9 11 13 15] ]
 <--SMT--> <--SMT-->     <---SMT----> <---SMT---->
<--------PKG---------> <------------PKG------------>
Let's say it happens for CPU0 at the SMT level; then it will do the
masking in the manner below:
 2: [0 2 4 6] & [0 2 4 6]
 4: [0 2 4 6] & [0 2 4 6]
 6: [0 2 4 6] & [0 2 4 6]
 1: [0 2 4 6] & [1 3 5 7]
 3: [0 2 4 6] & [1 3 5 7]
 5: [0 2 4 6] & [1 3 5 7]
 7: [0 2 4 6] & [1 3 5 7]
 8: [0 2 4 6] & [8 10 12 14]
10: [0 2 4 6] & [8 10 12 14]
12: [0 2 4 6] & [8 10 12 14]
14: [0 2 4 6] & [8 10 12 14]
 9: [0 2 4 6] & [9 11 13 15]
11: [0 2 4 6] & [9 11 13 15]
13: [0 2 4 6] & [9 11 13 15]
15: [0 2 4 6] & [9 11 13 15]
And when it happens for CPU2, it will do the exact same computation.
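
(For reference, the current check is called once per CPU and walks all
the other CPUs on each call, so every pair of identical masks gets
re-compared N times overall. Paraphrasing the existing code in
kernel/sched/topology.c, with the NUMA early-out and comments trimmed:)

static bool topology_span_sane(struct sched_domain_topology_level *tl,
                               const struct cpumask *cpu_map, int cpu)
{
        int i;

        for_each_cpu(i, cpu_map) {
                if (i == cpu)
                        continue;
                /* A partial overlap means a broken topology. */
                if (!cpumask_equal(tl->mask(cpu), tl->mask(i)) &&
                    cpumask_intersects(tl->mask(cpu), tl->mask(i)))
                        return false;
        }
        return true;
}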
Maybe that can be avoided with something like the below: do the
computation only if the CPU is the first CPU in its topology level
mask. That way each unique mask is checked exactly once, via its
lowest-numbered CPU. Not sure if it works in all scenarios; tested
very, very lightly on a Power10 system with SMT=4.
Please correct me if I got it all wrong.
------------------------------------------------------------------------
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..541631ca32bd 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2367,6 +2367,13 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
         if (tl->flags & SDTL_OVERLAP)
                 return true;
 
+        /* Do the computation only if this CPU is the first CPU in
+         * the topology level mask; the same computation would just
+         * be repeated on the other CPUs. */
+        if (cpu != cpumask_first(tl->mask(cpu))) {
+                return true;
+        }
+
         /*
          * Non-NUMA levels cannot partially overlap - they must be either
          * completely equal or completely disjoint. Otherwise we can end up
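
For context, the caller in build_sched_domains() still invokes the
check from its per-CPU loop, roughly like this (paraphrased from
kernel/sched/topology.c, surrounding code elided):

        for_each_cpu(i, cpu_map) {
                for_each_sd_topology(tl) {
                        if (WARN_ON(!topology_span_sane(tl, cpu_map, i)))
                                goto error;
                        ...
                }
        }

so with the early return above, the pairwise scan runs only for the
first CPU of each mask rather than for every CPU.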