Message-ID: <ef199f2a-f970-4c86-a3f2-ddb6ad7abc96@linux.ibm.com>
Date: Tue, 11 Feb 2025 11:22:02 +0530
From: Shrikanth Hegde <sshegde@...ux.ibm.com>
To: K Prateek Nayak <kprateek.nayak@....com>,
Naman Jain <namjain@...ux.microsoft.com>
Cc: stable@...r.kernel.org, linux-kernel@...r.kernel.org,
Steve Wahl <steve.wahl@....com>,
Saurabh Singh Sengar <ssengar@...ux.microsoft.com>,
srivatsa@...il.mit.edu, Michael Kelley <mhklinux@...look.com>,
Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH v3] sched/topology: Enable topology_span_sane check only
for debug builds
On 2/5/25 15:18, K Prateek Nayak wrote:
> Hello all,
>
> On 2/3/2025 5:17 PM, Naman Jain wrote:
>> [..snip..]
>>
>> Adding a link to the other patch which is under review.
>> https://lore.kernel.org/lkml/20241031200431.182443-1-steve.wahl@hpe.com/
>> Above patch tries to optimize the topology sanity check, whereas this
>> patch makes it optional. We believe both patches can coexist, as even
>> with optimization, there will still be some performance overhead for
>> this check.
>
> I would like to discuss this in parallel here. Going back to the original
> problem highlighted in [1], topology_span_sane() came about as a result
> of how drivers/base/arch_topology.c computed cpu_coregroup_mask().
>
> [1] https://lore.kernel.org/all/1577088979-8545-1-git-send-email-prime.zeng@...ilicon.com/
>
> Originally described problematic topology is as follows:
>
> **************************
> NUMA: 0-2, 3-7
> core_siblings: 0-3, 4-7
> **************************
>
> with the problematic bit in the handling being:
>
> const struct cpumask *cpu_coregroup_mask(int cpu)
> {
>         const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
>
>         ...
>
>         if (last_level_cache_is_valid(cpu)) {
>                 /* If the llc_sibling is subset of node return llc_sibling */
>                 if (cpumask_subset(&cpu_topology[cpu].llc_sibling, core_mask))
>                         core_mask = &cpu_topology[cpu].llc_sibling;
>
>                 /* else the core_mask remains cpumask_of_node() */
>         }
>
>         ...
>
>         return core_mask;
> }
>
> For CPU3, the llc_sibling 0-3 is not a subset of the node mask 3-7, so the
> fallback is to use 3-7. For CPUs 4-7, the llc_sibling 4-7 is a subset of
> the node mask 3-7, and the core mask is returned as 4-7.
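>
> To make that fallback concrete, here is a rough userspace illustration of
> the subset check for the topology above (plain bitmask arithmetic standing
> in for the cpumask API, just to show the asymmetry between CPU3 and CPU4):
>
> #include <stdio.h>
>
> int main(void)
> {
>         unsigned int node37 = 0xF8; /* node mask for CPUs 3-7        */
>         unsigned int llc03  = 0x0F; /* llc_sibling of CPUs 0-3: 0-3  */
>         unsigned int llc47  = 0xF0; /* llc_sibling of CPUs 4-7: 4-7  */
>
>         /* cpumask_subset(a, b) is roughly "(a & ~b) == 0" */
>         printf("CPU3: llc 0-3 subset of node 3-7? %d -> core mask stays 3-7\n",
>                (llc03 & ~node37) == 0);
>         printf("CPU4: llc 4-7 subset of node 3-7? %d -> core mask becomes 4-7\n",
>                (llc47 & ~node37) == 0);
>         return 0;
> }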
>
> In case of x86 (and perhaps other archs too), the arch/x86 bits ensure
> that this inconsistency never happens for !NUMA domains by using the
> topology IDs. If a set of IDs matches between two CPUs, the CPUs are set
> in each other's per-CPU topology mask (see the link_mask() usage and the
> match_*() functions in arch/x86/kernel/smpboot.c).
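>
> Roughly, the linking works like this sketch (hypothetical userspace code,
> not the actual smpboot.c logic; the core_id values are made up purely for
> illustration):
>
> #include <stdbool.h>
> #include <stdio.h>
>
> #define NR_CPUS 8
>
> /* Made-up IDs: two SMT siblings per core. */
> static const int core_id[NR_CPUS] = { 0, 0, 1, 1, 2, 2, 3, 3 };
> static unsigned int sibling_mask[NR_CPUS]; /* bit j set => CPU j is a sibling */
>
> static bool ids_match(int a, int b)
> {
>         return core_id[a] == core_id[b]; /* stands in for match_smt()/match_*() */
> }
>
> int main(void)
> {
>         /* Like link_mask(): when the IDs match, set both CPUs in each other's mask. */
>         for (int i = 0; i < NR_CPUS; i++)
>                 for (int j = 0; j < NR_CPUS; j++)
>                         if (ids_match(i, j)) {
>                                 sibling_mask[i] |= 1u << j;
>                                 sibling_mask[j] |= 1u << i;
>                         }
>
>         for (int i = 0; i < NR_CPUS; i++)
>                 printf("CPU%d sibling mask: 0x%02x\n", i, sibling_mask[i]);
>         return 0;
> }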
>
> If the set of IDs matches with one CPU, it should match with all other
> CPUs set in the cpumask for a given topology level; if it doesn't match
> with one, it will not match with any other CPU in that cpumask either.
> The cpumasks of two CPUs can therefore only be equal or disjoint at any
> given level. Steve's optimization reverses the check accordingly,
> comparing the cpumasks of sets of CPUs instead of individual CPU pairs.
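>
> In other words, the property the per-CPU linking guarantees is roughly
> the following (sketch only, a fragment using the existing cpumask
> helpers, not something that exists in the tree today):
>
> /* Masks at a given topology level must be identical or non-overlapping. */
> static bool masks_equal_or_disjoint(const struct cpumask *a,
>                                     const struct cpumask *b)
> {
>         return cpumask_equal(a, b) || !cpumask_intersects(a, b);
> }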
>
> Have there been any reports of an x86 system / VM where
> topology_span_sane() was tripped? Looking at the implementation, it does
> not seem possible (at least to my eyes), with one exception: AMD Family
> 0x15 processors set "cu_id", and match_smt() will look at cu_id if the
> core_id doesn't match between two CPUs. If the information from CPUID is
> faulty, the core ID may happen to match one set of CPUs while the cu_id
> matches another set.
>
> What I'm getting at is that the arch-specific topology parsing code
> can set an "SDTL_ARCH_VERIFIED" flag indicating that the arch-specific
> bits have verified that the cpumasks are either equal or disjoint, and
> since sched_debug() is "false" by default, topology_span_sane() can
> bail out if:
>
> if (!sched_debug() && (tl->flags & SDTL_ARCH_VERIFIED))
>         return true;
>
It would be simpler to just use sched_debug(), no?
Since it can be enabled at runtime with "echo Y > verbose", one can turn
it on even after boot. Wouldn't that suffice to re-run
topology_span_sane() by doing a hotplug?
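
Something like this is all I mean (untested sketch; the rest of
topology_span_sane() would stay as is):

static bool topology_span_sane(struct sched_domain_topology_level *tl,
                               const struct cpumask *cpu_map, int cpu)
{
        /*
         * Only pay for the sanity check when sched_verbose is enabled;
         * flipping /sys/kernel/debug/sched/verbose and doing a CPU
         * hotplug would re-run it at runtime.
         */
        if (!sched_debug())
                return true;

        /* ... existing pairwise span check ... */
        return true;
}
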
> In case the arch-specific parsing was wrong, "sched_verbose" can always
> be used to double-check the topology, and for the archs that require
> this sanity check, Steve's optimized version of topology_span_sane()
> can be run to confirm the sanity.
>
> All this justification is in case folks want to keep
> topology_span_sane() around, but if no one cares, Naman and Saurabh's
> approach works as intended.
>