Message-ID: <f6bf04e8-3007-4a44-86d8-2cc671c85247@amd.com>
Date: Wed, 5 Feb 2025 15:18:24 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Naman Jain <namjain@...ux.microsoft.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>
CC: <stable@...r.kernel.org>, <linux-kernel@...r.kernel.org>, Steve Wahl
<steve.wahl@....com>, Saurabh Singh Sengar <ssengar@...ux.microsoft.com>,
<srivatsa@...il.mit.edu>, Michael Kelley <mhklinux@...look.com>
Subject: Re: [PATCH v3] sched/topology: Enable topology_span_sane check only
for debug builds
Hello all,
On 2/3/2025 5:17 PM, Naman Jain wrote:
> [..snip..]
>
> Adding a link to the other patch which is under review.
> https://lore.kernel.org/lkml/20241031200431.182443-1-steve.wahl@hpe.com/
> Above patch tries to optimize the topology sanity check, whereas this
> patch makes it optional. We believe both patches can coexist, as even
> with optimization, there will still be some performance overhead for
> this check.
I would like to discuss this in parallel here. Going back to the original
problem highlighted in [1], topology_span_sane() came about as a result
of how drivers/base/arch_topology.c computed the cpu_coregroup_mask().
[1] https://lore.kernel.org/all/1577088979-8545-1-git-send-email-prime.zeng@hisilicon.com/
The originally described problematic topology is as follows:
**************************
NUMA: 0-2, 3-7
core_siblings: 0-3, 4-7
**************************
with the problematic bit in the handling being:
const struct cpumask *cpu_coregroup_mask(int cpu)
{
        const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
        ...
        if (last_level_cache_is_valid(cpu)) {
                /* If the llc_sibling is a subset of node, return llc_sibling */
                if (cpumask_subset(&cpu_topology[cpu].llc_sibling, core_mask))
                        core_mask = &cpu_topology[cpu].llc_sibling;
                /* else the core_mask remains cpumask_of_node() */
        }
        ...
        return core_mask;
}
For CPU3, the llc_sibling 0-3 is not a subset of the node mask 3-7, and
the fallback is to use 3-7. For CPUs 4-7, the llc_sibling 4-7 is a subset
of the node mask 3-7, and the core mask is returned as 4-7. CPU3's span
(3-7) and CPU4's span (4-7) thus partially overlap, which is exactly the
inconsistency topology_span_sane() was added to catch.
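
For reference, a simplified sketch of the existing check in
kernel/sched/topology.c (from my reading of the current code; details
trimmed) that flags exactly this kind of partial overlap:

static bool topology_span_sane(struct sched_domain_topology_level *tl,
                               const struct cpumask *cpu_map, int cpu)
{
        int i = cpu + 1;

        /* NUMA levels are allowed to overlap */
        if (tl->flags & SDTL_OVERLAP)
                return true;

        /*
         * Non-NUMA levels must be either completely equal or completely
         * disjoint; a partial overlap like 3-7 vs 4-7 above breaks the
         * sched_group lists.
         */
        for_each_cpu_from(i, cpu_map) {
                if (!cpumask_equal(tl->mask(cpu), tl->mask(i)) &&
                    cpumask_intersects(tl->mask(cpu), tl->mask(i)))
                        return false;
        }

        return true;
}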
In the case of x86 (and perhaps other architectures too), the arch/x86
bits ensure that this inconsistency never happens for !NUMA domains by
using the topology IDs. If a set of IDs match between two CPUs, the CPUs
are set in each other's per-CPU topology masks (see the link_mask() usage
and the match_*() functions in arch/x86/kernel/smpboot.c).
If the set of IDs match with one CPU, it should match with all the other
CPUs set in the cpumask for a given topology level. If it doesn't match
with one, it will not match with any other CPU in the cpumask either.
The cpumasks of two CPUs can therefore only be either equal or disjoint
at any given level. Steve's optimization builds on this to check the
cpumasks of whole sets of CPUs at a time rather than comparing CPU pairs.
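
To illustrate, the linking is roughly of the following shape (a
trimmed-down sketch of set_cpu_sibling_map() and the link_mask() helper
in arch/x86/kernel/smpboot.c; the other match_*() levels and corner
cases are elided):

#define link_mask(mfunc, cpu1, cpu2)                            \
do {                                                            \
        cpumask_set_cpu((cpu1), mfunc(cpu2));                   \
        cpumask_set_cpu((cpu2), mfunc(cpu1));                   \
} while (0)

void set_cpu_sibling_map(int cpu)
{
        struct cpuinfo_x86 *c = &cpu_data(cpu), *o;
        int i;

        for_each_cpu(i, cpu_sibling_setup_mask) {
                o = &cpu_data(i);

                /* Matching IDs => CPUs land in each other's masks */
                if ((i == cpu) || match_smt(c, o))
                        link_mask(topology_sibling_cpumask, cpu, i);
                if ((i == cpu) || match_llc(c, o))
                        link_mask(cpu_llc_shared_mask, cpu, i);
        }
}

As long as each match_*() behaves as an equivalence relation over the
IDs, this symmetric linking partitions the CPUs, so any two masks at a
given level are either equal or disjoint.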
Have there been any reports of an x86 system / VM where
topology_span_sane() was tripped? Looking at the implementation, it does
not seem possible (at least to my eyes), with one exception: AMD Family
0x15 processors, which set "cu_id", where match_smt() will look at the
cu_id if the core_id doesn't match between two CPUs. It may so happen
that the core ID matches with one set of CPUs and the cu_id matches with
another set of CPUs if the information from CPUID is faulty.
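
For reference, the cu_id fallback I mean is roughly the following (a
sketch from memory of the TOPOEXT path in match_smt(); field names as in
recent kernels, surrounding package/die checks trimmed):

        /* Primary match: same core ID */
        if (c->topo.core_id == o->topo.core_id)
                return topology_sane(c, o, "smt");

        /* Fallback: compute-unit ID on Fam 0x15 */
        if ((c->topo.cu_id != 0xff) &&
            (o->topo.cu_id != 0xff) &&
            (c->topo.cu_id == o->topo.cu_id))
                return topology_sane(c, o, "smt");

Since two different IDs can decide the match, transitivity is no longer
guaranteed with faulty CPUID data, and the resulting sibling masks could
partially overlap.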
What I'm getting at is that the arch-specific topology parsing code can
set an "SDTL_ARCH_VERIFIED" flag indicating that the arch-specific bits
have verified that the cpumasks are either equal or disjoint, and since
sched_debug() is "false" by default, topology_span_sane() can bail out
with:

        if (!sched_debug() && (tl->flags & SDTL_ARCH_VERIFIED))
                return true;
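
A hypothetical wiring of the flag could look like the below, assuming
SDTL_ARCH_VERIFIED gets defined next to SDTL_OVERLAP and that the arch
marks the levels it has verified (names and placement are mine, just to
sketch the idea):

/* Hypothetical flag, next to SDTL_OVERLAP */
#define SDTL_ARCH_VERIFIED      0x02

/* Hypothetical: x86 marking a level it has already verified */
static struct sched_domain_topology_level x86_topology[] = {
#ifdef CONFIG_SCHED_SMT
        { cpu_smt_mask, cpu_smt_flags, .flags = SDTL_ARCH_VERIFIED,
          SD_INIT_NAME(SMT) },
#endif
        /* ... MC / PKG levels likewise ... */
        { NULL, },
};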
In case the arch-specific parsing was wrong, "sched_verbose" can always
be used to double-check the topology, and for the archs that do require
this sanity check, Steve's optimized version of topology_span_sane() can
be run to be sure of the sanity.
All this justification is in case folks want to keep
topology_span_sane() around; but if no one cares, Naman and Saurabh's
approach works as intended.
--
Thanks and Regards,
Prateek
>
> ---
> kernel/sched/topology.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index c49aea8c1025..b030c1a2121f 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2359,6 +2359,13 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
>  {
>          int i = cpu + 1;
> 
> +        /* Skip the topology sanity check for non-debug, as it is a time-consuming operation */
> +        if (!sched_debug()) {
> +                pr_info_once("%s: Skipping topology span sanity check. Use `sched_verbose` boot parameter to enable it.\n",
> +                             __func__);
> +                return true;
> +        }
> +
>          /* NUMA levels are allowed to overlap */
>          if (tl->flags & SDTL_OVERLAP)
>                  return true;
>
> base-commit: 00f3246adeeacbda0bd0b303604e46eb59c32e6e