Message-ID: <xhsmhy159mz0g.mognet@vschneid-thinkpadt14sgen2i.remote.csb>
Date: Tue, 06 Aug 2024 17:50:23 +0200
From: Valentin Schneider <vschneid@...hat.com>
To: Yury Norov <yury.norov@...il.com>, linux-kernel@...r.kernel.org
Cc: Yury Norov <yury.norov@...il.com>, Christophe JAILLET
<christophe.jaillet@...adoo.fr>, Leonardo Bras <leobras@...hat.com>, Ingo
Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, Juri
Lelli <juri.lelli@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel
Gorman <mgorman@...e.de>
Subject: Re: [PATCH 2/2] sched/topology: optimize topology_span_sane()
On 02/08/24 10:57, Yury Norov wrote:
> The function may call cpumask_equal with tl->mask(cpu) == tl->mask(i),
> even when cpu != i.
For which architecture have you observed this? AFAIA, all implementations
of tl->sched_domain_mask_f are built on a per-CPU cpumask, so distinct CPUs
get distinct mask objects (and thus distinct pointers).
e.g. for the default topology, we have:
cpu_smt_mask() -> topology_sibling_cpumask()
which is implemented as:
arch/loongarch/include/asm/topology.h:35:#define topology_sibling_cpumask(cpu) (&cpu_sibling_map[cpu])
arch/mips/include/asm/topology.h:18:#define topology_sibling_cpumask(cpu) (&cpu_sibling_map[cpu])
arch/powerpc/include/asm/topology.h:139:#define topology_sibling_cpumask(cpu) (per_cpu(cpu_sibling_map, cpu))
arch/s390/include/asm/topology.h:31:#define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_mask)
arch/sparc/include/asm/topology_64.h:50:#define topology_sibling_cpumask(cpu) (&per_cpu(cpu_sibling_map, cpu))
arch/x86/include/asm/topology.h:186:#define topology_sibling_cpumask(cpu) (per_cpu(cpu_sibling_map, cpu))
include/linux/arch_topology.h:91:#define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
include/linux/topology.h:218:#define topology_sibling_cpumask(cpu) cpumask_of(cpu)
and ditto for cpu_coregroup_mask() & cpu_cpu_mask().