Message-ID: <5a5381cd-813d-7cef-9948-01c3e5e910ef@arm.com>
Date: Tue, 29 Mar 2022 20:55:08 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Phil Auld <pauld@...hat.com>
Cc: linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Peter Zijlstra <peterz@...radead.org>,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH] arch/arm64: Fix topology initialization for core scheduling
On 29/03/2022 17:20, Phil Auld wrote:
> On Tue, Mar 29, 2022 at 04:02:22PM +0200 Dietmar Eggemann wrote:
>> On 22/03/2022 17:03, Phil Auld wrote:
[...]
>> I assume this is for a machine which relies on MPIDR-based setup
>> (package_id == -1)? I.e. it doesn't have proper ACPI/(DT) data for
>> topology setup.
>
> Yes, that's my understanding. No PPTT.
>
>>
>> Tried on a ThunderX2 by disabling parse_acpi_topology() but then I end
>> up with a machine w/o SMT, so `stress-ng --prctl N` doesn't show this issue.
>>
>> Which machine were you using?
>
> This instance is an HPE Apollo 70 set to smt-4. I believe it's ThunderX2
> chips.
>
> ARM (CN9980-2200LG4077-Y21-G)
I'm using the same processor, just with ACPI/PPTT.
# sudo dmidecode -t 4 | grep "Part Number"
Part Number: CN9980-2200LG4077-21-Y-G
Part Number: CN9980-2200LG4077-21-Y-G
# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings
0,32,64,96
# cat /sys/kernel/debug/sched/domains/cpu0/domain*/name
SMT
MC
NUMA
But no matter whether I disable parse_acpi_topology() or just force
`cpu_topology[cpu].package_id = -1` in this function, I always end up with:
# cat /sys/kernel/debug/sched/domains/cpu0/domain*/name
MC
NUMA
# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0
so no SMT sched domain. The MPIDR-based topology fallback code in
store_cpu_topology() forces `cpuid_topo->thread_id = -1`.
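For reference, that fallback path looks roughly like this (paraphrased from
arch/arm64/kernel/topology.c as I remember it, so details may differ on your
tree):

        void store_cpu_topology(unsigned int cpuid)
        {
                struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
                u64 mpidr;

                /* Already populated via ACPI/DT, nothing to do */
                if (cpuid_topo->package_id != -1)
                        goto topology_populated;

                mpidr = read_cpuid_mpidr();

                /* Uniprocessor systems can rely on default topology values */
                if (mpidr & MPIDR_UP_BITMASK)
                        return;

                /*
                 * MPIDR affinity levels are not trusted to describe the real
                 * topology, so the fallback treats every CPU as a
                 * single-thread core, i.e. no SMT level is ever created here.
                 */
                cpuid_topo->thread_id  = -1;
                cpuid_topo->core_id    = cpuid;
                cpuid_topo->package_id = cpu_to_node(cpuid);

        topology_populated:
                update_siblings_masks(cpuid);
        }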
IMHO this is why I don't see this issue on my machine when running:
root@...-apollo7007:~# stress-ng --prctl 256 -t 60
stress-ng: info: [2388042] dispatching hogs: 256 prctl
Is there something I'm missing in my setup to provoke this issue?
[...]