Message-ID: <06aa5fef-9b72-0f6d-9070-831a0c9b8db0@arm.com>
Date: Mon, 13 Jan 2020 13:22:17 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: "Zengtao (B)" <prime.zeng@...ilicon.com>,
Valentin Schneider <valentin.schneider@....com>,
Morten Rasmussen <morten.rasmussen@....com>
Cc: Sudeep Holla <sudeep.holla@....com>,
Linuxarm <linuxarm@...wei.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts with
lower layer
On 13.01.20 13:08, Zengtao (B) wrote:
>> -----Original Message-----
>> From: Valentin Schneider [mailto:valentin.schneider@....com]
>> Sent: Monday, January 13, 2020 7:17 PM
>> To: Zengtao (B); Morten Rasmussen
>> Cc: Sudeep Holla; Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
>> linux-kernel@...r.kernel.org
>> Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations
>> conflicts with lower layer
>>
>> On 13/01/2020 06:51, Zengtao (B) wrote:
>>> I have tried both; the previous one doesn't work, but this one seems
>>> to work correctly, with the warning message printed out as expected.
>>>
>>
>> Thanks for trying it out.
>>
>>> This patch is based on the premise that "non-NUMA spans shouldn't
>>> overlap", but I am not quite sure whether that is always true.
>>>
>>
>> I think this is required for get_group() to work properly. Otherwise,
>> successive get_group() calls may override (and break) the sd->groups
>> linking as you initially reported.
>>
>> In your example, for MC level we have
>>
>> tl->mask(3) == 3-7
>> tl->mask(4) == 4-7
>>
>> These partially overlap, causing the relinking of '7->3' to '7->4'. Valid
>> configurations would be:
>>
>> wholly disjoint:
>> tl->mask(3) == 0-3
>> tl->mask(4) == 4-7
>>
>> equal:
>> tl->mask(3) == 3-7
>> tl->mask(4) == 3-7
>>
>>> Anyway, Could you help to raise the new patch?
>>>
>>
>> Ideally I'd like to be able to reproduce this locally first (TBH I'd like
>> to get my first suggestion to work since it's less intrusive). Could you
>> share how you were able to trigger this? Dietmar's been trying to
>> reproduce
>> this with qemu but I don't think he's there just yet.
>
> Do you have a hardware platform with clusters? What's the hardware
> CPU topology?
I can test this with:
sudo qemu-system-aarch64 -kernel ./Image -hda ./qemu-image-aarch64.img \
  -append 'root=/dev/vda console=ttyAMA0 loglevel=8 sched_debug' \
  -smp cores=8 --nographic -m 512 -cpu cortex-a53 -machine virt \
  -numa node,cpus=0-2,nodeid=0 -numa node,cpus=3-7,nodeid=1
and
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 1eb81f113786..e941a402e5f1 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -465,6 +465,9 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if ((cpu < 4 && cpuid > 3) || (cpu > 3 && cpuid < 4))
+			continue;
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
on mainline qemu. I do need the hack in update_siblings_masks() since
the topology detection via -smp cores=X, sockets=Y doesn't work yet.
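As an aside, the "wholly disjoint or equal" rule Valentin describes above
can be modelled in a few lines of plain userspace C. This is just a sketch
for illustration, not kernel code: bitmasks stand in for cpumasks, and
cpus(), spans_ok() and the example values are mine.

/*
 * Userspace sketch of the "wholly disjoint or equal" constraint on
 * same-level topology spans. Not kernel code; cpumasks are modelled
 * as plain bitmasks over CPUs 0-7.
 */
#include <stdbool.h>
#include <stdio.h>

/* Build a mask covering CPUs first..last, e.g. cpus(3, 7) == 0xf8. */
static unsigned int cpus(int first, int last)
{
	unsigned int mask = 0;

	for (int cpu = first; cpu <= last; cpu++)
		mask |= 1u << cpu;

	return mask;
}

/* Two spans at the same topology level must not partially overlap. */
static bool spans_ok(unsigned int a, unsigned int b)
{
	return (a & b) == 0 || a == b;
}

int main(void)
{
	/* Broken case from the report: tl->mask(3) == 3-7, tl->mask(4) == 4-7. */
	printf("3-7 vs 4-7: %s\n",
	       spans_ok(cpus(3, 7), cpus(4, 7)) ? "ok" : "partial overlap");
	/* Valid: wholly disjoint. */
	printf("0-3 vs 4-7: %s\n",
	       spans_ok(cpus(0, 3), cpus(4, 7)) ? "ok" : "partial overlap");
	/* Valid: equal. */
	printf("3-7 vs 3-7: %s\n",
	       spans_ok(cpus(3, 7), cpus(3, 7)) ? "ok" : "partial overlap");

	return 0;
}

With the masks from the report, 3-7 vs 4-7 is flagged as a partial overlap,
while 0-3 vs 4-7 and 3-7 vs 3-7 both pass.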