Message-ID: <b4060cb0-6eed-f34a-00d5-4c2f55f20220@arm.com>
Date:   Thu, 25 Jan 2018 09:56:30 -0600
From:   Jeremy Linton <jeremy.linton@....com>
To:     Xiongfeng Wang <wangxiongfeng2@...wei.com>,
        linux-acpi@...r.kernel.org
Cc:     linux-arm-kernel@...ts.infradead.org, sudeep.holla@....com,
        hanjun.guo@...aro.org, lorenzo.pieralisi@....com,
        rjw@...ysocki.net, will.deacon@....com, catalin.marinas@....com,
        gregkh@...uxfoundation.org, viresh.kumar@...aro.org,
        mark.rutland@....com, linux-kernel@...r.kernel.org,
        linux-pm@...r.kernel.org, jhugo@...eaurora.org,
        Jonathan.Zhang@...ium.com, ahs3@...hat.com,
        Jayachandran.Nair@...ium.com, austinwc@...eaurora.org,
        lenb@...nel.org, vkilari@...eaurora.org, morten.rasmussen@....com,
        Juri Lelli <juri.lelli@....com>
Subject: Re: [PATCH v6 11/12] arm64: topology: enable ACPI/PPTT based CPU
 topology

Hi,

On 01/25/2018 06:15 AM, Xiongfeng Wang wrote:
> Hi Jeremy,
> 
> I have tested the patch with the newest UEFI. It prints the below error:
> 
> [    4.017371] BUG: arch topology borken
> [    4.021069] BUG: arch topology borken
> [    4.024764] BUG: arch topology borken
> [    4.028460] BUG: arch topology borken
> [    4.032153] BUG: arch topology borken
> [    4.035849] BUG: arch topology borken
> [    4.039543] BUG: arch topology borken
> [    4.043239] BUG: arch topology borken
> [    4.046932] BUG: arch topology borken
> [    4.050629] BUG: arch topology borken
> [    4.054322] BUG: arch topology borken
> 
> I checked the code and found that the newest UEFI sets the PPTT physical_package_flag on a physical package node, and
> the NUMA domains (SRAT domains) start from the DIE layer. (The topology of our board is core->cluster->die->package.)

I commented about that on the EDK2 mailing list. While the current spec 
doesn't explicitly ban having the flag set multiple times between the 
leaf and the root, I consider it a "bug", and there is an effort to 
clarify the spec and the use of that flag.
> 
> When the kernel starts to build sched_domain, the multi-core sched_domain contains all the cores within a package,
> and the lowest NUMA sched_domain contains all the cores within a die. But the kernel requires that the multi-core
> sched_domain should be a subset of the lowest NUMA sched_domain, so the BUG info is printed.

Right. I've mentioned this problem a couple of times.

At the moment, the spec isn't clear about how the proximity domain is 
detected/located within the PPTT topology (a node with a 1:1 
correspondence isn't even required). As you can see from this patch set, 
we are making the general assumption that the proximity domains are at 
the same level as the physical socket. This isn't ideal for NUMA 
topologies like the D05's, which don't align with the physical socket.

There are efforts underway to clarify and expand upon the specification 
to deal with this general problem. The simple solution is another flag 
(say, PPTT_PROXIMITY_DOMAIN, which would map to the D05 die) that could 
be used to find nodes with a 1:1 correspondence. At that point we could 
add a fairly trivial patch to correct just the scheduler topology 
without affecting the rest of the system topology code.
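
Just to sketch the idea (a made-up userspace model, not kernel code; the
FLAG_PROXIMITY_DOMAIN name and the node layout below are only placeholders
for illustration), the walk would be the same one
find_acpi_cpu_topology_package() does today, just terminating on a
different flag:

#include <stdio.h>

#define FLAG_PHYSICAL_PACKAGE	0x1
#define FLAG_PROXIMITY_DOMAIN	0x2	/* hypothetical, not in the spec today */

struct topo_node {
	int id;
	unsigned int flags;
	struct topo_node *parent;
};

/* Return the first ancestor (or the node itself) with 'flag' set,
 * falling back to the root if nothing carries the flag. */
static struct topo_node *find_flagged_ancestor(struct topo_node *n,
					       unsigned int flag)
{
	while (n->parent && !(n->flags & flag))
		n = n->parent;
	return n;
}

int main(void)
{
	/* D05-like chain: core -> cluster -> die (proximity) -> package */
	struct topo_node package = { 3, FLAG_PHYSICAL_PACKAGE, NULL };
	struct topo_node die     = { 2, FLAG_PROXIMITY_DOMAIN, &package };
	struct topo_node cluster = { 1, 0, &die };
	struct topo_node core    = { 0, 0, &cluster };

	printf("package id:   %d\n",
	       find_flagged_ancestor(&core, FLAG_PHYSICAL_PACKAGE)->id);
	printf("proximity id: %d\n",
	       find_flagged_ancestor(&core, FLAG_PROXIMITY_DOMAIN)->id);
	return 0;
}

Something along those lines would let the scheduler topology pick up the 
die as the NUMA boundary while package_id stays tied to the physical 
socket.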

> 
> If we modify the UEFI to make the NUMA sched_domain start from the package layer, then all the topology information
> within the package will be discarded. I think we need to build the multi-core sched_domain from the cores within
> the cluster instead of the cores within the package. I think that's what 'multi-core' means: multiple cores form a cluster, I guess.
> If we build the multi-core sched_domain from the cores within a cluster, I think we need to add fields to struct cpu_topology
> to record which cores are in each cluster.

The problem is that there isn't a generic way to identify which level of 
cache sharing is the "correct" top layer MC domain. For one system the 
cluster might be appropriate, for another it might be the highest 
caching level within a socket, and for another it might be something in 
between, or a group of clusters or LLCs.

Hence the effort to standardize/guarantee a PPTT node that exactly 
matches a SRAT domain. With that, each SoC/system provider has a clearly 
defined method for communicating where they want the proximity domain 
information to begin.

Thanks,

> 
> 
> Thanks,
> Xiongfeng
> 
> On 2018/1/13 8:59, Jeremy Linton wrote:
>> Propagate the topology information from the PPTT tree to the
>> cpu_topology array. We can get the thread id, core_id and
>> cluster_id by assuming certain levels of the PPTT tree correspond
>> to those concepts. The package_id is flagged in the tree and can be
>> found by calling find_acpi_cpu_topology_package() which terminates
>> its search when it finds an ACPI node flagged as the physical
>> package. If the tree doesn't contain enough levels to represent
>> all of the requested levels then the root node will be returned
>> for all subsequent levels.
>>
>> Cc: Juri Lelli <juri.lelli@....com>
>> Signed-off-by: Jeremy Linton <jeremy.linton@....com>
>> ---
>>   arch/arm64/kernel/topology.c | 46 +++++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 45 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
>> index 7b06e263fdd1..ce8ec7fd6b32 100644
>> --- a/arch/arm64/kernel/topology.c
>> +++ b/arch/arm64/kernel/topology.c
>> @@ -11,6 +11,7 @@
>>    * for more details.
>>    */
>>   
>> +#include <linux/acpi.h>
>>   #include <linux/arch_topology.h>
>>   #include <linux/cpu.h>
>>   #include <linux/cpumask.h>
>> @@ -22,6 +23,7 @@
>>   #include <linux/sched.h>
>>   #include <linux/sched/topology.h>
>>   #include <linux/slab.h>
>> +#include <linux/smp.h>
>>   #include <linux/string.h>
>>   
>>   #include <asm/cpu.h>
>> @@ -300,6 +302,46 @@ static void __init reset_cpu_topology(void)
>>   	}
>>   }
>>   
>> +#ifdef CONFIG_ACPI
>> +/*
>> + * Propagate the topology information of the processor_topology_node tree to the
>> + * cpu_topology array.
>> + */
>> +static int __init parse_acpi_topology(void)
>> +{
>> +	bool is_threaded;
>> +	int cpu, topology_id;
>> +
>> +	is_threaded = read_cpuid_mpidr() & MPIDR_MT_BITMASK;
>> +
>> +	for_each_possible_cpu(cpu) {
>> +		topology_id = find_acpi_cpu_topology(cpu, 0);
>> +		if (topology_id < 0)
>> +			return topology_id;
>> +
>> +		if (is_threaded) {
>> +			cpu_topology[cpu].thread_id = topology_id;
>> +			topology_id = find_acpi_cpu_topology(cpu, 1);
>> +			cpu_topology[cpu].core_id   = topology_id;
>> +			topology_id = find_acpi_cpu_topology_package(cpu);
>> +			cpu_topology[cpu].package_id = topology_id;
>> +		} else {
>> +			cpu_topology[cpu].thread_id  = -1;
>> +			cpu_topology[cpu].core_id    = topology_id;
>> +			topology_id = find_acpi_cpu_topology_package(cpu);
>> +			cpu_topology[cpu].package_id = topology_id;
>> +		}
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +#else
>> +static inline int __init parse_acpi_topology(void)
>> +{
>> +	return -EINVAL;
>> +}
>> +#endif
>>   
>>   void __init init_cpu_topology(void)
>>   {
>> @@ -309,6 +351,8 @@ void __init init_cpu_topology(void)
>>   	 * Discard anything that was parsed if we hit an error so we
>>   	 * don't use partial information.
>>   	 */
>> -	if (of_have_populated_dt() && parse_dt_topology())
>> +	if ((!acpi_disabled) && parse_acpi_topology())
>> +		reset_cpu_topology();
>> +	else if (of_have_populated_dt() && parse_dt_topology())
>>   		reset_cpu_topology();
>>   }
>>
> 
