Message-ID: <988be709-f2f5-9dbb-3f17-1fc45f665e58@huawei.com>
Date: Thu, 24 Oct 2024 22:47:51 +0800
From: Yicong Yang <yangyicong@...wei.com>
To: Pierre Gondois <pierre.gondois@....com>
CC: <catalin.marinas@....com>, <will@...nel.org>, <sudeep.holla@....com>,
<tglx@...utronix.de>, <peterz@...radead.org>, <mpe@...erman.id.au>,
<linux-arm-kernel@...ts.infradead.org>, <mingo@...hat.com>, <bp@...en8.de>,
<dave.hansen@...ux.intel.com>, <dietmar.eggemann@....com>,
<yangyicong@...ilicon.com>, <linuxppc-dev@...ts.ozlabs.org>,
<x86@...nel.org>, <linux-kernel@...r.kernel.org>, <morten.rasmussen@....com>,
<msuchanek@...e.de>, <gregkh@...uxfoundation.org>, <rafael@...nel.org>,
<jonathan.cameron@...wei.com>, <prime.zeng@...ilicon.com>,
<linuxarm@...wei.com>, <xuwei5@...wei.com>, <guohanjun@...wei.com>
Subject: Re: [PATCH v6 2/4] arch_topology: Support SMT control for OF based
system
On 2024/10/23 23:43, Pierre Gondois wrote:
> Hello Yicong,
>
> On 10/15/24 04:18, Yicong Yang wrote:
>> From: Yicong Yang <yangyicong@...ilicon.com>
>>
>> When building the topology from the devicetree, we have already
>> obtained the SMT thread number of each core. Update the largest
>> SMT thread number and enable SMT control by the end of
>> topology parsing.
>>
>> The core's SMT control provides two interfaces to the users [1]:
>> 1) enable/disable SMT by writing on/off
>> 2) enable/disable SMT by writing thread number 1/max_thread_number
>>
>> If a system has more than one SMT thread number, 2) may not
>> handle it well, since there are multiple thread numbers in the
>> system and 2) only accepts 1/max_thread_number. So issue a warning
>> to notify the users if such a system is detected.
>>
>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/ABI/testing/sysfs-devices-system-cpu#n542
>> Signed-off-by: Yicong Yang <yangyicong@...ilicon.com>
>> ---
>> drivers/base/arch_topology.c | 21 +++++++++++++++++++++
>> 1 file changed, 21 insertions(+)
>>
>> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
>> index 75fcb75d5515..5eed864df5e6 100644
>> --- a/drivers/base/arch_topology.c
>> +++ b/drivers/base/arch_topology.c
>> @@ -11,6 +11,7 @@
>> #include <linux/cleanup.h>
>> #include <linux/cpu.h>
>> #include <linux/cpufreq.h>
>> +#include <linux/cpu_smt.h>
>> #include <linux/device.h>
>> #include <linux/of.h>
>> #include <linux/slab.h>
>> @@ -28,6 +29,7 @@
>> static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data);
>> static struct cpumask scale_freq_counters_mask;
>> static bool scale_freq_invariant;
>> +static unsigned int max_smt_thread_num;
>> DEFINE_PER_CPU(unsigned long, capacity_freq_ref) = 1;
>> EXPORT_PER_CPU_SYMBOL_GPL(capacity_freq_ref);
>> @@ -561,6 +563,17 @@ static int __init parse_core(struct device_node *core, int package_id,
>> i++;
>> } while (1);
>> + if (max_smt_thread_num < i)
>> + max_smt_thread_num = i;
>
> Shouldn't the conditions above/below be inverted ?
> I.e. (max_smt_thread_num != i) should never be true if there is
> max_smt_thread_num = i;
> just before
>
You're right, will get this fixed. Thanks for catching this.

Thanks.
>> +
>> + /*
>> + * If max_smt_thread_num has been initialized and doesn't match
>> + * the thread number of this entry, then the system has
>> + * heterogeneous SMT topology.
>> + */
>> + if (max_smt_thread_num && max_smt_thread_num != i)
>> + pr_warn_once("Heterogeneous SMT topology is partly supported by SMT control\n");
>> +
>> cpu = get_cpu_for_node(core);
>> if (cpu >= 0) {
>> if (!leaf) {
>> @@ -673,6 +686,14 @@ static int __init parse_socket(struct device_node *socket)
>> if (!has_socket)
>> ret = parse_cluster(socket, 0, -1, 0);
>> + /*
>> + * Notify the CPU framework of the SMT support. A thread number of 1
>> + * can be handled by the framework so we don't need to check
>> + * max_smt_thread_num to see whether we support SMT or not.
>> + */
>> + if (max_smt_thread_num)
>> + cpu_smt_set_num_threads(max_smt_thread_num, max_smt_thread_num);
>> +
>> return ret;
>> }
>>
>
> .