Message-ID: <422b5d06-ec0e-f064-32fe-15df5b2957dd@linux.intel.com>
Date: Tue, 20 Apr 2021 11:31:38 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"will@...nel.org" <will@...nel.org>,
"rjw@...ysocki.net" <rjw@...ysocki.net>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"bp@...en8.de" <bp@...en8.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"lenb@...nel.org" <lenb@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"dietmar.eggemann@....com" <dietmar.eggemann@....com>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"bsegall@...gle.com" <bsegall@...gle.com>,
"mgorman@...e.de" <mgorman@...e.de>
Cc: "msys.mizuma@...il.com" <msys.mizuma@...il.com>,
"valentin.schneider@....com" <valentin.schneider@....com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
Jonathan Cameron <jonathan.cameron@...wei.com>,
"juri.lelli@...hat.com" <juri.lelli@...hat.com>,
"mark.rutland@....com" <mark.rutland@....com>,
"sudeep.holla@....com" <sudeep.holla@....com>,
"aubrey.li@...ux.intel.com" <aubrey.li@...ux.intel.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, "xuwei (O)" <xuwei5@...wei.com>,
"Zengtao (B)" <prime.zeng@...ilicon.com>,
"guodong.xu@...aro.org" <guodong.xu@...aro.org>,
yangyicong <yangyicong@...wei.com>,
"Liguozhu (Kenneth)" <liguozhu@...ilicon.com>,
"linuxarm@...neuler.org" <linuxarm@...neuler.org>,
"hpa@...or.com" <hpa@...or.com>
Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86

On 3/23/21 4:21 PM, Song Bao Hua (Barry Song) wrote:
>>
>> On 3/18/21 9:16 PM, Barry Song wrote:
>>> From: Tim Chen <tim.c.chen@...ux.intel.com>
>>>
>>> There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache
>>> is shared among a cluster of cores instead of being exclusive
>>> to a single core.
>>>
>>> To prevent oversubscription of the L2 cache, load should be
>>> balanced across such L2 clusters, especially for tasks with
>>> no shared data.
>>>
>>> Also, with a cluster scheduling policy that wakes tasks up
>>> within the same L2 cluster, we benefit from keeping related
>>> tasks, which likely share data, in the same L2 cluster.
>>>
>>> Add CPU masks of the CPUs sharing the L2 cache so we can build
>>> such an L2 cluster scheduler domain.
>>>
>>> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
>>> Signed-off-by: Barry Song <song.bao.hua@...ilicon.com>
>>
>>
>> Barry,
>>
>> Can you also add this chunk to the patch?
>> Thanks.
>
> Sure, Tim, thanks. I'll put that into patch 4/4 in v6.
>
Barry,
This chunk will also need to be added so that the cluster id is
returned on x86. Please add it in your next rev.
Thanks.
Tim
---
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 800fa48c9fcd..2548d824f103 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -109,6 +109,7 @@ extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
 #define topology_logical_die_id(cpu)		(cpu_data(cpu).logical_die_id)
 #define topology_die_id(cpu)			(cpu_data(cpu).cpu_die_id)
+#define topology_cluster_id(cpu)		(per_cpu(cpu_l2c_id, cpu))
 #define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)
 
 extern unsigned int __max_die_per_package;
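
For reference, the mask-side counterpart (the chunk mentioned earlier in
this thread) presumably pairs with this id roughly as below. This is
only a sketch: cpu_l2c_shared_mask() and the cpu_l2c_id per-cpu variable
are assumed to be provided by the rest of the series.

/* arch/x86/kernel/smpboot.c (sketch only, not part of this chunk) */
const struct cpumask *cpu_clustergroup_mask(int cpu)
{
	/* CPUs sharing an L2 cache form one cluster scheduling group */
	return cpu_l2c_shared_mask(cpu);
}

With both chunks applied, the id should also be visible through the
sysfs attribute added earlier in the series, e.g.
/sys/devices/system/cpu/cpu0/topology/cluster_id.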