Message-ID: <28ce8c4b8fc347ed9565a2ccac44a39b@hisilicon.com>
Date: Tue, 20 Apr 2021 22:31:36 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"will@...nel.org" <will@...nel.org>,
"rjw@...ysocki.net" <rjw@...ysocki.net>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"bp@...en8.de" <bp@...en8.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"lenb@...nel.org" <lenb@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"dietmar.eggemann@....com" <dietmar.eggemann@....com>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"bsegall@...gle.com" <bsegall@...gle.com>,
"mgorman@...e.de" <mgorman@...e.de>
CC: "msys.mizuma@...il.com" <msys.mizuma@...il.com>,
"valentin.schneider@....com" <valentin.schneider@....com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
Jonathan Cameron <jonathan.cameron@...wei.com>,
"juri.lelli@...hat.com" <juri.lelli@...hat.com>,
"mark.rutland@....com" <mark.rutland@....com>,
"sudeep.holla@....com" <sudeep.holla@....com>,
"aubrey.li@...ux.intel.com" <aubrey.li@...ux.intel.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, "xuwei (O)" <xuwei5@...wei.com>,
"Zengtao (B)" <prime.zeng@...ilicon.com>,
"guodong.xu@...aro.org" <guodong.xu@...aro.org>,
yangyicong <yangyicong@...wei.com>,
"Liguozhu (Kenneth)" <liguozhu@...ilicon.com>,
"linuxarm@...neuler.org" <linuxarm@...neuler.org>,
"hpa@...or.com" <hpa@...or.com>
Subject: RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@...ux.intel.com]
> Sent: Wednesday, April 21, 2021 6:32 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>;
> catalin.marinas@....com; will@...nel.org; rjw@...ysocki.net;
> vincent.guittot@...aro.org; bp@...en8.de; tglx@...utronix.de;
> mingo@...hat.com; lenb@...nel.org; peterz@...radead.org;
> dietmar.eggemann@....com; rostedt@...dmis.org; bsegall@...gle.com;
> mgorman@...e.de
> Cc: msys.mizuma@...il.com; valentin.schneider@....com;
> gregkh@...uxfoundation.org; Jonathan Cameron <jonathan.cameron@...wei.com>;
> juri.lelli@...hat.com; mark.rutland@....com; sudeep.holla@....com;
> aubrey.li@...ux.intel.com; linux-arm-kernel@...ts.infradead.org;
> linux-kernel@...r.kernel.org; linux-acpi@...r.kernel.org; x86@...nel.org;
> xuwei (O) <xuwei5@...wei.com>; Zengtao (B) <prime.zeng@...ilicon.com>;
> guodong.xu@...aro.org; yangyicong <yangyicong@...wei.com>; Liguozhu (Kenneth)
> <liguozhu@...ilicon.com>; linuxarm@...neuler.org; hpa@...or.com
> Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
>
>
>
> On 3/23/21 4:21 PM, Song Bao Hua (Barry Song) wrote:
>
> >>
> >> On 3/18/21 9:16 PM, Barry Song wrote:
> >>> From: Tim Chen <tim.c.chen@...ux.intel.com>
> >>>
> >>> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> >>> is shared among a cluster of cores instead of being exclusive
> >>> to a single core.
> >>>
> >>> To prevent oversubscription of L2 cache, load should be
> >>> balanced between such L2 clusters, especially for tasks with
> >>> no shared data.
> >>>
> >>> Also, with a cluster scheduling policy where tasks are woken up
> >>> in the same L2 cluster, we benefit from keeping related tasks,
> >>> which likely share data, in the same L2 cluster.
> >>>
> >>> Add CPU masks of CPUs sharing the L2 cache so we can build such
> >>> L2 cluster scheduler domain.
> >>>
> >>> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> >>> Signed-off-by: Barry Song <song.bao.hua@...ilicon.com>
> >>
> >>
> >> Barry,
> >>
> >> Can you also add this chunk to the patch.
> >> Thanks.
> >
> > Sure, Tim, Thanks. I'll put that into patch 4/4 in v6.
> >
>
> Barry,
>
> This chunk will also need to be added to return cluster id for x86.
> Please add it in your next rev.
Yes, thanks. I'll include this in either RFC v7 or patch v1.

The spreading path is much easier; the packing path is quite tricky. That said,
RFC v6 already seems quite close to what we want for packing related tasks, by
scanning the cluster for tasks within the same NUMA node:
https://lore.kernel.org/lkml/20210420001844.9116-1-song.bao.hua@hisilicon.com/
If a pair of related tasks is already in the same LLC (NUMA node), scanning the
cluster will gather them further. If they are running on different NUMA nodes,
the original LLC scan will first move them onto the same node; after that, the
cluster scan can bring them closer together. This is essentially the two-level
packing Dietmar has suggested; a rough sketch of the idea follows.
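To make the packing order concrete, here is a simplified sketch (not the actual
RFC v6 code: the function name is made up, and the scan-cost limits and SMT
handling are left out entirely). It only assumes the cpu_clustergroup_mask()
helper this series adds:

/*
 * Illustrative sketch only: prefer an idle CPU inside the waker's
 * L2 cluster, then fall back to the rest of the LLC.
 */
static int pick_cpu_cluster_first(int waker_cpu, const struct cpumask *llc_mask)
{
	int cpu;

	/* Level 1: pack within the waker's L2 cluster. */
	for_each_cpu(cpu, cpu_clustergroup_mask(waker_cpu)) {
		if (idle_cpu(cpu))
			return cpu;
	}

	/* Level 2: fall back to any idle CPU sharing the LLC. */
	for_each_cpu(cpu, llc_mask) {
		if (idle_cpu(cpu))
			return cpu;
	}

	/* Nothing idle: stay on the waker's CPU. */
	return waker_cpu;
}

The point is simply the ordering: the cluster is scanned before the wider LLC,
which is what gives the second, tighter level of packing.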
So perhaps we won't have an RFC v7; I will probably send patch v1 afterwards.
>
> Thanks.
>
> Tim
>
> ---
>
> diff --git a/arch/x86/include/asm/topology.h
> b/arch/x86/include/asm/topology.h
> index 800fa48c9fcd..2548d824f103 100644
> --- a/arch/x86/include/asm/topology.h
> +++ b/arch/x86/include/asm/topology.h
> @@ -109,6 +109,7 @@ extern const struct cpumask *cpu_clustergroup_mask(int cpu);
> #define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id)
> #define topology_logical_die_id(cpu) (cpu_data(cpu).logical_die_id)
> #define topology_die_id(cpu) (cpu_data(cpu).cpu_die_id)
> +#define topology_cluster_id(cpu) (per_cpu(cpu_l2c_id, cpu))
> #define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id)
>
> extern unsigned int __max_die_per_package;
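Just as a note on how the new macro will read once it is wired up (the helper
below is not part of the hunk, purely an illustration):

/* Illustration only: two CPUs are in the same L2 cluster when their ids match. */
static inline bool cpus_in_same_cluster(int cpu1, int cpu2)
{
	return topology_cluster_id(cpu1) == topology_cluster_id(cpu2);
}

That is similar to how the other topology ids are compared when sibling masks
are built or validated.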
Thanks
Barry