Message-ID: <32bcec89-57d5-65e3-970b-affcf4f41667@linux.intel.com>
Date: Mon, 23 Aug 2021 10:49:33 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Barry Song <21cnbao@...il.com>, bp@...en8.de,
catalin.marinas@....com, dietmar.eggemann@....com,
gregkh@...uxfoundation.org, hpa@...or.com, juri.lelli@...hat.com,
bristot@...hat.com, lenb@...nel.org, mgorman@...e.de,
mingo@...hat.com, peterz@...radead.org, rjw@...ysocki.net,
sudeep.holla@....com, tglx@...utronix.de
Cc: aubrey.li@...ux.intel.com, bsegall@...gle.com,
guodong.xu@...aro.org, jonathan.cameron@...wei.com,
liguozhu@...ilicon.com, linux-acpi@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
mark.rutland@....com, msys.mizuma@...il.com,
prime.zeng@...ilicon.com, rostedt@...dmis.org,
valentin.schneider@....com, vincent.guittot@...aro.org,
will@...nel.org, x86@...nel.org, xuwei5@...wei.com,
yangyicong@...wei.com, linuxarm@...wei.com,
Barry Song <song.bao.hua@...ilicon.com>
Subject: Re: [PATCH 3/3] scheduler: Add cluster scheduler level for x86
On 8/19/21 6:30 PM, Barry Song wrote:
> From: Tim Chen <tim.c.chen@...ux.intel.com>
>
> There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache
> is shared among a cluster of cores instead of being exclusive to one
> single core.
> To prevent oversubscription of L2 cache, load should be balanced
> between such L2 clusters, especially for tasks with no shared data.
> On benchmarks such as the SPECrate mcf test, this change provides a
> performance boost, especially at medium load. On a Jacobsville system
> that has 24 Atom cores, arranged into 6 clusters of 4 cores each,
> the benchmark numbers are as follows:
>
> Improvement over baseline kernel for mcf_r
>
>   copies    run time    base rate
>        1      -0.1%       -0.2%
>        6      25.1%       25.1%
>       12      18.8%       19.0%
>       24       0.3%        0.3%
>
> So this looks pretty good. In terms of the system's task distribution,
> some pretty bad clumping can be seen for the vanilla kernel without
> the L2 cluster domain for the 6 and 12 copies case. With the extra
> domain for cluster, the load does get evened out between the clusters.
>
> Note this patch isn't a universal win, as spreading isn't necessarily
> a win, particularly for workloads that can benefit from packing.
I have another patch set to make cluster scheduling selectable at run
time and boot time. I'd like to see people's feedback on this patch
set first before sending that out.
Thanks.
Tim