Message-ID: <ZSDPGjO+hkD0AjJ/@chenyu5-mobl2.ccr.corp.intel.com>
Date: Sat, 7 Oct 2023 11:23:06 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: K Prateek Nayak <kprateek.nayak@....com>
CC: Peter Zijlstra <peterz@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Ingo Molnar <mingo@...hat.com>,
"Vincent Guittot" <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>,
Tim Chen <tim.c.chen@...el.com>, Aaron Lu <aaron.lu@...el.com>,
"Dietmar Eggemann" <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
"Daniel Bristot de Oliveira" <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
<linux-kernel@...r.kernel.org>, Chen Yu <yu.chen.surf@...il.com>
Subject: Re: [PATCH 0/2] Introduce SIS_CACHE to choose previous CPU during
task wakeup
Hi Prateek,
On 2023-10-05 at 11:52:13 +0530, K Prateek Nayak wrote:
> Hello Chenyu,
>
> On 9/26/2023 10:40 AM, Chen Yu wrote:
> > RFC -> v1:
> > - drop RFC
> > - Only record the short sleeping time for each task, to better honor the
> > burst sleeping tasks. (Mathieu Desnoyers)
> > - Keep the forward movement monotonic for runqueue's cache-hot timeout value.
> > (Mathieu Desnoyers, Aaron Lu)
> > - Introduce a new helper function cache_hot_cpu() that considers
> > rq->cache_hot_timeout. (Aaron Lu)
> > - Add analysis of why inhibiting task migration could bring better throughput
> > for some benchmarks. (Gautham R. Shenoy)
> > - Choose the first cache-hot CPU, if all idle CPUs are cache-hot in
> > select_idle_cpu(). To avoid possible task stacking on the waker's CPU.
> > (K Prateek Nayak)
> >
> > Thanks for your comments and review!
>
> Sorry for the delay! I'll leave the test results from a 3rd Generation
> EPYC system below.
>
> tl;dr
>
> - Small regression in tbench and netperf, possibly due to more searching
>   for an idle CPU.
>
> - Small regression in schbench (old) at 256 workers, albeit with large
>   run-to-run variance.
>
> - Other benchmarks are more or less the same.
>
> Test : schbench
> Units : Normalized 99th percentile latency in us
> Interpretation: Lower is better
> Statistic : Median
> ==================================================================
> #workers: tip[pct imp](CV)       SIS_CACHE[pct imp](CV)
>   1       1.00 [ -0.00]( 3.95)   0.97 [  2.56](10.42)
>   2       1.00 [ -0.00]( 5.89)   0.83 [ 16.67](22.56)
>   4       1.00 [ -0.00](14.28)   1.00 [ -0.00](14.75)
>   8       1.00 [ -0.00]( 4.90)   0.84 [ 15.69]( 6.01)
>  16       1.00 [ -0.00]( 4.15)   1.00 [ -0.00]( 4.41)
>  32       1.00 [ -0.00]( 5.10)   1.01 [ -1.10]( 3.44)
>  64       1.00 [ -0.00]( 2.69)   1.04 [ -3.72]( 2.57)
> 128       1.00 [ -0.00]( 2.63)   0.94 [  6.29]( 2.55)
> 256       1.00 [ -0.00](26.75)   1.51 [-50.57](11.40)
Thanks for the testing. The latency regression in schbench is quite
obvious, and as you mentioned, it is possibly caused by the longer scan
time in select_idle_cpu(). I'll run the same test on a system with split
LLCs to see whether I can reproduce the issue.
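
Roughly, the extra cost I have in mind comes from a flow like the sketch
below. It is an illustration only, not the actual patch: cache_hot_cpu()
and rq->cache_hot_timeout are the names from the cover letter, but the
exact signature, the function name and the mask handling here are just
assumptions for the sake of the example:

/*
 * Illustration only: skip idle CPUs that are still cache-hot, but
 * remember the first one so we can fall back to it when every idle
 * CPU turns out to be cache-hot, instead of stacking the wakee on
 * the waker's CPU.
 */
static int sis_cache_scan_sketch(struct task_struct *p,
				 struct sched_domain *sd, int target)
{
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
	int cpu, first_hot_cpu = -1;

	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	for_each_cpu_wrap(cpu, cpus, target + 1) {
		if (!available_idle_cpu(cpu))
			continue;

		/* Hypothetical helper based on rq->cache_hot_timeout. */
		if (cache_hot_cpu(cpu, p)) {
			if (first_hot_cpu == -1)
				first_hot_cpu = cpu;
			/* Keep scanning: this is the extra search cost. */
			continue;
		}

		return cpu;
	}

	/* All idle CPUs were cache-hot: take the first one found. */
	return first_hot_cpu != -1 ? first_hot_cpu : target;
}

If most idle CPUs are still inside their timeout window, the scan
degenerates into a full pass over the LLC, which would explain the longer
wakeup latency under heavy load.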
I'm also working with Mathieu on another direction: choosing the previous
CPU over the current CPU when the system is overloaded. That should be a
more moderate change, and I'll post the test results later.
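
The rough idea is something like the sketch below. Again this is only a
sketch of the direction, not a finished patch; the function name and the
nr_running > 1 overload check are illustrative assumptions:

/*
 * Sketch of the direction only: when both the waker's CPU and the
 * wakee's previous CPU are busy, neither will run the wakee right
 * away, so prefer the previous CPU to keep its cache footprint.
 */
static int prefer_prev_cpu_sketch(struct task_struct *p, int this_cpu,
				  int prev_cpu)
{
	/* An idle previous CPU is the easy case. */
	if (available_idle_cpu(prev_cpu))
		return prev_cpu;

	/* Overload check is illustrative; the real condition is TBD. */
	if (cpu_rq(this_cpu)->nr_running > 1 &&
	    cpu_rq(prev_cpu)->nr_running > 1)
		return prev_cpu;

	return this_cpu;
}

Since this does not extend the idle-CPU scan, it should be gentler on
tbench/netperf style workloads than the cache-hot filtering.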
thanks,
Chenyu