Message-ID: <20250530090032.GA21197@noisy.programming.kicks-ass.net>
Date: Fri, 30 May 2025 11:00:32 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Beata Michalska <beata.michalska@....com>
Cc: mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, clm@...a.com,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 0/5] sched: Try and address some recent-ish regressions

On Thu, May 29, 2025 at 12:18:54PM +0200, Beata Michalska wrote:
> On Wed, May 28, 2025 at 09:59:44PM +0200, Peter Zijlstra wrote:
> > On Tue, May 20, 2025 at 11:45:38AM +0200, Peter Zijlstra wrote:
> >
> > > Anyway, the patches are stable (finally, I hope; knock on wood) but in a
> > > somewhat rough state. At the very least the last patch is missing ttwu_stat(),
> > > still need to figure out how to account it ;-)
> > >
> > > Chris, I'm hoping your machine will agree with these numbers; it hasn't
> > > been smooth sailing in that regard.
> >
> > Anybody? -- If there are no comments, I'll just stick them in sched/core or so.
> >
> Hi Peter,
>
> I've tried out your series on top of 6.15 on an Ampere Altra Mt Jade
> dual-socket (160-core) system, which enables SCHED_CLUSTER (2-core MC domains).

Ah, that's a radically different system than what we set out with. Good
to get some feedback on that indeed.

> Sharing preliminary test results of 50 runs per setup; so far, the data
> show quite a bit of run-to-run variability, so I'm not sure how useful
> these will be.

Yeah, I had some of that on the Skylake system; I had to disable turbo
for the numbers to become stable enough to say anything much.
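
(On that box, disabling turbo is typically a one-liner; a sketch assuming
the intel_pstate driver is in use:

  echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo

other cpufreq drivers expose the equivalent knob via
/sys/devices/system/cpu/cpufreq/boost.)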

> This is all without any deep dive at this point; one is probably needed
> and will hopefully come later on.
>
>
> Results for average rps (requests/sec, 60s runs), sorted based on P90
>
> CFG | min | max | stdev | 90th
> ----+------------+------------+------------+-----------
> 1 | 704577.50 | 942665.67 | 46439.49 | 891272.09
> 2 | 647163.57 | 815392.65 | 35559.98 | 783884.00
> 3 | 658665.75 | 859520.32 | 50257.35 | 832174.80
> 4 | 656313.48 | 877223.85 | 47871.43 | 837693.28
> 5 | 630419.62 | 842170.47 | 47267.52 | 815911.81
>
> Legend:
> #1 : kernel 6.9
> #2 : kernel 6.15
> #3 : kernel 6.15 patched, defaults (TTWU_QUEUE_ON_CPU + NO_TTWU_QUEUE_DEFAULT)
> #4 : kernel 6.15 patched + TTWU_QUEUE_ON_CPU + TTWU_QUEUE_DEFAULT
> #5 : kernel 6.15 patched + NO_TTWU_QUEUE_ON_CPU + NO_TTWU_QUEUE_DEFAULT

Right, minor improvement. At least it's not making it worse :-)

The new toy is TTWU_QUEUE_DELAYED, and yeah, I did notice that disabling
TTWU_QUEUE_ON_CPU was a bad idea.
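
FWIW, for anyone wanting to replay the combinations from the legend
above: the feat flags can be flipped at runtime through debugfs. A
sketch, assuming CONFIG_SCHED_DEBUG and a kernel with this series
applied (writing a name sets a flag, the NO_ prefix clears it):

  # config #4: TTWU_QUEUE_ON_CPU + TTWU_QUEUE_DEFAULT
  echo TTWU_QUEUE_ON_CPU > /sys/kernel/debug/sched/features
  echo TTWU_QUEUE_DEFAULT > /sys/kernel/debug/sched/features

  # config #5: NO_TTWU_QUEUE_ON_CPU + NO_TTWU_QUEUE_DEFAULT
  echo NO_TTWU_QUEUE_ON_CPU > /sys/kernel/debug/sched/features
  echo NO_TTWU_QUEUE_DEFAULT > /sys/kernel/debug/sched/features

  # sanity-check which feats are currently set
  cat /sys/kernel/debug/sched/features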