Message-ID: <20250520094538.086709102@infradead.org>
Date: Tue, 20 May 2025 11:45:38 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...hat.com,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
vschneid@...hat.com,
clm@...a.com
Cc: linux-kernel@...r.kernel.org,
peterz@...radead.org
Subject: [RFC][PATCH 0/5] sched: Try and address some recent-ish regressions
Hi!
So Chris poked me about how they're having a wee performance drop after around
6.11. He's extended his schbench tool to mimic the workload in question.
Specifically, the command line given:
schbench -L -m 4 -M auto -t 128 -n 0 -r 60
This benchmark wants to stay on a single (large) LLC (Chris, perhaps add an
option to start the CPU mask with
/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list or something). Both
the machine Chris has (SKL, 20+ cores per LLC) and the machines I ran this on
(SKL, SPR, 20+ cores) are Intel; AMD has smaller LLCs and the problem wasn't as
pronounced there.
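Until schbench grows such an option, pinning the whole run to cpu0's LLC by
hand works too -- a minimal sketch, assuming taskset is available and index3
is the L3 on the machine at hand:

  LLC_CPUS=$(cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list)
  taskset -c "$LLC_CPUS" schbench -L -m 4 -M auto -t 128 -n 0 -r 60

(shared_cpu_list is already in the "0-19,40-59" style list format that
taskset -c expects.)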
Use the performance CPU governor (as always when benchmarking). Also, if the
test results are unstable as all heck, disable turbo.
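On the Intel machines used here that amounts to something like the below
(a sketch, assuming intel_pstate; other cpufreq drivers expose the
turbo/boost knob elsewhere):

  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
          echo performance > "$g"
  done
  echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo   # disable turbo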
After a fair amount of tinkering I managed to reproduce this on my SPR and
Thomas' SKL. The SKL would only give usable numbers with the second socket
offline and turbo disabled -- YMMV.
Chris further provided a bisect into the DELAY_DEQUEUE patches and a second
bisect leading to commit 5f6bd380c7bd ("sched/rt: Remove default bandwidth
control") -- which enables the dl_server by default.
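For isolating the DELAY_DEQUEUE side of things, that feature can be flipped
at runtime through the usual sched features debugfs file (assuming debugfs is
mounted at /sys/kernel/debug):

  echo NO_DELAY_DEQUEUE > /sys/kernel/debug/sched/features   # disable
  echo DELAY_DEQUEUE > /sys/kernel/debug/sched/features      # re-enable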
SKL (performance, no_turbo):
schbench-6.9.0-1.txt:average rps: 2040360.55
schbench-6.9.0-2.txt:average rps: 2038846.78
schbench-6.9.0-3.txt:average rps: 2037892.28
schbench-6.15.0-rc6+-1.txt:average rps: 1907718.18
schbench-6.15.0-rc6+-2.txt:average rps: 1906931.07
schbench-6.15.0-rc6+-3.txt:average rps: 1903190.38
schbench-6.15.0-rc6+-dirty-1.txt:average rps: 2002224.78
schbench-6.15.0-rc6+-dirty-2.txt:average rps: 2007116.80
schbench-6.15.0-rc6+-dirty-3.txt:average rps: 2005294.57
schbench-6.15.0-rc6+-dirty-delayed-1.txt:average rps: 2011282.15
schbench-6.15.0-rc6+-dirty-delayed-2.txt:average rps: 2016347.10
schbench-6.15.0-rc6+-dirty-delayed-3.txt:average rps: 2014515.47
schbench-6.15.0-rc6+-dirty-delayed-default-1.txt:average rps: 2042169.00
schbench-6.15.0-rc6+-dirty-delayed-default-2.txt:average rps: 2032789.77
schbench-6.15.0-rc6+-dirty-delayed-default-3.txt:average rps: 2040313.95
SPR (performance):
schbench-6.9.0-1.txt:average rps: 2975450.75
schbench-6.9.0-2.txt:average rps: 2975464.38
schbench-6.9.0-3.txt:average rps: 2974881.02
schbench-6.15.0-rc6+-1.txt:average rps: 2882537.37
schbench-6.15.0-rc6+-2.txt:average rps: 2881658.70
schbench-6.15.0-rc6+-3.txt:average rps: 2884293.37
schbench-6.15.0-rc6+-dl_server-1.txt:average rps: 2924423.18
schbench-6.15.0-rc6+-dl_server-2.txt:average rps: 2920422.63
schbench-6.15.0-rc6+-dirty-1.txt:average rps: 3011540.97
schbench-6.15.0-rc6+-dirty-2.txt:average rps: 3010124.10
schbench-6.15.0-rc6+-dirty-delayed-1.txt:average rps: 3030883.15
schbench-6.15.0-rc6+-dirty-delayed-2.txt:average rps: 3031627.05
schbench-6.15.0-rc6+-dirty-delayed-default-1.txt:average rps: 3053005.98
schbench-6.15.0-rc6+-dirty-delayed-default-2.txt:average rps: 3052972.80
As can be seen, the SPR is much easier to please than the SKL for whatever
reason. I'm thinking we can make TTWU_QUEUE_DELAYED default on, but I suspect
TTWU_QUEUE_DEFAULT might be a harder sell -- we'd need to run more than this
one benchmark.
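For anyone wanting to try this before any defaults change: assuming the
series is applied and these end up as regular sched feature bits (which is
how the names read, but consider this hypothetical until the patches land in
that form), they should be toggleable the same way:

  echo TTWU_QUEUE_DELAYED > /sys/kernel/debug/sched/features
  echo TTWU_QUEUE_DEFAULT > /sys/kernel/debug/sched/features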
Anyway, the patches are stable (finally, I hope -- knock on wood) but in a
somewhat rough state. At the very least the last patch is missing ttwu_stat();
I still need to figure out how to account for it ;-)
Chris, I'm hoping your machine will agree with these numbers; it hasn't been
smooth sailing in that regard.