Message-ID: <202602101005.d6ed888c-lkp@intel.com>
Date: Tue, 10 Feb 2026 11:05:04 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>, <linux-kernel@...r.kernel.org>,
Juri Lelli <juri.lelli@...hat.com>, <aubrey.li@...ux.intel.com>,
<yu.c.chen@...el.com>, <oliver.sang@...el.com>
Subject: [linus:master] [sched/deadline] 1151354225: stress-ng.sysbadaddr.ops_per_sec 16.9% regression
Hello,
kernel test robot noticed a 16.9% regression of stress-ng.sysbadaddr.ops_per_sec on:
commit: 115135422562e2f791e98a6f55ec57b2da3b3a95 ("sched/deadline: Fix 'stuck' dl_server")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
[still regression on linus/master e7aa57247700733e52a8e2e4dee6a52c2a76de02]
[still regression on linux-next/master 9845cf73f7db6094c0d8419d6adb848028f4a921]
testcase: stress-ng
config: x86_64-rhel-9.4
compiler: gcc-14
test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 256G memory
parameters:
nr_threads: 100%
testtime: 60s
test: sysbadaddr
cpufreq_governor: performance
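
For reference, the nr_threads/testtime/test parameters above correspond very roughly to
a standalone stress-ng invocation along the following lines (an approximation only; the
exact options and environment used by the lkp harness are in the job file shipped in the
reproduce archive linked below):

	# match the cpufreq_governor: performance parameter
	cpupower frequency-set -g performance
	# one sysbadaddr worker per CPU (nr_threads: 100%) for 60s, reporting bogo ops/s
	stress-ng --sysbadaddr $(nproc) --timeout 60s --metrics-brief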
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@...el.com>
| Closes: https://lore.kernel.org/oe-lkp/202602101005.d6ed888c-lkp@intel.com
Details are as follows:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20260210/202602101005.d6ed888c-lkp@intel.com
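
A typical way to replay the job with lkp-tests looks roughly like the following (a sketch
only; "job.yaml" stands for the job file included in the archive above, and the exact
steps may vary with the lkp-tests version):

	git clone https://github.com/intel/lkp-tests.git
	cd lkp-tests
	sudo bin/lkp install job.yaml                 # install dependencies for this job
	sudo bin/lkp split-job --compatible job.yaml  # generate a runnable yaml from the job
	sudo bin/lkp run generated-yaml-file          # run the generated yaml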
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
gcc-14/performance/x86_64-rhel-9.4/100%/debian-13-x86_64-20250902.cgz/lkp-spr-r02/sysbadaddr/stress-ng/60s
commit:
v6.19-rc7
1151354225 ("sched/deadline: Fix 'stuck' dl_server")
          v6.19-rc7            115135422562e2f791e98a6f55e
   ----------------            ---------------------------
    value ± %stddev   %change    value ± %stddev    metric
89928 ± 7% -15.9% 75673 ± 11% numa-meminfo.node0.PageTables
1.052e+10 +5.4% 1.109e+10 cpuidle..time
20439789 -15.1% 17344810 cpuidle..usage
272197 +8.0% 293980 ± 2% meminfo.Mapped
171599 ± 3% -8.4% 157102 ± 3% meminfo.PageTables
56.50 ± 7% -16.0% 47.45 ± 9% vmstat.procs.r
402557 -20.4% 320282 vmstat.system.cs
570332 -19.2% 461095 ± 2% vmstat.system.in
0.55 -0.1 0.47 mpstat.cpu.all.irq%
0.35 -0.1 0.30 ± 2% mpstat.cpu.all.soft%
21.37 -3.2 18.20 ± 2% mpstat.cpu.all.sys%
1.19 -0.1 1.08 mpstat.cpu.all.usr%
47574572 ± 3% -15.1% 40407618 numa-numastat.node0.local_node
47713303 ± 3% -15.1% 40509520 numa-numastat.node0.numa_hit
46372877 ± 2% -15.6% 39140423 ± 3% numa-numastat.node1.local_node
46483726 ± 2% -15.5% 39282616 ± 3% numa-numastat.node1.numa_hit
22589 ± 7% -16.4% 18874 ± 11% numa-vmstat.node0.nr_page_table_pages
47714521 ± 3% -15.1% 40508990 numa-vmstat.node0.numa_hit
47575808 ± 3% -15.1% 40407088 numa-vmstat.node0.numa_local
46484027 ± 2% -15.5% 39282306 ± 3% numa-vmstat.node1.numa_hit
46373132 ± 2% -15.6% 39140113 ± 3% numa-vmstat.node1.numa_local
10441 ± 5% -49.8% 5240 ± 38% perf-c2c.DRAM.local
27988 -77.6% 6255 ± 89% perf-c2c.DRAM.remote
30307 -77.7% 6760 ± 89% perf-c2c.HITM.local
20500 -77.8% 4551 ± 91% perf-c2c.HITM.remote
50808 -77.7% 11311 ± 90% perf-c2c.HITM.total
11382479 -16.8% 9471558 ± 2% stress-ng.sysbadaddr.ops
189863 -16.9% 157870 ± 2% stress-ng.sysbadaddr.ops_per_sec
730453 -20.2% 583015 ± 2% stress-ng.time.involuntary_context_switches
1.462e+08 -16.4% 1.222e+08 ± 3% stress-ng.time.minor_page_faults
5002 -15.5% 4226 ± 2% stress-ng.time.percent_of_cpu_this_job_got
2450 -14.3% 2100 ± 2% stress-ng.time.system_time
558.73 -17.9% 458.84 ± 2% stress-ng.time.user_time
7545032 -19.3% 6088160 stress-ng.time.voluntary_context_switches
702.00 -15.4% 593.67 ± 2% turbostat.Avg_MHz
24.26 -3.8 20.46 ± 2% turbostat.Busy%
54.46 -8.2 46.28 turbostat.C1E%
21.40 +11.9 33.35 ± 3% turbostat.C6%
2.13 ± 5% +419.1% 11.07 ± 10% turbostat.CPU%c6
41047137 -18.9% 33275561 turbostat.IRQ
2895550 -26.4% 2130628 ± 13% turbostat.NMI
476.92 -4.9% 453.72 turbostat.PkgWatt
33.86 -8.1% 31.12 turbostat.RAMWatt
1261819 -1.0% 1249785 proc-vmstat.nr_file_pages
57044 -3.0% 55319 proc-vmstat.nr_kernel_stack
68078 +8.0% 73522 ± 2% proc-vmstat.nr_mapped
16.31 ± 35% +541.9% 104.69 ± 23% proc-vmstat.nr_mlock
42826 ± 3% -8.7% 39120 ± 3% proc-vmstat.nr_page_table_pages
336366 -3.6% 324322 proc-vmstat.nr_shmem
137143 -1.7% 134832 proc-vmstat.nr_slab_unreclaimable
94199637 ± 2% -15.3% 79794929 ± 2% proc-vmstat.numa_hit
93950007 ± 2% -15.3% 79550817 ± 2% proc-vmstat.numa_local
1.003e+08 ± 2% -15.3% 84985632 ± 2% proc-vmstat.pgalloc_normal
1.469e+08 -16.3% 1.23e+08 ± 3% proc-vmstat.pgfault
99165759 ± 2% -15.5% 83766426 ± 2% proc-vmstat.pgfree
87487 -16.7% 72919 ± 2% proc-vmstat.unevictable_pgs_culled
93994 -16.1% 78879 proc-vmstat.unevictable_pgs_mlocked
93984 -16.1% 78842 proc-vmstat.unevictable_pgs_munlocked
87454 -16.7% 72878 ± 2% proc-vmstat.unevictable_pgs_rescued
102462 -17.1% 84961 ± 2% sched_debug.cfs_rq:/.avg_vruntime.avg
67126 ± 2% -23.1% 51633 ± 4% sched_debug.cfs_rq:/.avg_vruntime.min
113508 ± 15% -27.4% 82463 ± 7% sched_debug.cfs_rq:/.avg_vruntime.stddev
90.19 ± 19% -67.4% 29.37 ± 84% sched_debug.cfs_rq:/.removed.load_avg.avg
250.01 ± 14% -46.2% 134.57 ± 47% sched_debug.cfs_rq:/.removed.load_avg.stddev
36.07 ± 21% -65.6% 12.39 ± 83% sched_debug.cfs_rq:/.removed.runnable_avg.avg
107.04 ± 16% -44.0% 59.99 ± 45% sched_debug.cfs_rq:/.removed.runnable_avg.stddev
36.08 ± 21% -65.8% 12.34 ± 83% sched_debug.cfs_rq:/.removed.util_avg.avg
107.00 ± 16% -44.1% 59.82 ± 45% sched_debug.cfs_rq:/.removed.util_avg.stddev
239.44 ± 4% -12.1% 210.47 ± 10% sched_debug.cfs_rq:/.runnable_avg.stddev
238.92 ± 4% -14.4% 204.47 ± 11% sched_debug.cfs_rq:/.util_avg.stddev
102432 -17.1% 84930 ± 2% sched_debug.cfs_rq:/.zero_vruntime.avg
67124 ± 2% -23.1% 51633 ± 4% sched_debug.cfs_rq:/.zero_vruntime.min
113448 ± 15% -27.3% 82455 ± 7% sched_debug.cfs_rq:/.zero_vruntime.stddev
249249 ± 13% -24.0% 189403 ± 20% sched_debug.cpu.avg_idle.stddev
1193041 -16.1% 1000474 ± 2% sched_debug.cpu.curr->pid.max
56973 -19.5% 45891 sched_debug.cpu.nr_switches.avg
75866 ± 2% -15.6% 64019 ± 3% sched_debug.cpu.nr_switches.max
6795 ± 5% -18.8% 5519 ± 6% sched_debug.cpu.nr_switches.stddev
0.01 ±118% +1691.7% 0.24 ± 69% sched_debug.cpu.nr_uninterruptible.avg
1.587e+10 -9.1% 1.442e+10 perf-stat.i.branch-instructions
1.412e+08 -8.0% 1.299e+08 perf-stat.i.branch-misses
4.647e+08 -9.8% 4.191e+08 ± 3% perf-stat.i.cache-misses
1.273e+09 -9.3% 1.155e+09 ± 2% perf-stat.i.cache-references
421189 -12.6% 367928 perf-stat.i.context-switches
1.601e+11 -9.3% 1.452e+11 perf-stat.i.cpu-cycles
19252 -10.8% 17178 perf-stat.i.cpu-migrations
347.41 +4.1% 361.51 ± 3% perf-stat.i.cycles-between-cache-misses
8.202e+10 -9.1% 7.453e+10 perf-stat.i.instructions
23.97 -10.7% 21.40 ± 2% perf-stat.i.metric.K/sec
2416028 -10.0% 2173992 ± 2% perf-stat.i.minor-faults
2532961 -10.0% 2278615 ± 2% perf-stat.i.page-faults
1.557e+10 -14.1% 1.338e+10 perf-stat.ps.branch-instructions
1.384e+08 -14.3% 1.185e+08 perf-stat.ps.branch-misses
4.554e+08 -15.8% 3.834e+08 ± 3% perf-stat.ps.cache-misses
1.248e+09 -14.8% 1.064e+09 ± 2% perf-stat.ps.cache-references
412177 -19.6% 331200 perf-stat.ps.context-switches
1.571e+11 -14.2% 1.349e+11 ± 2% perf-stat.ps.cpu-cycles
18872 -16.9% 15690 ± 2% perf-stat.ps.cpu-migrations
8.051e+10 -14.2% 6.911e+10 perf-stat.ps.instructions
2369827 -15.8% 1995000 ± 3% perf-stat.ps.minor-faults
2484512 -15.8% 2090911 ± 3% perf-stat.ps.page-faults
4.92e+12 -15.1% 4.178e+12 perf-stat.total.instructions
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki