Message-ID: <202212301548.8c6af9a0-yujie.liu@intel.com>
Date: Fri, 30 Dec 2022 15:43:19 +0800
From: kernel test robot <yujie.liu@...el.com>
To: Marco Elver <elver@...gle.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
Ian Rogers <irogers@...gle.com>,
<linux-kernel@...r.kernel.org>, <linux-perf-users@...r.kernel.org>,
<ying.huang@...el.com>, <feng.tang@...el.com>,
<zhengjun.xing@...ux.intel.com>, <fengwei.yin@...el.com>
Subject: [linus:master] [perf/hw_breakpoint] 0370dc314d:
stress-ng.kill.ops_per_sec 14.1% improvement
Greetings,
FYI, we noticed a 14.1% improvement of stress-ng.kill.ops_per_sec due to commit:
commit: 0370dc314df35579b751d1b77c9169f071444962 ("perf/hw_breakpoint: Optimize list of per-task breakpoints")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
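For context: per the commit message, hw_breakpoint previously kept every
task's pinned breakpoints on one global list, so counting a single task's
breakpoints meant scanning all tasks' entries; the commit keys that lookup
by target task instead (using the kernel's hash-table infrastructure). The
userspace C sketch below is purely illustrative -- hypothetical names and a
toy bucket array, not the kernel code -- and only shows why per-task keying
scales better than a global scan:

    /* Illustrative sketch only, not the kernel implementation. */
    #include <stddef.h>
    #include <stdio.h>

    struct bp {                 /* stand-in for a per-task breakpoint */
        int target_task;        /* which task owns this breakpoint */
        struct bp *next;        /* global list linkage (old scheme) */
        struct bp *hash_next;   /* per-bucket linkage (new scheme) */
    };

    /* Old scheme: one list holding every task's breakpoints. */
    static int count_by_global_scan(struct bp *head, int task)
    {
        int n = 0;
        for (struct bp *b = head; b; b = b->next)
            if (b->target_task == task)   /* must filter every entry */
                n++;
        return n;
    }

    /* New scheme: buckets keyed by task; walk only that task's bucket. */
    #define NBUCKETS 64
    static int count_by_task_bucket(struct bp *buckets[NBUCKETS], int task)
    {
        int n = 0;
        for (struct bp *b = buckets[task % NBUCKETS]; b; b = b->hash_next)
            if (b->target_task == task)   /* only collisions remain */
                n++;
        return n;
    }

    int main(void)
    {
        struct bp a = { .target_task = 1 }, b = { .target_task = 2 };
        struct bp *buckets[NBUCKETS] = { 0 };

        a.next = &b;                  /* global list: a -> b */
        buckets[1 % NBUCKETS] = &a;   /* per-task buckets */
        buckets[2 % NBUCKETS] = &b;

        printf("task 1: scan=%d bucket=%d\n",
               count_by_global_scan(&a, 1),
               count_by_task_bucket(buckets, 1));
        return 0;
    }

With many tasks each holding a few breakpoints, the global scan grows with
the total breakpoint count while the bucket walk stays proportional to one
task's own entries.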
in testcase: stress-ng
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
with the following parameters:
nr_threads: 10%
disk: 1HDD
testtime: 60s
fs: ext4
class: os
test: kill
cpufreq_governor: performance
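For reference, the "kill" stressor essentially hammers the kill(2) syscall
path -- the profile data below is dominated by __x64_sys_kill and
kill_something_info. A minimal C loop approximating that hot path, purely
illustrative and not stress-ng's actual implementation:

    /* Rough approximation of the stress-ng "kill" stressor's hot loop. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long ops = 0;

        signal(SIGUSR1, SIG_IGN);      /* signal is delivered but ignored */
        for (int i = 0; i < 1000000; i++) {
            kill(getpid(), SIGUSR1);   /* exercise the kernel kill path */
            ops++;
        }
        printf("%ld kill() calls\n", ops);
        return 0;
    }

Running a loop like this under perf should exercise the same
kill_something_info path that appears in the profile entries below.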
Details are as follows:
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
os/gcc-11/performance/1HDD/ext4/x86_64-rhel-8.3/10%/debian-11.1-x86_64-20220510.cgz/lkp-csl-2sp7/kill/stress-ng/60s
commit:
089cdcb0cd ("perf/hw_breakpoint: Clean up headers")
0370dc314d ("perf/hw_breakpoint: Optimize list of per-task breakpoints")
089cdcb0cd1c2534  0370dc314df35579b751d1b77c9
----------------  ---------------------------
       %stddev        %change        %stddev
           \              |              \
2025 +15.3% 2334 stress-ng.kill.kill_calls_per_sec
369933 +14.1% 422083 stress-ng.kill.ops
6165 +14.1% 7034 stress-ng.kill.ops_per_sec
2235 ± 6% +32.7% 2966 ± 3% stress-ng.time.involuntary_context_switches
722685 ± 2% +7.8% 779391 ± 2% stress-ng.time.voluntary_context_switches
846890 ± 12% +19.8% 1014212 ± 6% meminfo.DirectMap4k
24473 ± 2% +7.1% 26207 ± 2% vmstat.system.cs
3.44 ± 4% -1.9 1.59 ± 6% perf-profile.calltrace.cycles-pp.aa_may_signal.apparmor_task_kill.security_task_kill.kill_something_info.__x64_sys_kill
0.20 ±122% +0.4 0.55 ± 5% perf-profile.calltrace.cycles-pp.check_kill_permission.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.48 ± 4% -1.9 1.62 ± 6% perf-profile.children.cycles-pp.aa_may_signal
0.08 ± 9% +0.0 0.10 ± 10% perf-profile.children.cycles-pp.audit_signal_info
0.13 ± 11% +0.0 0.15 ± 4% perf-profile.children.cycles-pp.apparmor_capable
0.09 ± 37% +0.1 0.14 ± 22% perf-profile.children.cycles-pp.tick_sched_do_timer
0.51 ± 9% +0.1 0.60 ± 5% perf-profile.children.cycles-pp.check_kill_permission
3.46 ± 5% -1.9 1.60 ± 6% perf-profile.self.cycles-pp.aa_may_signal
0.13 ± 10% +0.0 0.15 ± 4% perf-profile.self.cycles-pp.apparmor_capable
0.16 ± 12% +0.0 0.19 ± 9% perf-profile.self.cycles-pp.check_kill_permission
0.12 ± 11% +0.0 0.14 ± 8% perf-profile.self.cycles-pp.kill_something_info
31.01 -33.4% 20.65 ± 2% perf-stat.i.MPKI
6.476e+08 +9.3% 7.079e+08 perf-stat.i.branch-instructions
25.80 ± 3% -8.3 17.52 perf-stat.i.cache-miss-rate%
24950550 ± 3% -50.4% 12381137 ± 2% perf-stat.i.cache-misses
95492313 -26.5% 70215337 perf-stat.i.cache-references
25172 ± 2% +7.4% 27042 perf-stat.i.context-switches
8.90 -9.1% 8.09 perf-stat.i.cpi
1137 ± 2% +98.4% 2257 ± 3% perf-stat.i.cycles-between-cache-misses
9.911e+08 +9.9% 1.089e+09 perf-stat.i.dTLB-loads
5.059e+08 +10.3% 5.578e+08 perf-stat.i.dTLB-stores
35.29 +1.1 36.42 perf-stat.i.iTLB-load-miss-rate%
1172159 +6.0% 1242747 perf-stat.i.iTLB-load-misses
3.273e+09 +9.4% 3.582e+09 perf-stat.i.instructions
2839 +3.3% 2933 perf-stat.i.instructions-per-iTLB-miss
0.14 +8.2% 0.15 perf-stat.i.ipc
298.28 ± 71% +176.8% 825.53 perf-stat.i.metric.K/sec
23.13 ± 2% +6.0% 24.52 perf-stat.i.metric.M/sec
3366282 ± 3% -15.4% 2848063 perf-stat.i.node-load-misses
429795 ± 7% -20.9% 340164 ± 6% perf-stat.i.node-loads
3305245 +11.6% 3687084 perf-stat.i.node-store-misses
29.20 -32.8% 19.62 perf-stat.overall.MPKI
1.11 ± 2% -0.1 1.04 ± 2% perf-stat.overall.branch-miss-rate%
26.13 ± 2% -8.5 17.64 perf-stat.overall.cache-miss-rate%
8.38 -8.2% 7.69 perf-stat.overall.cpi
1099 ± 3% +102.2% 2223 ± 2% perf-stat.overall.cycles-between-cache-misses
35.32 +1.1 36.44 perf-stat.overall.iTLB-load-miss-rate%
2791 ± 2% +3.2% 2880 perf-stat.overall.instructions-per-iTLB-miss
0.12 +9.0% 0.13 perf-stat.overall.ipc
6.369e+08 +9.3% 6.962e+08 perf-stat.ps.branch-instructions
24566232 ± 3% -50.4% 12190440 ± 2% perf-stat.ps.cache-misses
94005574 -26.5% 69121490 perf-stat.ps.cache-references
24783 ± 2% +7.4% 26622 perf-stat.ps.context-switches
9.751e+08 +9.9% 1.071e+09 perf-stat.ps.dTLB-loads
4.979e+08 +10.2% 5.489e+08 perf-stat.ps.dTLB-stores
1153650 +6.0% 1222951 perf-stat.ps.iTLB-load-misses
3.219e+09 +9.4% 3.523e+09 perf-stat.ps.instructions
3314459 ± 3% -15.4% 2804239 perf-stat.ps.node-load-misses
423139 ± 7% -20.8% 334925 ± 6% perf-stat.ps.node-loads
3254493 +11.5% 3630306 perf-stat.ps.node-store-misses
2.05e+11 +9.1% 2.236e+11 perf-stat.total.instructions
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# If you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests
View attachment "config-6.0.0-rc2-00021-g0370dc314df3" of type "text/plain" (164374 bytes)
View attachment "job-script" of type "text/plain" (8442 bytes)
View attachment "job.yaml" of type "text/plain" (5711 bytes)
View attachment "reproduce" of type "text/plain" (532 bytes)