Message-ID: <20200701091943.GC3874@shao2-debian>
Date: Wed, 1 Jul 2020 17:19:43 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Christoph Hellwig <hch@....de>
Cc: Al Viro <viro@...iv.linux.org.uk>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ian Kent <raven@...maw.net>,
David Howells <dhowells@...hat.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-security-module@...r.kernel.org,
netfilter-devel@...r.kernel.org, lkp@...ts.01.org
Subject: [fs] 140402bab8: stress-ng.splice.ops_per_sec -100.0% regression
Greetings,
FYI, we noticed a -100.0% regression of stress-ng.splice.ops_per_sec due to commit:
commit: 140402bab86b6c4c8c01e5a0e2015bcd96ddb072 ("[PATCH 13/14] fs: implement default_file_splice_read using __kernel_read")
url: https://github.com/0day-ci/linux/commits/Christoph-Hellwig/cachefiles-switch-to-kernel_write/20200625-011606
in testcase: stress-ng
on test machine: 96-thread Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G of memory
with the following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 30s
class: pipe
cpufreq_governor: performance
ucode: 0x5002f01
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@...el.com>
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
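
For a quick sanity check outside the LKP harness, the sketch below (my own illustration, not part of the attached job) performs one splice() round trip of the kind the stress-ng splice stressor exercises: /dev/zero into a pipe, then the pipe into /dev/null. At least to my reading, /dev/zero does not provide its own splice_read, so the read side should hit the default_file_splice_read() fallback touched by the commit; a -100.0% ops_per_sec result suggests splice() is failing outright rather than merely running slower.

/* splice-check.c: minimal splice() round trip, loosely modelled on the
 * stress-ng splice stressor (/dev/zero -> pipe -> /dev/null).
 * Hypothetical reproducer sketch; build with: gcc -o splice-check splice-check.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        int zero_fd = open("/dev/zero", O_RDONLY);
        int null_fd = open("/dev/null", O_WRONLY);
        int pipefd[2];

        if (zero_fd < 0 || null_fd < 0 || pipe(pipefd) < 0) {
                perror("setup");
                return EXIT_FAILURE;
        }

        /* Read side: /dev/zero into the pipe.  This is the path that
         * should go through default_file_splice_read() (assumption
         * based on /dev/zero lacking a splice_read method). */
        ssize_t moved_in = splice(zero_fd, NULL, pipefd[1], NULL, 65536, 0);
        if (moved_in < 0) {
                perror("splice /dev/zero -> pipe");
                return EXIT_FAILURE;
        }

        /* Write side: drain the pipe into /dev/null. */
        ssize_t moved_out = splice(pipefd[0], NULL, null_fd, NULL, moved_in, 0);
        if (moved_out < 0) {
                perror("splice pipe -> /dev/null");
                return EXIT_FAILURE;
        }

        printf("spliced %zd bytes in, %zd bytes out\n", moved_in, moved_out);
        return EXIT_SUCCESS;
}

On 169690acd1 this should report a non-zero round trip; if the first splice() call instead returns an error on 140402bab8, that would be consistent with the ops count dropping to zero.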
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
pipe/gcc-9/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-20191114.cgz/lkp-csl-2sp6/stress-ng/30s/0x5002f01
commit:
169690acd1 ("fs: remove __vfs_read")
140402bab8 ("fs: implement default_file_splice_read using __kernel_read")
169690acd1ebc195 (base)          140402bab86b6c4c8c01e5a0e20
----------------                 ---------------------------
(columns: base value ± %stddev, %change, new value ± %stddev, metric)
1.89e+09 -100.0% 0.00 stress-ng.splice.ops
62985321 -100.0% 0.00 stress-ng.splice.ops_per_sec
9102 -15.0% 7737 stress-ng.time.percent_of_cpu_this_job_got
17147 -15.2% 14544 stress-ng.time.system_time
2043 -13.5% 1767 ± 2% stress-ng.time.user_time
5.49 +14.2 19.74 mpstat.cpu.all.idle%
84.45 -12.9 71.56 mpstat.cpu.all.sys%
10.06 -1.4 8.70 ± 2% mpstat.cpu.all.usr%
8516257 -9.0% 7747908 sched_debug.cfs_rq:/.min_vruntime.avg
7791989 -8.0% 7166208 sched_debug.cfs_rq:/.min_vruntime.min
8.72 ± 14% -25.3% 6.52 ± 11% sched_debug.cfs_rq:/.nr_spread_over.stddev
6.00 +233.3% 20.00 vmstat.cpu.id
83.00 -15.1% 70.50 vmstat.cpu.sy
9.25 ± 4% -13.5% 8.00 vmstat.cpu.us
1.254e+09 -76.7% 2.928e+08 ± 10% numa-numastat.node0.local_node
1.254e+09 -76.7% 2.928e+08 ± 10% numa-numastat.node0.numa_hit
1.258e+09 -73.9% 3.281e+08 ± 14% numa-numastat.node1.local_node
1.258e+09 -73.9% 3.281e+08 ± 14% numa-numastat.node1.numa_hit
5.315e+08 -71.6% 1.511e+08 ± 16% numa-vmstat.node0.numa_hit
5.314e+08 -71.6% 1.51e+08 ± 16% numa-vmstat.node0.numa_local
5.333e+08 ± 2% -66.1% 1.808e+08 ± 20% numa-vmstat.node1.numa_hit
5.332e+08 ± 2% -66.1% 1.807e+08 ± 20% numa-vmstat.node1.numa_local
2.512e+09 -75.3% 6.209e+08 ± 2% proc-vmstat.numa_hit
2.512e+09 -75.3% 6.209e+08 ± 2% proc-vmstat.numa_local
2.513e+09 -75.3% 6.211e+08 ± 2% proc-vmstat.pgalloc_normal
2.513e+09 -75.3% 6.21e+08 ± 2% proc-vmstat.pgfree
11043 ± 6% +37.1% 15145 ± 17% interrupts.CPU16.RES:Rescheduling_interrupts
10433 ± 9% +90.8% 19904 ± 34% interrupts.CPU19.RES:Rescheduling_interrupts
10912 ± 7% +35.6% 14793 ± 18% interrupts.CPU20.RES:Rescheduling_interrupts
10149 ± 10% +89.2% 19199 ± 46% interrupts.CPU21.RES:Rescheduling_interrupts
25107 ± 39% -57.2% 10736 ± 7% interrupts.CPU41.RES:Rescheduling_interrupts
10090 ± 3% +35.8% 13704 ± 22% interrupts.CPU50.RES:Rescheduling_interrupts
6.36 +220.4% 20.38 iostat.cpu.idle
83.67 -15.2% 70.98 iostat.cpu.system
9.97 -13.3% 8.64 ± 2% iostat.cpu.user
3.18 ±100% -100.0% 0.00 iostat.sdb.await.max
3.18 ±100% -100.0% 0.00 iostat.sdb.r_await.max
2.00 ±100% -100.0% 0.00 iostat.sdb.svctm.max
1.15 ± 18% -0.6 0.55 ± 65% perf-profile.calltrace.cycles-pp.secondary_startup_64
1.13 ± 18% -0.6 0.54 ± 65% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
1.13 ± 18% -0.6 0.54 ± 65% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
1.12 ± 18% -0.6 0.54 ± 65% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
11.24 -0.3 10.94 perf-profile.calltrace.cycles-pp.do_select.core_sys_select.kern_select.__x64_sys_select.do_syscall_64
0.86 ± 5% +0.1 0.93 ± 5% perf-profile.calltrace.cycles-pp.common_file_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
1.20 ± 7% +0.2 1.35 ± 8% perf-profile.calltrace.cycles-pp.ktime_get_ts64.poll_select_finish.kern_select.__x64_sys_select.do_syscall_64
1.27 ± 27% -0.6 0.70 ± 49% perf-profile.children.cycles-pp._raw_spin_lock
1.15 ± 18% -0.5 0.62 ± 42% perf-profile.children.cycles-pp.secondary_startup_64
1.15 ± 18% -0.5 0.62 ± 42% perf-profile.children.cycles-pp.cpu_startup_entry
1.13 ± 18% -0.5 0.61 ± 42% perf-profile.children.cycles-pp.start_secondary
1.14 ± 18% -0.5 0.62 ± 41% perf-profile.children.cycles-pp.do_idle
2.79 ± 8% -0.3 2.47 ± 3% perf-profile.children.cycles-pp.__schedule
0.64 ± 23% -0.3 0.34 ± 46% perf-profile.children.cycles-pp.schedule_idle
11.35 -0.3 11.06 perf-profile.children.cycles-pp.do_select
0.42 ± 17% -0.2 0.22 ± 40% perf-profile.children.cycles-pp.cpuidle_enter
0.42 ± 17% -0.2 0.22 ± 40% perf-profile.children.cycles-pp.cpuidle_enter_state
0.36 ± 19% -0.2 0.19 ± 43% perf-profile.children.cycles-pp.intel_idle
0.51 ± 19% -0.2 0.35 ± 25% perf-profile.children.cycles-pp.__mutex_lock
0.19 ± 5% +0.0 0.22 ± 6% perf-profile.children.cycles-pp.memset
0.16 ± 4% +0.0 0.19 ± 3% perf-profile.children.cycles-pp.__x64_sys_write
0.34 ± 6% +0.0 0.38 ± 5% perf-profile.children.cycles-pp.aa_file_perm
2.83 ± 6% +0.3 3.17 ± 7% perf-profile.children.cycles-pp.memset_erms
3.74 ± 8% +0.5 4.27 ± 8% perf-profile.children.cycles-pp.ktime_get_ts64
0.36 ± 19% -0.2 0.19 ± 43% perf-profile.self.cycles-pp.intel_idle
0.15 ± 4% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.__x64_sys_write
0.18 ± 5% +0.0 0.21 ± 8% perf-profile.self.cycles-pp._copy_to_user
0.31 ± 6% +0.0 0.34 ± 5% perf-profile.self.cycles-pp.aa_file_perm
0.30 ± 5% +0.0 0.33 ± 5% perf-profile.self.cycles-pp.ksys_write
0.64 ± 6% +0.1 0.72 ± 8% perf-profile.self.cycles-pp.new_sync_write
0.93 ± 6% +0.1 1.04 ± 10% perf-profile.self.cycles-pp.__select
2.74 ± 6% +0.3 3.06 ± 7% perf-profile.self.cycles-pp.memset_erms
7.73 +77.8% 13.74 ± 53% perf-stat.i.MPKI
4.185e+10 -15.3% 3.546e+10 perf-stat.i.branch-instructions
0.66 +0.5 1.19 ± 56% perf-stat.i.branch-miss-rate%
2.242e+08 -19.5% 1.804e+08 perf-stat.i.branch-misses
10.61 ± 16% -2.2 8.40 ± 11% perf-stat.i.cache-miss-rate%
1.71 +31.9% 2.25 ± 14% perf-stat.i.cpi
2.363e+11 -14.0% 2.032e+11 perf-stat.i.cpu-cycles
5.255e+10 -18.9% 4.26e+10 perf-stat.i.dTLB-loads
3.183e+10 -18.9% 2.58e+10 perf-stat.i.dTLB-stores
91.58 -5.8 85.80 ± 2% perf-stat.i.iTLB-load-miss-rate%
1.693e+08 ± 4% -26.3% 1.247e+08 perf-stat.i.iTLB-load-misses
2.033e+11 -16.9% 1.69e+11 perf-stat.i.instructions
0.85 -13.0% 0.74 perf-stat.i.ipc
2.46 -14.0% 2.12 perf-stat.i.metric.GHz
0.69 ± 3% +11.4% 0.77 ± 4% perf-stat.i.metric.K/sec
1320 -17.7% 1087 perf-stat.i.metric.M/sec
84.92 -1.5 83.37 perf-stat.i.node-load-miss-rate%
5508205 ± 10% -12.3% 4829049 ± 7% perf-stat.i.node-load-misses
88.44 -1.8 86.60 perf-stat.i.node-store-miss-rate%
2.74 +19.7% 3.28 ± 2% perf-stat.overall.MPKI
0.54 -0.0 0.51 perf-stat.overall.branch-miss-rate%
1.17 +3.6% 1.21 perf-stat.overall.cpi
94.99 -1.6 93.40 perf-stat.overall.iTLB-load-miss-rate%
1198 ± 4% +12.1% 1343 perf-stat.overall.instructions-per-iTLB-miss
0.85 -3.5% 0.82 perf-stat.overall.ipc
76.33 -2.1 74.22 perf-stat.overall.node-store-miss-rate%
4.138e+10 -15.2% 3.51e+10 perf-stat.ps.branch-instructions
2.225e+08 -19.2% 1.798e+08 perf-stat.ps.branch-misses
2.354e+11 -13.7% 2.031e+11 perf-stat.ps.cpu-cycles
5.2e+10 -18.8% 4.225e+10 perf-stat.ps.dTLB-loads
3.148e+10 -18.8% 2.557e+10 perf-stat.ps.dTLB-stores
1.681e+08 ± 4% -25.9% 1.246e+08 perf-stat.ps.iTLB-load-misses
2.01e+11 -16.7% 1.674e+11 perf-stat.ps.instructions
5501189 ± 10% -12.1% 4834197 ± 7% perf-stat.ps.node-load-misses
4.253e+13 -16.5% 3.549e+13 perf-stat.total.instructions
12584 +22.9% 15464 ± 4% softirqs.CPU0.RCU
24160 ± 3% +16.8% 28229 ± 4% softirqs.CPU0.SCHED
12597 ± 3% +15.3% 14523 ± 6% softirqs.CPU1.RCU
22519 ± 2% +17.8% 26528 softirqs.CPU1.SCHED
11499 ± 2% +25.3% 14408 ± 4% softirqs.CPU10.RCU
22544 ± 2% +14.7% 25856 ± 3% softirqs.CPU10.SCHED
11857 ± 4% +18.4% 14035 ± 4% softirqs.CPU11.RCU
22508 ± 4% +13.3% 25509 softirqs.CPU11.SCHED
11755 ± 3% +20.4% 14150 ± 3% softirqs.CPU12.RCU
22168 ± 2% +15.5% 25610 softirqs.CPU12.SCHED
11532 ± 2% +20.7% 13919 ± 3% softirqs.CPU13.RCU
22433 ± 2% +12.5% 25236 ± 2% softirqs.CPU13.SCHED
11692 ± 4% +20.5% 14094 ± 3% softirqs.CPU14.RCU
21907 ± 2% +16.2% 25460 ± 2% softirqs.CPU14.SCHED
11584 ± 3% +21.1% 14029 ± 3% softirqs.CPU15.RCU
22116 +15.1% 25446 softirqs.CPU15.SCHED
12117 ± 4% +18.5% 14353 ± 4% softirqs.CPU16.RCU
21918 +18.2% 25904 ± 4% softirqs.CPU16.SCHED
12337 ± 7% +19.7% 14768 ± 3% softirqs.CPU17.RCU
21922 ± 3% +19.1% 26107 ± 2% softirqs.CPU17.SCHED
12117 ± 3% +19.5% 14486 ± 3% softirqs.CPU18.RCU
22208 ± 2% +15.2% 25586 ± 2% softirqs.CPU18.SCHED
12123 ± 6% +18.9% 14413 ± 3% softirqs.CPU19.RCU
21757 +16.8% 25405 ± 2% softirqs.CPU19.SCHED
11831 ± 6% +21.1% 14326 ± 3% softirqs.CPU2.RCU
22280 ± 3% +16.3% 25912 softirqs.CPU2.SCHED
12069 ± 3% +19.7% 14449 ± 3% softirqs.CPU20.RCU
22370 +14.0% 25505 ± 2% softirqs.CPU20.SCHED
12133 ± 4% +21.1% 14689 ± 5% softirqs.CPU21.RCU
21765 ± 2% +16.4% 25326 ± 3% softirqs.CPU21.SCHED
12077 ± 4% +20.8% 14584 ± 3% softirqs.CPU22.RCU
22162 ± 3% +11.5% 24706 ± 2% softirqs.CPU22.SCHED
11955 ± 3% +21.8% 14565 ± 3% softirqs.CPU23.RCU
21916 ± 2% +15.5% 25307 softirqs.CPU23.SCHED
12312 ± 3% +19.6% 14720 ± 4% softirqs.CPU24.RCU
12236 ± 3% +18.6% 14517 ± 3% softirqs.CPU25.RCU
12395 ± 5% +16.4% 14426 ± 3% softirqs.CPU26.RCU
12922 ± 15% +29.0% 16669 ± 13% softirqs.CPU27.RCU
11799 ± 2% +19.8% 14135 softirqs.CPU29.RCU
20734 ± 3% +16.6% 24174 ± 6% softirqs.CPU29.SCHED
12194 ± 7% +19.2% 14530 ± 4% softirqs.CPU3.RCU
21830 +16.2% 25361 ± 2% softirqs.CPU3.SCHED
11896 ± 2% +20.7% 14364 ± 2% softirqs.CPU30.RCU
11626 ± 2% +25.0% 14534 ± 2% softirqs.CPU31.RCU
11544 ± 2% +20.4% 13902 ± 2% softirqs.CPU32.RCU
11774 ± 4% +21.2% 14266 softirqs.CPU33.RCU
11779 ± 3% +20.1% 14147 softirqs.CPU34.RCU
21020 +12.3% 23615 ± 6% softirqs.CPU34.SCHED
11503 ± 2% +22.5% 14088 ± 2% softirqs.CPU35.RCU
11414 ± 3% +22.8% 14016 softirqs.CPU36.RCU
11400 ± 3% +22.7% 13982 ± 2% softirqs.CPU37.RCU
11505 ± 3% +23.8% 14242 ± 2% softirqs.CPU38.RCU
11547 ± 3% +21.8% 14069 ± 3% softirqs.CPU39.RCU
11800 +19.1% 14055 ± 3% softirqs.CPU4.RCU
22611 ± 3% +12.9% 25530 ± 2% softirqs.CPU4.SCHED
11467 ± 2% +23.6% 14171 ± 2% softirqs.CPU40.RCU
11521 ± 2% +22.7% 14131 softirqs.CPU41.RCU
11375 ± 2% +22.5% 13935 ± 3% softirqs.CPU42.RCU
11383 ± 2% +23.1% 14009 softirqs.CPU43.RCU
11315 ± 3% +22.2% 13828 ± 2% softirqs.CPU44.RCU
11340 ± 2% +24.9% 14166 ± 2% softirqs.CPU45.RCU
11478 ± 2% +22.5% 14060 softirqs.CPU46.RCU
20831 +14.1% 23759 ± 6% softirqs.CPU46.SCHED
11438 ± 2% +22.9% 14059 softirqs.CPU47.RCU
20926 +15.9% 24244 ± 6% softirqs.CPU47.SCHED
11478 +20.8% 13863 ± 3% softirqs.CPU48.RCU
21682 ± 3% +17.6% 25488 ± 2% softirqs.CPU48.SCHED
11517 ± 3% +18.9% 13695 ± 2% softirqs.CPU49.RCU
21961 ± 2% +17.6% 25832 ± 3% softirqs.CPU49.SCHED
22045 ± 3% +15.0% 25359 ± 2% softirqs.CPU5.SCHED
11718 ± 5% +21.0% 14175 ± 7% softirqs.CPU50.RCU
22354 ± 3% +16.5% 26036 ± 3% softirqs.CPU50.SCHED
11368 ± 3% +18.6% 13487 ± 5% softirqs.CPU51.RCU
22012 ± 3% +17.0% 25752 ± 2% softirqs.CPU51.SCHED
11263 ± 2% +22.8% 13825 ± 4% softirqs.CPU52.RCU
21846 ± 2% +18.2% 25823 ± 3% softirqs.CPU52.SCHED
11630 ± 2% +18.7% 13811 ± 5% softirqs.CPU53.RCU
22115 +15.7% 25589 ± 2% softirqs.CPU53.SCHED
11435 ± 3% +20.9% 13825 ± 3% softirqs.CPU54.RCU
21666 +17.3% 25408 softirqs.CPU54.SCHED
11341 ± 2% +21.6% 13794 ± 3% softirqs.CPU55.RCU
21801 ± 2% +17.8% 25674 ± 2% softirqs.CPU55.SCHED
11352 ± 3% +20.8% 13716 ± 3% softirqs.CPU56.RCU
21613 ± 3% +15.4% 24952 ± 4% softirqs.CPU56.SCHED
11385 ± 3% +21.2% 13795 ± 3% softirqs.CPU57.RCU
22139 ± 2% +16.6% 25819 ± 3% softirqs.CPU57.SCHED
11338 ± 3% +21.0% 13719 ± 3% softirqs.CPU58.RCU
22006 +16.3% 25600 ± 2% softirqs.CPU58.SCHED
11382 ± 2% +20.7% 13737 ± 3% softirqs.CPU59.RCU
22231 ± 2% +15.4% 25664 softirqs.CPU59.SCHED
11729 +23.1% 14443 ± 5% softirqs.CPU6.RCU
22023 ± 3% +14.2% 25145 ± 4% softirqs.CPU6.SCHED
11450 ± 2% +21.9% 13961 ± 2% softirqs.CPU60.RCU
22174 ± 4% +15.7% 25653 ± 2% softirqs.CPU60.SCHED
11297 ± 2% +22.6% 13848 ± 2% softirqs.CPU61.RCU
21550 ± 2% +15.6% 24909 ± 2% softirqs.CPU61.SCHED
11232 ± 2% +21.3% 13620 ± 3% softirqs.CPU62.RCU
22190 +15.2% 25560 ± 2% softirqs.CPU62.SCHED
11528 ± 2% +19.8% 13810 ± 2% softirqs.CPU63.RCU
22068 +17.2% 25860 ± 2% softirqs.CPU63.SCHED
11793 ± 4% +20.9% 14262 ± 3% softirqs.CPU64.RCU
22102 +16.1% 25660 ± 2% softirqs.CPU64.SCHED
12067 ± 4% +18.0% 14244 ± 4% softirqs.CPU65.RCU
22085 ± 2% +15.5% 25515 softirqs.CPU65.SCHED
11878 ± 3% +21.8% 14471 ± 4% softirqs.CPU66.RCU
21855 ± 2% +13.2% 24746 ± 4% softirqs.CPU66.SCHED
11723 ± 3% +23.1% 14435 ± 5% softirqs.CPU67.RCU
22581 ± 8% +13.1% 25541 softirqs.CPU67.SCHED
11796 ± 2% +22.1% 14406 ± 4% softirqs.CPU68.RCU
22195 ± 3% +14.6% 25435 softirqs.CPU68.SCHED
11690 ± 3% +22.6% 14337 ± 3% softirqs.CPU69.RCU
21531 ± 2% +20.6% 25957 ± 2% softirqs.CPU69.SCHED
11562 ± 3% +21.2% 14014 ± 5% softirqs.CPU7.RCU
22170 +14.6% 25414 ± 2% softirqs.CPU7.SCHED
11710 ± 3% +21.8% 14263 ± 3% softirqs.CPU70.RCU
22245 ± 3% +15.0% 25578 ± 3% softirqs.CPU70.SCHED
11991 ± 3% +18.7% 14238 ± 4% softirqs.CPU71.RCU
22094 ± 2% +15.1% 25431 ± 2% softirqs.CPU71.SCHED
12079 ± 3% +18.4% 14299 ± 6% softirqs.CPU72.RCU
11760 ± 3% +23.7% 14550 ± 4% softirqs.CPU73.RCU
21025 ± 2% +16.8% 24556 ± 7% softirqs.CPU73.SCHED
11820 ± 5% +21.1% 14317 ± 3% softirqs.CPU74.RCU
11770 ± 4% +18.7% 13975 ± 2% softirqs.CPU75.RCU
11306 ± 5% +23.7% 13982 ± 3% softirqs.CPU76.RCU
11588 ± 3% +20.9% 14008 softirqs.CPU77.RCU
11372 ± 2% +22.8% 13962 ± 2% softirqs.CPU78.RCU
11721 ± 4% +19.9% 14048 ± 4% softirqs.CPU8.RCU
22272 +12.5% 25048 ± 4% softirqs.CPU8.SCHED
11230 ± 2% +24.0% 13931 softirqs.CPU80.RCU
11404 ± 2% +20.3% 13725 ± 2% softirqs.CPU81.RCU
20985 ± 2% +14.8% 24100 ± 6% softirqs.CPU81.SCHED
11173 ± 2% +24.2% 13878 softirqs.CPU82.RCU
20795 ± 2% +17.4% 24408 ± 7% softirqs.CPU82.SCHED
11254 +22.7% 13814 softirqs.CPU83.RCU
20744 +15.7% 23992 ± 7% softirqs.CPU83.SCHED
11314 +21.1% 13696 ± 4% softirqs.CPU84.RCU
11319 ± 2% +23.3% 13960 softirqs.CPU85.RCU
11200 ± 2% +23.7% 13854 softirqs.CPU86.RCU
11360 ± 3% +20.2% 13652 softirqs.CPU87.RCU
11165 ± 2% +24.5% 13898 softirqs.CPU88.RCU
20965 +16.0% 24319 ± 6% softirqs.CPU88.SCHED
11149 ± 3% +24.2% 13843 softirqs.CPU89.RCU
11613 ± 2% +20.0% 13930 ± 2% softirqs.CPU9.RCU
22729 ± 5% +12.4% 25545 ± 2% softirqs.CPU9.SCHED
11302 +21.1% 13684 ± 3% softirqs.CPU90.RCU
11494 ± 4% +20.0% 13790 ± 2% softirqs.CPU91.RCU
11146 ± 2% +23.1% 13722 softirqs.CPU92.RCU
11099 ± 2% +23.4% 13693 softirqs.CPU93.RCU
11066 ± 2% +26.2% 13965 ± 2% softirqs.CPU94.RCU
20905 ± 2% +15.4% 24117 ± 6% softirqs.CPU94.SCHED
11386 ± 2% +22.6% 13963 ± 2% softirqs.CPU95.RCU
21251 ± 4% +14.4% 24302 ± 6% softirqs.CPU95.SCHED
1121124 ± 2% +21.2% 1359235 ± 2% softirqs.RCU
2068395 +15.2% 2382084 ± 4% softirqs.SCHED
stress-ng.time.system_time
  [per-sample trend plot: bisect-good samples hold steady around 17000-17500; bisect-bad samples sit around 14500]

stress-ng.time.percent_of_cpu_this_job_got
  [per-sample trend plot: bisect-good samples around 9000-9200; bisect-bad samples around 7600-7700]

stress-ng.splice.ops
  [per-sample trend plot: bisect-good samples scattered between 1.85e+09 and 1.94e+09; bisect-bad samples (0) fall below the plotted range]

stress-ng.splice.ops_per_sec
  [per-sample trend plot: bisect-good samples scattered between 6.15e+07 and 6.5e+07; bisect-bad samples (0) fall below the plotted range]
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-5.8.0-rc1-00140-g140402bab86b6" of type "text/plain" (206161 bytes)
View attachment "job-script" of type "text/plain" (7831 bytes)
View attachment "job.yaml" of type "text/plain" (5478 bytes)
View attachment "reproduce" of type "text/plain" (390 bytes)