Message-ID: <20190412144357.GD24200@shao2-debian>
Date: Fri, 12 Apr 2019 22:43:57 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Willem de Bruijn <willemb@...gle.com>, lkp@...org
Subject: [tcp] 01b4c2aab8: lmbench3.TCP.socket.bandwidth.10MB.MB/sec -20.2% regression

Greetings,

FYI, we noticed a -20.2% regression of lmbench3.TCP.socket.bandwidth.10MB.MB/sec due to commit:
commit: 01b4c2aab841d7ed9c5457371785070b2e0b53b1 ("[PATCH v3 net-next 3/3] tcp: add one skb cache for rx")
url: https://github.com/0day-ci/linux/commits/Eric-Dumazet/tcp-add-rx-tx-cache-to-reduce-lock-contention/20190323-091742
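
For readers skimming the report: per its title, the series under test adds a one-skb cache per socket (here on the rx side) to reduce socket lock contention. Purely to illustrate the general one-entry-cache pattern, here is a minimal C sketch; it is not the kernel patch, and every name in it is invented for illustration:

/*
 * Illustrative one-entry buffer cache (NOT the kernel patch; all names are
 * invented): instead of freeing a consumed buffer immediately, park the most
 * recent one in a single per-object slot and hand it back on the next request.
 */
#include <stdlib.h>

struct buf_cache {
	void   *cached;		/* at most one recycled buffer */
	size_t  cached_size;
};

void *cache_get(struct buf_cache *c, size_t size)
{
	if (c->cached && c->cached_size >= size) {
		void *b = c->cached;	/* reuse the parked buffer */
		c->cached = NULL;
		return b;
	}
	return malloc(size);		/* slot empty or too small */
}

void cache_put(struct buf_cache *c, void *buf, size_t size)
{
	if (!c->cached) {		/* keep one buffer around for reuse */
		c->cached = buf;
		c->cached_size = size;
		return;
	}
	free(buf);			/* slot already occupied */
}

int main(void)
{
	struct buf_cache c = { 0 };

	void *a = cache_get(&c, 4096);	/* first get: falls back to malloc */
	cache_put(&c, a, 4096);		/* parked in the cache slot */
	void *b = cache_get(&c, 1024);	/* second get: reuses the parked buffer */

	cache_put(&c, b, 4096);		/* underlying buffer is still 4096 bytes */
	free(c.cached);
	return 0;
}
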
in testcase: lmbench3
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters:

	test_memory_size: 50%
	nr_threads: 100%
	mode: development
	test: TCP
	cpufreq_governor: performance
	ucode: 0xb00002e
test-url: http://www.bitmover.com/lmbench/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
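
For orientation, the regressed metric is lmbench3's TCP socket bandwidth with 10MB messages, which on this single-machine setup runs over loopback (the ip_finish_output2/process_backlog entries in the profile below are the local receive path). The toy program below measures the same kind of thing, bulk MB/sec through a local TCP socket; it is only a rough illustration, not the lmbench3 source, and its absolute numbers are not comparable to the table data:

/*
 * Rough loopback TCP bandwidth probe (illustration only, not lmbench3):
 * one sender and one receiver on 127.0.0.1, 10MB messages, result in MB/sec.
 * Build: cc -O2 tcp_bw.c -o tcp_bw
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define MSG_SIZE (10 * 1024 * 1024)	/* 10MB messages, as in the metric name */
#define NR_MSGS  64

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	struct sockaddr_in addr = { .sin_family = AF_INET };
	socklen_t alen = sizeof(addr);
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	char *buf = malloc(MSG_SIZE);

	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));	/* port 0: kernel picks one */
	listen(lfd, 1);
	getsockname(lfd, (struct sockaddr *)&addr, &alen);

	if (fork() == 0) {			/* child: drain everything it receives */
		int cfd = accept(lfd, NULL, NULL);

		while (read(cfd, buf, MSG_SIZE) > 0)
			;
		_exit(0);
	}

	int fd = socket(AF_INET, SOCK_STREAM, 0);	/* parent: bulk sender */
	connect(fd, (struct sockaddr *)&addr, sizeof(addr));

	double t0 = now_sec();
	for (int i = 0; i < NR_MSGS; i++) {
		size_t off = 0;

		while (off < MSG_SIZE) {
			ssize_t n = write(fd, buf + off, MSG_SIZE - off);

			if (n <= 0)
				return 1;
			off += (size_t)n;
		}
	}
	close(fd);
	wait(NULL);				/* receiver has consumed all data */

	printf("%.1f MB/sec\n",
	       (double)NR_MSGS * MSG_SIZE / 1e6 / (now_sec() - t0));
	free(buf);
	return 0;
}
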
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_threads/rootfs/tbox_group/test/test_memory_size/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/development/100%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/TCP/50%/lmbench3/0xb00002e
commit:
af0b648e98 ("tcp: add one skb cache for tx")
01b4c2aab8 ("tcp: add one skb cache for rx")
af0b648e98a72a54 01b4c2aab841d7ed9c545737178
---------------- ---------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
          :4           25%           1:4     dmesg.WARNING:at_ip__netif_receive_skb_core/0x
          :4           25%           1:4     dmesg.WARNING:at_ip_do_select/0x
         1:4          -25%            :4     dmesg.WARNING:at_ip_ip_finish_output2/0x
         %stddev     %change         %stddev
             \          |                \
30.40 ± 5% -14.1% 26.11 ± 5% lmbench3.TCP.localhost.latency
99117 -20.2% 79133 lmbench3.TCP.socket.bandwidth.10MB.MB/sec
2537 -2.2% 2481 lmbench3.TCP.socket.bandwidth.64B.MB/sec
157430 -1.7% 154819 lmbench3.time.minor_page_faults
3593 +3.1% 3705 lmbench3.time.percent_of_cpu_this_job_got
22.28 ± 5% -3.0 19.29 ± 2% mpstat.cpu.all.idle%
6.19 ± 2% +1.2 7.40 ± 2% mpstat.cpu.all.soft%
508795 ± 2% +22.0% 620794 ± 2% numa-meminfo.node0.Unevictable
516356 ± 2% +18.1% 609977 ± 2% numa-meminfo.node1.Unevictable
1137258 +14.6% 1303104 ± 2% meminfo.Cached
6680579 ± 6% -12.9% 5821954 meminfo.DirectMap2M
298947 ± 3% +24.1% 371141 ± 5% meminfo.DirectMap4k
1025152 +20.1% 1230771 meminfo.Unevictable
379964 ± 75% -141.4% -157480 sched_debug.cfs_rq:/.spread0.avg
-392190 +179.0% -1094014 sched_debug.cfs_rq:/.spread0.min
4051 ± 35% +45.1% 5879 ± 14% sched_debug.cpu.load.min
9511408 ± 8% -18.7% 7730274 ± 3% sched_debug.cpu.nr_switches.max
541.00 ± 5% +17.7% 637.00 ± 8% slabinfo.kmem_cache_node.active_objs
592.00 ± 4% +16.2% 688.00 ± 7% slabinfo.kmem_cache_node.num_objs
5185 ± 3% +38.9% 7201 ± 6% slabinfo.skbuff_fclone_cache.active_objs
5187 ± 3% +38.9% 7206 ± 6% slabinfo.skbuff_fclone_cache.num_objs
21.75 ± 5% -12.6% 19.00 ± 3% vmstat.cpu.id
68.75 +4.0% 71.50 vmstat.cpu.sy
1206841 +13.7% 1372779 ± 2% vmstat.memory.cache
1846679 ± 2% -6.3% 1731096 ± 5% vmstat.system.cs
1.512e+08 ± 2% +13.2% 1.711e+08 ± 3% numa-numastat.node0.local_node
1.512e+08 ± 2% +13.2% 1.711e+08 ± 3% numa-numastat.node0.numa_hit
7091 ±173% +201.1% 21353 ± 57% numa-numastat.node0.other_node
1.48e+08 ± 2% +13.2% 1.674e+08 ± 3% numa-numastat.node1.local_node
1.48e+08 ± 2% +13.1% 1.674e+08 ± 3% numa-numastat.node1.numa_hit
2087 +3.6% 2163 turbostat.Avg_MHz
89315160 ± 13% -50.0% 44625119 ± 19% turbostat.C1
0.94 ± 13% -0.5 0.46 ± 18% turbostat.C1%
7386178 ± 49% -44.9% 4072935 ± 5% turbostat.C1E
0.39 ±146% -0.3 0.04 ± 10% turbostat.C1E%
5.109e+08 ± 15% -50.4% 2.534e+08 ± 22% cpuidle.C1.time
89317656 ± 13% -50.0% 44626550 ± 19% cpuidle.C1.usage
2.092e+08 ±145% -88.4% 24217866 ± 3% cpuidle.C1E.time
7389747 ± 49% -44.9% 4075034 ± 5% cpuidle.C1E.usage
75351513 ± 5% -30.3% 52528863 ± 16% cpuidle.POLL.time
29827485 ± 4% -27.6% 21596769 ± 16% cpuidle.POLL.usage
127198 ± 2% +22.0% 155198 ± 2% numa-vmstat.node0.nr_unevictable
127198 ± 2% +22.0% 155198 ± 2% numa-vmstat.node0.nr_zone_unevictable
25297173 ± 3% +15.2% 29129943 ± 3% numa-vmstat.node0.numa_hit
147405 +17.2% 172790 numa-vmstat.node0.numa_interleave
25289758 ± 2% +15.1% 29108245 ± 3% numa-vmstat.node0.numa_local
129088 ± 2% +18.1% 152494 ± 2% numa-vmstat.node1.nr_unevictable
129088 ± 2% +18.1% 152494 ± 2% numa-vmstat.node1.nr_zone_unevictable
24817577 ± 2% +14.5% 28404348 ± 3% numa-vmstat.node1.numa_hit
147161 +17.7% 173180 numa-vmstat.node1.numa_interleave
24646043 ± 2% +14.5% 28221056 ± 3% numa-vmstat.node1.numa_local
152712 +6.6% 162773 proc-vmstat.nr_anon_pages
241.25 +5.9% 255.50 proc-vmstat.nr_anon_transparent_hugepages
284312 +14.6% 325770 ± 2% proc-vmstat.nr_file_pages
6218 -2.9% 6039 proc-vmstat.nr_mapped
2354 +3.9% 2447 proc-vmstat.nr_page_table_pages
256287 +20.1% 307692 proc-vmstat.nr_unevictable
256287 +20.1% 307692 proc-vmstat.nr_zone_unevictable
2.988e+08 ± 2% +12.8% 3.369e+08 ± 3% proc-vmstat.numa_hit
2.988e+08 ± 2% +12.8% 3.369e+08 ± 3% proc-vmstat.numa_local
2.393e+09 ± 2% +13.0% 2.704e+09 ± 3% proc-vmstat.pgalloc_normal
2.393e+09 ± 2% +13.0% 2.703e+09 ± 3% proc-vmstat.pgfree
36.21 ± 2% -10.1% 32.54 ± 3% perf-stat.i.MPKI
2.203e+10 +3.2% 2.274e+10 perf-stat.i.branch-instructions
1.234e+08 ± 2% -6.8% 1.149e+08 perf-stat.i.cache-misses
1853039 ± 2% -6.3% 1736720 ± 5% perf-stat.i.context-switches
1.831e+11 +3.6% 1.896e+11 perf-stat.i.cpu-cycles
177350 ± 8% -39.9% 106565 ± 12% perf-stat.i.cpu-migrations
52299 ± 9% -15.0% 44447 ± 5% perf-stat.i.cycles-between-cache-misses
0.13 ± 10% -0.0 0.10 ± 6% perf-stat.i.dTLB-load-miss-rate%
24601513 ± 2% -28.8% 17506777 ± 7% perf-stat.i.dTLB-load-misses
3.49e+10 +2.5% 3.578e+10 perf-stat.i.dTLB-loads
0.05 ± 8% +0.0 0.06 ± 6% perf-stat.i.dTLB-store-miss-rate%
54414059 +11.6% 60703047 ± 5% perf-stat.i.iTLB-load-misses
13295971 ± 3% -11.2% 11805882 ± 4% perf-stat.i.iTLB-loads
1.125e+11 +3.0% 1.159e+11 perf-stat.i.instructions
3843 ± 4% +58.0% 6073 perf-stat.i.instructions-per-iTLB-miss
82.77 -2.9 79.87 perf-stat.i.node-load-miss-rate%
61535201 ± 2% -16.8% 51169722 perf-stat.i.node-loads
52.20 ± 3% -4.9 47.30 ± 4% perf-stat.i.node-store-miss-rate%
1348075 ± 6% -24.9% 1012041 ± 3% perf-stat.i.node-store-misses
1002686 ± 6% +26.7% 1269969 perf-stat.i.node-stores
14.36 -3.1% 13.93 perf-stat.overall.MPKI
2.15 -0.0 2.10 perf-stat.overall.branch-miss-rate%
7.65 ± 2% -0.5 7.14 perf-stat.overall.cache-miss-rate%
1481 ± 2% +11.2% 1646 ± 2% perf-stat.overall.cycles-between-cache-misses
0.07 ± 4% -0.0 0.05 ± 7% perf-stat.overall.dTLB-load-miss-rate%
80.36 +3.3 83.69 perf-stat.overall.iTLB-load-miss-rate%
2067 -7.5% 1913 ± 4% perf-stat.overall.instructions-per-iTLB-miss
57.31 ± 4% -13.0 44.31 perf-stat.overall.node-store-miss-rate%
2.198e+10 +3.2% 2.269e+10 perf-stat.ps.branch-instructions
1.234e+08 ± 2% -6.8% 1.15e+08 perf-stat.ps.cache-misses
1849729 ± 2% -6.3% 1733566 ± 5% perf-stat.ps.context-switches
1.827e+11 +3.6% 1.893e+11 perf-stat.ps.cpu-cycles
176984 ± 8% -39.9% 106335 ± 12% perf-stat.ps.cpu-migrations
24552688 ± 2% -28.8% 17472051 ± 7% perf-stat.ps.dTLB-load-misses
3.482e+10 +2.5% 3.57e+10 perf-stat.ps.dTLB-loads
54282300 +11.6% 60558515 ± 5% perf-stat.ps.iTLB-load-misses
13269959 ± 3% -11.2% 11782445 ± 4% perf-stat.ps.iTLB-loads
1.123e+11 +3.0% 1.156e+11 perf-stat.ps.instructions
61567371 ± 2% -16.8% 51205094 perf-stat.ps.node-loads
1345363 ± 6% -24.9% 1010000 ± 3% perf-stat.ps.node-store-misses
1001013 ± 6% +26.8% 1268788 perf-stat.ps.node-stores
31.55 ± 4% -15.5 16.09 ± 76% perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit
28.16 ± 6% -14.1 14.10 ± 76% perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit.__tcp_transmit_skb
28.00 ± 6% -14.0 14.02 ± 76% perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.__ip_queue_xmit
27.74 ± 6% -13.9 13.86 ± 76% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output
27.62 ± 6% -13.8 13.79 ± 76% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2
27.24 ± 6% -13.7 13.53 ± 76% perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip
26.70 ± 7% -13.5 13.25 ± 76% perf-profile.calltrace.cycles-pp.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq
26.05 ± 7% -13.1 12.93 ± 76% perf-profile.calltrace.cycles-pp.__netif_receive_skb_one_core.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack
25.54 ± 8% -12.9 12.61 ± 76% perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_one_core.process_backlog.net_rx_action.__softirqentry_text_start
24.81 ± 8% -12.7 12.14 ± 76% perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog.net_rx_action
24.69 ± 9% -12.6 12.07 ± 76% perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core.process_backlog
24.63 ± 9% -12.6 12.04 ± 76% perf-profile.calltrace.cycles-pp.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_one_core
24.34 ± 9% -12.4 11.93 ± 76% perf-profile.calltrace.cycles-pp.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver.ip_rcv
21.18 ± 12% -11.8 9.35 ± 76% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish.ip_local_deliver
20.79 ± 12% -11.6 9.19 ± 76% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu.ip_local_deliver_finish
15.49 ± 20% -9.4 6.12 ± 77% perf-profile.calltrace.cycles-pp.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_protocol_deliver_rcu
15.14 ± 20% -9.2 5.94 ± 77% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
14.75 ± 20% -9.0 5.77 ± 77% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv
14.40 ± 21% -8.8 5.61 ± 77% perf-profile.calltrace.cycles-pp.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established
18.83 ± 6% -7.6 11.23 ± 31% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read
19.13 ± 6% -7.5 11.60 ± 30% perf-profile.calltrace.cycles-pp.inet_recvmsg.sock_read_iter.new_sync_read.vfs_read.ksys_read
22.17 ± 4% -7.5 14.67 ± 38% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.53 ± 4% -7.4 15.13 ± 38% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
20.30 ± 5% -7.3 13.03 ± 28% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.89 ± 5% -7.3 12.63 ± 28% perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
12.67 ± 11% -7.1 5.60 ± 78% perf-profile.calltrace.cycles-pp.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_read_iter.new_sync_read
11.16 ± 12% -6.3 4.90 ± 78% perf-profile.calltrace.cycles-pp.wait_woken.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_read_iter
10.69 ± 12% -6.0 4.70 ± 78% perf-profile.calltrace.cycles-pp.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg.inet_recvmsg
10.55 ± 12% -5.9 4.64 ± 78% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg
10.35 ± 12% -5.8 4.53 ± 78% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.schedule_timeout.wait_woken.sk_wait_data
5.66 ± 21% -3.4 2.22 ± 79% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
5.51 ± 22% -3.4 2.15 ± 79% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_common_lock
5.05 ± 21% -3.0 2.03 ± 78% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout.wait_woken
3.04 ± 34% -2.3 0.74 ± 75% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.99 ± 34% -2.3 0.72 ± 75% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule_idle.do_idle.cpu_startup_entry.start_secondary
4.14 ± 13% -2.1 2.00 ± 75% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
3.13 ± 32% -2.1 1.01 ± 76% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__wake_up_common
2.93 ± 32% -2.0 0.94 ± 77% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__sched_text_start.schedule.schedule_timeout
3.48 ± 13% -1.9 1.59 ± 75% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.__wake_up_common.__wake_up_common_lock
1.58 ± 16% -1.1 0.52 ±105% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.__wake_up_common
1.14 ± 7% -0.7 0.47 ±104% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__sched_text_start.schedule.schedule_timeout.wait_woken
0.16 ±173% +0.8 0.95 ± 61% perf-profile.calltrace.cycles-pp.do_select.core_sys_select.kern_select.__x64_sys_select.do_syscall_64
0.14 ±173% +1.0 1.19 ± 58% perf-profile.calltrace.cycles-pp.__x64_sys_rt_sigaction.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.40 ±104% +1.3 1.72 ± 79% perf-profile.calltrace.cycles-pp.task_sched_runtime.thread_group_cputime.thread_group_cputime_adjusted.getrusage.__do_sys_getrusage
0.52 ±103% +2.0 2.48 ± 64% perf-profile.calltrace.cycles-pp.core_sys_select.kern_select.__x64_sys_select.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.77 ±104% +2.2 2.99 ± 68% perf-profile.calltrace.cycles-pp.thread_group_cputime_adjusted.getrusage.__do_sys_getrusage.do_syscall_64.entry_SYSCALL_64_after_hwframe
399.25 ± 22% +48.8% 594.00 ± 27% interrupts.47:PCI-MSI.1572878-edge.eth0-TxRx-14
399.25 ± 22% +48.8% 594.00 ± 27% interrupts.CPU14.47:PCI-MSI.1572878-edge.eth0-TxRx-14
28894 ± 27% -36.6% 18322 ± 11% interrupts.CPU15.RES:Rescheduling_interrupts
6661 ± 24% -41.3% 3911 ± 53% interrupts.CPU21.NMI:Non-maskable_interrupts
6661 ± 24% -41.3% 3911 ± 53% interrupts.CPU21.PMI:Performance_monitoring_interrupts
6395 ± 24% -40.9% 3776 ± 54% interrupts.CPU22.NMI:Non-maskable_interrupts
6395 ± 24% -40.9% 3776 ± 54% interrupts.CPU22.PMI:Performance_monitoring_interrupts
25874 ± 18% -29.3% 18288 ± 20% interrupts.CPU29.RES:Rescheduling_interrupts
26229 ± 11% -28.0% 18896 ± 13% interrupts.CPU32.RES:Rescheduling_interrupts
28466 ± 13% -35.1% 18482 ± 28% interrupts.CPU34.RES:Rescheduling_interrupts
28021 ± 26% -35.4% 18102 ± 25% interrupts.CPU36.RES:Rescheduling_interrupts
7321 -52.3% 3491 ± 67% interrupts.CPU37.NMI:Non-maskable_interrupts
7321 -52.3% 3491 ± 67% interrupts.CPU37.PMI:Performance_monitoring_interrupts
7351 -48.8% 3766 ± 55% interrupts.CPU38.NMI:Non-maskable_interrupts
7351 -48.8% 3766 ± 55% interrupts.CPU38.PMI:Performance_monitoring_interrupts
7333 -47.6% 3841 ± 52% interrupts.CPU39.NMI:Non-maskable_interrupts
7333 -47.6% 3841 ± 52% interrupts.CPU39.PMI:Performance_monitoring_interrupts
6408 ± 23% -41.3% 3759 ± 55% interrupts.CPU40.NMI:Non-maskable_interrupts
6408 ± 23% -41.3% 3759 ± 55% interrupts.CPU40.PMI:Performance_monitoring_interrupts
6416 ± 23% -35.2% 4160 ± 43% interrupts.CPU41.NMI:Non-maskable_interrupts
6416 ± 23% -35.2% 4160 ± 43% interrupts.CPU41.PMI:Performance_monitoring_interrupts
6371 ± 24% -39.8% 3838 ± 52% interrupts.CPU42.NMI:Non-maskable_interrupts
6371 ± 24% -39.8% 3838 ± 52% interrupts.CPU42.PMI:Performance_monitoring_interrupts
6383 ± 24% -37.4% 3993 ± 48% interrupts.CPU43.NMI:Non-maskable_interrupts
6383 ± 24% -37.4% 3993 ± 48% interrupts.CPU43.PMI:Performance_monitoring_interrupts
6636 ± 24% -41.7% 3868 ± 55% interrupts.CPU45.NMI:Non-maskable_interrupts
6636 ± 24% -41.7% 3868 ± 55% interrupts.CPU45.PMI:Performance_monitoring_interrupts
5684 ± 32% -31.4% 3900 ± 55% interrupts.CPU46.NMI:Non-maskable_interrupts
5684 ± 32% -31.4% 3900 ± 55% interrupts.CPU46.PMI:Performance_monitoring_interrupts
22917 ± 10% -19.3% 18485 ± 12% interrupts.CPU51.RES:Rescheduling_interrupts
24437 ± 16% -18.6% 19881 ± 21% interrupts.CPU53.RES:Rescheduling_interrupts
26653 ± 16% -30.7% 18474 ± 22% interrupts.CPU61.RES:Rescheduling_interrupts
26867 ± 9% -39.1% 16369 ± 12% interrupts.CPU62.RES:Rescheduling_interrupts
28101 ± 24% -29.3% 19868 ± 22% interrupts.CPU68.RES:Rescheduling_interrupts
30098 ± 34% -39.4% 18243 ± 18% interrupts.CPU73.RES:Rescheduling_interrupts
6384 ± 24% -59.0% 2616 ± 41% interrupts.CPU74.NMI:Non-maskable_interrupts
6384 ± 24% -59.0% 2616 ± 41% interrupts.CPU74.PMI:Performance_monitoring_interrupts
6371 ± 24% -58.7% 2633 ± 40% interrupts.CPU75.NMI:Non-maskable_interrupts
6371 ± 24% -58.7% 2633 ± 40% interrupts.CPU75.PMI:Performance_monitoring_interrupts
6410 ± 23% -59.5% 2595 ± 42% interrupts.CPU76.NMI:Non-maskable_interrupts
6410 ± 23% -59.5% 2595 ± 42% interrupts.CPU76.PMI:Performance_monitoring_interrupts
27943 ± 16% -39.8% 16824 ± 18% interrupts.CPU76.RES:Rescheduling_interrupts
5478 ± 32% -52.7% 2593 ± 41% interrupts.CPU77.NMI:Non-maskable_interrupts
5478 ± 32% -52.7% 2593 ± 41% interrupts.CPU77.PMI:Performance_monitoring_interrupts
6393 ± 24% -58.8% 2631 ± 41% interrupts.CPU78.NMI:Non-maskable_interrupts
6393 ± 24% -58.8% 2631 ± 41% interrupts.CPU78.PMI:Performance_monitoring_interrupts
6401 ± 24% -59.3% 2603 ± 41% interrupts.CPU79.NMI:Non-maskable_interrupts
6401 ± 24% -59.3% 2603 ± 41% interrupts.CPU79.PMI:Performance_monitoring_interrupts
6370 ± 24% -59.1% 2608 ± 41% interrupts.CPU80.NMI:Non-maskable_interrupts
6370 ± 24% -59.1% 2608 ± 41% interrupts.CPU80.PMI:Performance_monitoring_interrupts
7316 -64.1% 2623 ± 40% interrupts.CPU81.NMI:Non-maskable_interrupts
7316 -64.1% 2623 ± 40% interrupts.CPU81.PMI:Performance_monitoring_interrupts
7292 -64.1% 2616 ± 40% interrupts.CPU82.NMI:Non-maskable_interrupts
7292 -64.1% 2616 ± 40% interrupts.CPU82.PMI:Performance_monitoring_interrupts
7307 -63.5% 2668 ± 37% interrupts.CPU83.NMI:Non-maskable_interrupts
7307 -63.5% 2668 ± 37% interrupts.CPU83.PMI:Performance_monitoring_interrupts
7334 -64.4% 2610 ± 40% interrupts.CPU84.NMI:Non-maskable_interrupts
7334 -64.4% 2610 ± 40% interrupts.CPU84.PMI:Performance_monitoring_interrupts
7350 -54.1% 3374 ± 15% interrupts.CPU85.NMI:Non-maskable_interrupts
7350 -54.1% 3374 ± 15% interrupts.CPU85.PMI:Performance_monitoring_interrupts
7312 -59.9% 2933 ± 22% interrupts.CPU86.NMI:Non-maskable_interrupts
7312 -59.9% 2933 ± 22% interrupts.CPU86.PMI:Performance_monitoring_interrupts
7338 -43.4% 4154 ± 45% interrupts.CPU87.NMI:Non-maskable_interrupts
7338 -43.4% 4154 ± 45% interrupts.CPU87.PMI:Performance_monitoring_interrupts
42075 ± 5% -16.4% 35154 ± 6% softirqs.CPU0.SCHED
39762 ± 7% -16.8% 33092 ± 8% softirqs.CPU1.SCHED
39607 ± 7% -19.2% 32002 ± 6% softirqs.CPU10.SCHED
39624 ± 10% -19.8% 31770 ± 6% softirqs.CPU11.SCHED
38851 ± 6% -18.1% 31832 ± 7% softirqs.CPU12.SCHED
39142 ± 6% -15.6% 33029 ± 7% softirqs.CPU13.SCHED
38809 ± 7% -17.5% 32025 ± 6% softirqs.CPU14.SCHED
40786 ± 12% -21.6% 31993 ± 5% softirqs.CPU15.SCHED
38640 ± 6% -17.4% 31903 ± 7% softirqs.CPU16.SCHED
38433 ± 6% -17.1% 31877 ± 6% softirqs.CPU17.SCHED
40026 ± 10% -16.7% 33343 ± 11% softirqs.CPU18.SCHED
41468 ± 9% -25.2% 31023 ± 6% softirqs.CPU19.SCHED
38731 ± 6% -15.0% 32938 ± 4% softirqs.CPU2.SCHED
5406654 ± 5% -8.8% 4931685 ± 3% softirqs.CPU20.NET_RX
39048 ± 6% -18.1% 31963 ± 7% softirqs.CPU20.SCHED
39819 ± 8% -20.6% 31596 ± 7% softirqs.CPU21.SCHED
39843 ± 8% -20.1% 31827 ± 11% softirqs.CPU22.SCHED
39542 ± 10% -19.9% 31668 ± 11% softirqs.CPU24.SCHED
39722 ± 9% -19.2% 32088 ± 9% softirqs.CPU25.SCHED
39669 ± 10% -19.1% 32078 ± 9% softirqs.CPU26.SCHED
39649 ± 11% -19.6% 31887 ± 10% softirqs.CPU27.SCHED
39862 ± 12% -20.6% 31651 ± 10% softirqs.CPU28.SCHED
39196 ± 11% -19.5% 31533 ± 9% softirqs.CPU29.SCHED
40023 ± 9% -19.0% 32406 ± 3% softirqs.CPU3.SCHED
39523 ± 11% -19.1% 31960 ± 10% softirqs.CPU30.SCHED
39695 ± 11% -19.8% 31838 ± 8% softirqs.CPU31.SCHED
38974 ± 11% -19.5% 31383 ± 10% softirqs.CPU32.SCHED
39283 ± 11% -19.2% 31742 ± 10% softirqs.CPU33.SCHED
39383 ± 12% -19.5% 31712 ± 10% softirqs.CPU34.SCHED
40608 ± 12% -21.7% 31797 ± 12% softirqs.CPU36.SCHED
39687 ± 11% -19.6% 31927 ± 11% softirqs.CPU37.SCHED
39664 ± 10% -19.1% 32099 ± 10% softirqs.CPU38.SCHED
39632 ± 10% -19.9% 31762 ± 10% softirqs.CPU39.SCHED
38780 ± 5% -16.3% 32468 ± 7% softirqs.CPU4.SCHED
39578 ± 11% -20.5% 31466 ± 11% softirqs.CPU40.SCHED
39321 ± 10% -18.7% 31961 ± 10% softirqs.CPU41.SCHED
39030 ± 11% -16.7% 32507 ± 9% softirqs.CPU42.SCHED
38372 ± 8% -16.9% 31898 ± 10% softirqs.CPU43.SCHED
38653 ± 7% -17.3% 31949 ± 6% softirqs.CPU44.SCHED
38385 ± 6% -18.4% 31334 ± 6% softirqs.CPU45.SCHED
38379 ± 6% -16.2% 32151 ± 5% softirqs.CPU47.SCHED
38798 ± 5% -15.5% 32780 ± 6% softirqs.CPU48.SCHED
38516 ± 6% -16.0% 32351 ± 6% softirqs.CPU49.SCHED
38685 ± 7% -12.3% 33942 ± 10% softirqs.CPU5.SCHED
38451 ± 6% -16.6% 32063 ± 6% softirqs.CPU50.SCHED
38345 ± 7% -17.3% 31722 ± 6% softirqs.CPU51.SCHED
39662 ± 6% -20.0% 31733 ± 6% softirqs.CPU52.SCHED
38634 ± 7% -17.6% 31849 ± 6% softirqs.CPU53.SCHED
39172 ± 6% -19.7% 31467 ± 7% softirqs.CPU54.SCHED
5379816 ± 4% -8.8% 4905481 ± 2% softirqs.CPU55.NET_RX
38658 ± 7% -18.5% 31505 ± 6% softirqs.CPU55.SCHED
38386 ± 7% -19.0% 31095 ± 4% softirqs.CPU56.SCHED
38370 ± 5% -15.8% 32305 ± 6% softirqs.CPU57.SCHED
38724 ± 7% -18.0% 31762 ± 4% softirqs.CPU58.SCHED
38540 ± 7% -18.5% 31398 ± 5% softirqs.CPU59.SCHED
38642 ± 6% -15.0% 32829 ± 9% softirqs.CPU6.SCHED
38745 ± 7% -18.0% 31789 ± 6% softirqs.CPU60.SCHED
38501 ± 6% -17.6% 31709 ± 6% softirqs.CPU61.SCHED
38697 ± 5% -18.2% 31659 ± 6% softirqs.CPU62.SCHED
39577 ± 7% -20.4% 31503 ± 6% softirqs.CPU63.SCHED
5678631 ± 13% -16.8% 4722162 ± 2% softirqs.CPU64.NET_RX
39312 ± 7% -19.5% 31647 ± 6% softirqs.CPU64.SCHED
38828 ± 5% -18.6% 31606 ± 7% softirqs.CPU65.SCHED
39268 ± 10% -17.9% 32233 ± 11% softirqs.CPU66.SCHED
39052 ± 10% -20.2% 31151 ± 12% softirqs.CPU67.SCHED
38862 ± 12% -20.0% 31094 ± 12% softirqs.CPU68.SCHED
38819 ± 12% -19.8% 31147 ± 11% softirqs.CPU69.SCHED
39013 ± 5% -13.2% 33865 ± 4% softirqs.CPU7.SCHED
39271 ± 10% -22.9% 30261 ± 13% softirqs.CPU70.SCHED
39586 ± 10% -21.1% 31234 ± 12% softirqs.CPU71.SCHED
39970 ± 12% -22.1% 31155 ± 11% softirqs.CPU72.SCHED
39372 ± 10% -20.1% 31441 ± 11% softirqs.CPU73.SCHED
39419 ± 11% -22.7% 30488 ± 10% softirqs.CPU74.SCHED
39041 ± 11% -19.2% 31532 ± 9% softirqs.CPU75.SCHED
39211 ± 11% -20.3% 31254 ± 10% softirqs.CPU76.SCHED
38997 ± 10% -20.4% 31024 ± 11% softirqs.CPU77.SCHED
38902 ± 11% -18.3% 31788 ± 13% softirqs.CPU78.SCHED
39187 ± 11% -19.6% 31506 ± 13% softirqs.CPU79.SCHED
38878 ± 7% -17.1% 32222 ± 6% softirqs.CPU8.SCHED
38849 ± 10% -18.3% 31748 ± 13% softirqs.CPU80.SCHED
39415 ± 10% -18.2% 32243 ± 12% softirqs.CPU81.SCHED
38612 ± 11% -19.0% 31271 ± 11% softirqs.CPU82.SCHED
39255 ± 11% -18.9% 31818 ± 11% softirqs.CPU83.SCHED
38819 ± 10% -19.1% 31403 ± 10% softirqs.CPU84.SCHED
39017 ± 10% -17.7% 32115 ± 12% softirqs.CPU85.SCHED
39175 ± 10% -17.7% 32232 ± 11% softirqs.CPU86.SCHED
38852 ± 10% -17.0% 32244 ± 11% softirqs.CPU87.SCHED
41916 ± 6% -23.5% 32046 ± 8% softirqs.CPU9.SCHED
3452696 ± 8% -18.5% 2812299 ± 6% softirqs.SCHED
lmbench3.TCP.socket.bandwidth.10MB.MB_sec

  [ASCII trend plot: bisect-good samples hold near 100000 MB/sec, while
   bisect-bad samples sit around 80000 MB/sec]

[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-5.0.0-11696-g01b4c2a" of type "text/plain" (192819 bytes)
View attachment "job-script" of type "text/plain" (7233 bytes)
View attachment "job.yaml" of type "text/plain" (4919 bytes)
View attachment "reproduce" of type "text/plain" (566 bytes)