Message-ID: <20200326083540.GP11705@shao2-debian>
Date: Thu, 26 Mar 2020 16:35:40 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Will Deacon <will@...nel.org>
Cc: Maddie Stone <maddiestone@...gle.com>,
Jann Horn <jannh@...gle.com>,
Kees Cook <keescook@...omium.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org
Subject: [kernel] 2a8b6290ee: netperf.Throughput_Mbps 12.6% improvement
Greetings,
FYI, we noticed a 12.6% improvement of netperf.Throughput_Mbps due to commit:
commit: 2a8b6290eeb099d80ddfd958368c71693aa16519 ("kernel-hacking: Make DEBUG_{LIST,PLIST,SG,NOTIFIERS} non-debug options")
https://git.kernel.org/cgit/linux/kernel/git/will/linux.git debug-list
in testcase: netperf
on test machine: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
with following parameters:
ip: ipv4
runtime: 300s
nr_threads: 25%
cluster: cs-localhost
send_size: 10K
test: SCTP_STREAM_MANY
cpufreq_governor: performance
ucode: 0x2000065
test-description: Netperf is a benchmark that can be used to measure various aspects of networking performance.
test-url: http://www.netperf.org/netperf/
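For reference, the parameters listed above map onto keys in the attached job.yaml consumed by lkp-tests. The fragment below is a hypothetical sketch of how such a job file might look, with field names inferred from the parameter list rather than copied from the actual attachment:

```yaml
# Hypothetical excerpt of the lkp job.yaml for this run; the real
# attachment may differ in layout and carry additional fields.
testcase: netperf
cluster: cs-localhost
ip: ipv4
runtime: 300s
nr_threads: 25%
send_size: 10K
test: SCTP_STREAM_MANY
cpufreq_governor: performance
```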
In addition, the commit has a significant impact on the following tests:
+------------------+----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.brk.ops_per_sec 10.8% improvement |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters | class=os |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | fs=ext4 |
| | nr_threads=100% |
| | testtime=1s |
| | ucode=0x500002c |
+------------------+----------------------------------------------------------------------+
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/send_size/tbox_group/test/testcase/ucode:
cs-localhost/gcc-7/performance/ipv4/x86_64-rhel-7.6/25%/debian-x86_64-20191114.cgz/300s/10K/lkp-skl-2sp7/SCTP_STREAM_MANY/netperf/0x2000065
commit:
faf6f40bfb ("list: Remove unnecessary WRITE_ONCE() from hlist_bl_add_before()")
2a8b6290ee ("kernel-hacking: Make DEBUG_{LIST,PLIST,SG,NOTIFIERS} non-debug options")
faf6f40bfbf9e7b6 2a8b6290eeb099d80ddfd958368
---------------- ---------------------------
%stddev %change %stddev
\ | \
9251 +12.6% 10414 netperf.Throughput_Mbps
166527 +12.6% 187463 netperf.Throughput_total_Mbps
1125 ± 3% +42.5% 1603 ± 11% netperf.time.voluntary_context_switches
6.098e+08 +12.6% 6.865e+08 netperf.workload
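The %change column in the tables above is the relative delta between the parent-commit and tested-commit means. As a worked check, the headline throughput figure can be reproduced from the two reported means (a minimal sketch using values quoted above):

```python
# Compute the relative change between two benchmark means,
# matching the %change column in the comparison tables above.
def pct_change(base, new):
    """Return the percentage change from base to new."""
    return (new - base) / base * 100.0

base_tput = 9251   # netperf.Throughput_Mbps, parent commit faf6f40bfb
new_tput = 10414   # netperf.Throughput_Mbps, tested commit 2a8b6290ee

print(f"{pct_change(base_tput, new_tput):+.1f}%")  # +12.6%
```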
1.79 ± 4% +0.2 2.02 mpstat.cpu.all.usr%
1.8e+08 ± 2% +11.2% 2e+08 cpuidle.C1.usage
757162 ± 2% +13.8% 861385 cpuidle.POLL.usage
53876 ± 60% +131.6% 124784 ± 25% sched_debug.cfs_rq:/.min_vruntime.min
2271450 ± 9% +18.5% 2691264 sched_debug.cpu.nr_switches.avg
67.25 -1.9% 66.00 vmstat.cpu.id
1145809 +12.7% 1291008 vmstat.system.cs
3.931e+08 +10.2% 4.334e+08 numa-numastat.node0.local_node
3.931e+08 +10.2% 4.334e+08 numa-numastat.node0.numa_hit
3.915e+08 +14.8% 4.493e+08 numa-numastat.node1.local_node
3.915e+08 +14.8% 4.493e+08 numa-numastat.node1.numa_hit
35653 +1.0% 36001 proc-vmstat.nr_slab_unreclaimable
7.843e+08 +12.5% 8.821e+08 proc-vmstat.numa_hit
7.843e+08 +12.5% 8.821e+08 proc-vmstat.numa_local
4.529e+09 +12.5% 5.097e+09 proc-vmstat.pgalloc_normal
4.529e+09 +12.5% 5.097e+09 proc-vmstat.pgfree
11337 ± 49% -88.4% 1315 ± 28% numa-meminfo.node0.Inactive
11170 ± 50% -89.8% 1139 ± 30% numa-meminfo.node0.Inactive(anon)
12186 ± 41% -74.5% 3112 ± 32% numa-meminfo.node0.Shmem
7352 ± 76% +137.2% 17439 ± 2% numa-meminfo.node1.Inactive
7185 ± 78% +140.4% 17275 ± 2% numa-meminfo.node1.Inactive(anon)
13371 ± 16% +27.8% 17089 numa-meminfo.node1.Mapped
34955 ± 14% +28.6% 44936 ± 4% numa-meminfo.node1.Shmem
2792 ± 50% -89.8% 284.50 ± 30% numa-vmstat.node0.nr_inactive_anon
3046 ± 41% -74.4% 778.50 ± 33% numa-vmstat.node0.nr_shmem
2792 ± 50% -89.8% 284.50 ± 30% numa-vmstat.node0.nr_zone_inactive_anon
1.942e+08 +11.5% 2.164e+08 numa-vmstat.node0.numa_hit
1.94e+08 +11.5% 2.163e+08 numa-vmstat.node0.numa_local
1764 ± 79% +142.9% 4285 ± 2% numa-vmstat.node1.nr_inactive_anon
8727 ± 14% +28.7% 11234 ± 4% numa-vmstat.node1.nr_shmem
1764 ± 79% +142.9% 4285 ± 2% numa-vmstat.node1.nr_zone_inactive_anon
1.959e+08 +13.5% 2.223e+08 numa-vmstat.node1.numa_hit
1.958e+08 +13.5% 2.222e+08 numa-vmstat.node1.numa_local
26583 ± 18% +21.2% 32218 ± 8% softirqs.CPU10.SCHED
28612 ± 15% +25.0% 35754 ± 10% softirqs.CPU14.SCHED
1785731 ± 21% -78.7% 380903 ± 76% softirqs.CPU20.NET_RX
26443 ± 22% +52.9% 40429 softirqs.CPU20.SCHED
35800 ± 7% -19.8% 28704 ± 19% softirqs.CPU27.SCHED
443444 ± 83% +184.7% 1262583 ± 23% softirqs.CPU3.NET_RX
39905 ± 4% -7.9% 36764 ± 6% softirqs.CPU3.SCHED
100379 ±107% +1872.2% 1979692 ± 36% softirqs.CPU30.NET_RX
40296 ± 2% -21.2% 31761 ± 15% softirqs.CPU30.SCHED
250441 ± 88% +822.5% 2310440 ± 30% softirqs.CPU31.NET_RX
39814 ± 6% -26.9% 29116 ± 18% softirqs.CPU31.SCHED
497248 ± 67% +294.4% 1961289 ± 11% softirqs.CPU32.NET_RX
201926 ±101% +560.9% 1334478 ± 33% softirqs.CPU33.NET_RX
1774815 ± 87% -90.1% 175969 ±141% softirqs.CPU39.NET_RX
93083 +23.6% 115062 ± 21% softirqs.CPU39.TIMER
30741 ±173% +3112.4% 987511 ±140% softirqs.CPU44.NET_RX
111973 ± 4% -5.5% 105858 ± 3% softirqs.CPU47.RCU
191848 ±146% +1263.2% 2615275 ± 66% softirqs.CPU55.NET_RX
22.25 ±106% +3.7e+06% 828201 ±149% softirqs.CPU61.NET_RX
91032 ± 2% +7.4% 97788 ± 4% softirqs.CPU69.TIMER
87122553 +12.6% 98074976 softirqs.NET_RX
38.87 +1.4% 39.42 perf-stat.i.MPKI
6.952e+09 +5.4% 7.33e+09 perf-stat.i.branch-instructions
1.75 +0.1 1.82 perf-stat.i.branch-miss-rate%
1.198e+08 +10.2% 1.321e+08 perf-stat.i.branch-misses
1.338e+09 +9.1% 1.46e+09 perf-stat.i.cache-references
1155221 +12.5% 1299713 perf-stat.i.context-switches
2.27 -4.8% 2.16 perf-stat.i.cpi
7.792e+10 +2.4% 7.977e+10 perf-stat.i.cpu-cycles
200.49 ± 4% -10.8% 178.90 ± 6% perf-stat.i.cpu-migrations
0.04 +0.0 0.04 perf-stat.i.dTLB-load-miss-rate%
4155421 +12.5% 4676024 perf-stat.i.dTLB-load-misses
9.802e+09 +7.7% 1.056e+10 perf-stat.i.dTLB-loads
5.822e+09 +9.4% 6.367e+09 perf-stat.i.dTLB-stores
18611263 +12.6% 20957731 perf-stat.i.iTLB-loads
3.425e+10 +7.6% 3.685e+10 perf-stat.i.instructions
0.44 +5.1% 0.46 perf-stat.i.ipc
39.05 +1.4% 39.61 perf-stat.overall.MPKI
1.72 +0.1 1.80 perf-stat.overall.branch-miss-rate%
2.27 -4.8% 2.16 perf-stat.overall.cpi
0.04 +0.0 0.04 perf-stat.overall.dTLB-load-miss-rate%
0.44 +5.1% 0.46 perf-stat.overall.ipc
16933 -4.5% 16169 perf-stat.overall.path-length
6.928e+09 +5.5% 7.306e+09 perf-stat.ps.branch-instructions
1.194e+08 +10.2% 1.316e+08 perf-stat.ps.branch-misses
1.333e+09 +9.1% 1.455e+09 perf-stat.ps.cache-references
1151345 +12.5% 1295307 perf-stat.ps.context-switches
7.766e+10 +2.4% 7.95e+10 perf-stat.ps.cpu-cycles
199.81 ± 4% -10.7% 178.45 ± 6% perf-stat.ps.cpu-migrations
4141514 +12.5% 4660268 perf-stat.ps.dTLB-load-misses
9.769e+09 +7.7% 1.052e+10 perf-stat.ps.dTLB-loads
5.803e+09 +9.4% 6.345e+09 perf-stat.ps.dTLB-stores
18548528 +12.6% 20888960 perf-stat.ps.iTLB-loads
3.414e+10 +7.6% 3.673e+10 perf-stat.ps.instructions
1.033e+13 +7.5% 1.11e+13 perf-stat.total.instructions
9608 ± 14% -53.4% 4482 ± 73% interrupts.CPU19.RES:Rescheduling_interrupts
2741 ± 13% +104.8% 5616 ± 47% interrupts.CPU23.NMI:Non-maskable_interrupts
2741 ± 13% +104.8% 5616 ± 47% interrupts.CPU23.PMI:Performance_monitoring_interrupts
2376 ± 17% +168.8% 6389 ± 36% interrupts.CPU26.NMI:Non-maskable_interrupts
2376 ± 17% +168.8% 6389 ± 36% interrupts.CPU26.PMI:Performance_monitoring_interrupts
3004 ± 8% +120.5% 6625 ± 31% interrupts.CPU27.NMI:Non-maskable_interrupts
3004 ± 8% +120.5% 6625 ± 31% interrupts.CPU27.PMI:Performance_monitoring_interrupts
3947 ± 53% +57.7% 6227 ± 33% interrupts.CPU28.NMI:Non-maskable_interrupts
3947 ± 53% +57.7% 6227 ± 33% interrupts.CPU28.PMI:Performance_monitoring_interrupts
3651 ± 35% +81.2% 6617 ± 18% interrupts.CPU29.NMI:Non-maskable_interrupts
3651 ± 35% +81.2% 6617 ± 18% interrupts.CPU29.PMI:Performance_monitoring_interrupts
3268 ± 48% +124.1% 7322 ± 20% interrupts.CPU30.RES:Rescheduling_interrupts
3260 ± 30% +93.0% 6292 ± 26% interrupts.CPU33.NMI:Non-maskable_interrupts
3260 ± 30% +93.0% 6292 ± 26% interrupts.CPU33.PMI:Performance_monitoring_interrupts
863.25 ± 43% +541.7% 5539 ± 22% interrupts.CPU33.RES:Rescheduling_interrupts
6403 ± 25% -48.2% 3314 ± 23% interrupts.CPU4.RES:Rescheduling_interrupts
3194 ± 18% +112.1% 6773 ± 26% interrupts.CPU42.NMI:Non-maskable_interrupts
3194 ± 18% +112.1% 6773 ± 26% interrupts.CPU42.PMI:Performance_monitoring_interrupts
1707 ± 80% +190.6% 4961 ± 27% interrupts.CPU46.RES:Rescheduling_interrupts
2735 ± 56% +142.0% 6618 ± 35% interrupts.CPU5.NMI:Non-maskable_interrupts
2735 ± 56% +142.0% 6618 ± 35% interrupts.CPU5.PMI:Performance_monitoring_interrupts
5957 ± 37% -53.3% 2781 ± 50% interrupts.CPU50.NMI:Non-maskable_interrupts
5957 ± 37% -53.3% 2781 ± 50% interrupts.CPU50.PMI:Performance_monitoring_interrupts
2546 ± 30% +136.4% 6021 ± 43% interrupts.CPU51.NMI:Non-maskable_interrupts
2546 ± 30% +136.4% 6021 ± 43% interrupts.CPU51.PMI:Performance_monitoring_interrupts
1442 ± 86% +190.2% 4185 ± 37% interrupts.CPU52.RES:Rescheduling_interrupts
4038 ± 51% +144.5% 9875 ± 13% interrupts.CPU54.RES:Rescheduling_interrupts
2703 ± 38% +170.3% 7307 ± 30% interrupts.CPU55.NMI:Non-maskable_interrupts
2703 ± 38% +170.3% 7307 ± 30% interrupts.CPU55.PMI:Performance_monitoring_interrupts
2287 ± 53% +280.0% 8689 ± 38% interrupts.CPU57.RES:Rescheduling_interrupts
4992 ± 42% -37.2% 3135 ± 9% interrupts.CPU58.NMI:Non-maskable_interrupts
4992 ± 42% -37.2% 3135 ± 9% interrupts.CPU58.PMI:Performance_monitoring_interrupts
2499 ±135% +394.5% 12359 ± 74% interrupts.CPU59.RES:Rescheduling_interrupts
214.25 ± 35% +1479.1% 3383 ± 40% interrupts.CPU63.RES:Rescheduling_interrupts
6186 ± 40% -30.1% 4322 ± 54% interrupts.CPU68.NMI:Non-maskable_interrupts
6186 ± 40% -30.1% 4322 ± 54% interrupts.CPU68.PMI:Performance_monitoring_interrupts
305911 ± 4% +15.6% 353552 interrupts.RES:Rescheduling_interrupts
10.03 ± 5% -1.7 8.29 ± 6% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.kmalloc_large_node.__kmalloc_node_track_caller.__kmalloc_reserve
10.35 ± 5% -1.7 8.64 ± 6% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.kmalloc_large_node.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb
7.60 ± 4% -1.7 5.94 ± 6% perf-profile.calltrace.cycles-pp.free_one_page.__free_pages_ok.consume_skb.sctp_chunk_put.sctp_outq_sack
7.81 ± 4% -1.6 6.20 ± 6% perf-profile.calltrace.cycles-pp.__free_pages_ok.consume_skb.sctp_chunk_put.sctp_outq_sack.sctp_cmd_interpreter
7.61 ± 5% -1.6 6.00 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.kmalloc_large_node
8.29 ± 5% -1.6 6.70 ± 6% perf-profile.calltrace.cycles-pp.consume_skb.sctp_chunk_put.sctp_outq_sack.sctp_cmd_interpreter.sctp_do_sm
8.54 ± 5% -1.6 6.99 ± 6% perf-profile.calltrace.cycles-pp.sctp_chunk_put.sctp_outq_sack.sctp_cmd_interpreter.sctp_do_sm.sctp_assoc_bh_rcv
9.52 ± 5% -1.5 7.99 ± 6% perf-profile.calltrace.cycles-pp.sctp_outq_sack.sctp_cmd_interpreter.sctp_do_sm.sctp_assoc_bh_rcv.sctp_backlog_rcv
7.98 ± 5% -1.5 6.45 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.kmalloc_large_node.__kmalloc_node_track_caller
6.71 ± 4% -1.5 5.26 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_one_page.__free_pages_ok.consume_skb
7.12 ± 4% -1.4 5.73 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_one_page.__free_pages_ok.consume_skb.sctp_chunk_put
8.10 ± 5% -1.3 6.78 ± 6% perf-profile.calltrace.cycles-pp.kmalloc_large_node.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb._sctp_make_chunk
8.14 ± 5% -1.3 6.82 ± 6% perf-profile.calltrace.cycles-pp.__kmalloc_reserve.__alloc_skb._sctp_make_chunk.sctp_make_datafrag_empty.sctp_datamsg_from_user
8.12 ± 5% -1.3 6.80 ± 6% perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb._sctp_make_chunk.sctp_make_datafrag_empty
8.78 ± 5% -1.3 7.47 ± 6% perf-profile.calltrace.cycles-pp.__alloc_skb._sctp_make_chunk.sctp_make_datafrag_empty.sctp_datamsg_from_user.sctp_sendmsg_to_asoc
2.69 ± 4% -0.4 2.24 ± 8% perf-profile.calltrace.cycles-pp.free_one_page.__free_pages_ok.kfree_skb.sctp_recvmsg.inet_recvmsg
2.82 ± 4% -0.4 2.39 ± 8% perf-profile.calltrace.cycles-pp.__free_pages_ok.kfree_skb.sctp_recvmsg.inet_recvmsg.____sys_recvmsg
2.37 ± 3% -0.4 1.94 ± 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_one_page.__free_pages_ok.kfree_skb
2.52 ± 4% -0.4 2.13 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_one_page.__free_pages_ok.kfree_skb.sctp_recvmsg
2.50 ± 4% -0.4 2.12 ± 6% perf-profile.calltrace.cycles-pp.__alloc_skb.sctp_packet_transmit.sctp_outq_flush.sctp_cmd_interpreter.sctp_do_sm
2.37 ± 5% -0.4 2.00 ± 6% perf-profile.calltrace.cycles-pp.kmalloc_large_node.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.sctp_packet_transmit
2.38 ± 4% -0.4 2.01 ± 6% perf-profile.calltrace.cycles-pp.__kmalloc_reserve.__alloc_skb.sctp_packet_transmit.sctp_outq_flush.sctp_cmd_interpreter
2.37 ± 4% -0.4 2.00 ± 6% perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.sctp_packet_transmit.sctp_outq_flush
0.00 +0.6 0.60 ± 7% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.sctp_skb_recv_datagram
4.92 ± 6% +0.8 5.76 ± 5% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin._copy_from_iter_full.sctp_user_addto_chunk.sctp_datamsg_from_user
4.95 ± 6% +0.8 5.80 ± 5% perf-profile.calltrace.cycles-pp.copyin._copy_from_iter_full.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg_to_asoc
0.00 +0.9 0.85 ± 8% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
5.05 ± 6% +0.9 5.91 ± 5% perf-profile.calltrace.cycles-pp._copy_from_iter_full.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg_to_asoc.sctp_sendmsg
5.23 ± 6% +0.9 6.11 ± 5% perf-profile.calltrace.cycles-pp.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg_to_asoc.sctp_sendmsg.sock_sendmsg
0.00 +1.1 1.11 ± 8% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.sctp_skb_recv_datagram.sctp_recvmsg
17.30 ± 4% -3.5 13.83 ± 6% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
10.29 ± 4% -2.1 8.19 ± 6% perf-profile.children.cycles-pp.free_one_page
10.65 ± 4% -2.0 8.61 ± 6% perf-profile.children.cycles-pp.__free_pages_ok
10.04 ± 5% -1.7 8.31 ± 6% perf-profile.children.cycles-pp.get_page_from_freelist
9.97 ± 4% -1.7 8.26 ± 6% perf-profile.children.cycles-pp._raw_spin_lock
10.37 ± 5% -1.7 8.65 ± 6% perf-profile.children.cycles-pp.__alloc_pages_nodemask
10.47 ± 5% -1.7 8.78 ± 6% perf-profile.children.cycles-pp.kmalloc_large_node
10.53 ± 5% -1.7 8.84 ± 6% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
10.55 ± 5% -1.7 8.87 ± 6% perf-profile.children.cycles-pp.__kmalloc_reserve
11.39 ± 5% -1.7 9.73 ± 6% perf-profile.children.cycles-pp.__alloc_skb
8.56 ± 4% -1.6 7.00 ± 6% perf-profile.children.cycles-pp.consume_skb
9.53 ± 5% -1.5 7.99 ± 6% perf-profile.children.cycles-pp.sctp_outq_sack
9.17 ± 5% -1.5 7.68 ± 6% perf-profile.children.cycles-pp.sctp_chunk_put
8.42 ± 5% -1.5 6.94 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.15 ± 7% -0.0 0.12 ± 6% perf-profile.children.cycles-pp.sctp_auth_send_cid
0.11 ± 6% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.sctp_outq_tail
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.check_stack_object
0.09 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.sctp_chunk_assign_ssn
0.09 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.update_ts_time_stats
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.sctp_packet_append_chunk
0.09 ± 7% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.nr_iowait_cpu
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.finish_task_switch
0.07 ± 7% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.sctp_v4_from_skb
0.14 ± 5% +0.0 0.16 ± 5% perf-profile.children.cycles-pp.__ip_local_out
0.15 ± 4% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.sctp_sock_rfree
0.10 ± 4% +0.0 0.12 ± 8% perf-profile.children.cycles-pp.sctp_chunk_hold
0.08 ± 11% +0.0 0.10 ± 13% perf-profile.children.cycles-pp.__genradix_ptr
0.14 ± 7% +0.0 0.16 ± 5% perf-profile.children.cycles-pp.mod_node_page_state
0.13 ± 8% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.ip_send_check
0.04 ± 57% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.14 ± 3% +0.0 0.17 ± 7% perf-profile.children.cycles-pp.__switch_to
0.07 ± 13% +0.0 0.09 ± 8% perf-profile.children.cycles-pp.check_preempt_curr
0.17 ± 4% +0.0 0.20 ± 7% perf-profile.children.cycles-pp.sctp_ulpevent_receive_data
0.27 ± 3% +0.0 0.30 ± 6% perf-profile.children.cycles-pp.sctp_packet_transmit_chunk
0.29 ± 3% +0.0 0.33 ± 5% perf-profile.children.cycles-pp.set_next_entity
0.26 ± 9% +0.0 0.30 ± 5% perf-profile.children.cycles-pp.__fget_light
0.23 ± 4% +0.0 0.28 ± 7% perf-profile.children.cycles-pp.skb_release_data
0.07 ± 15% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.reweight_entity
0.39 ± 3% +0.1 0.45 ± 5% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.26 ± 8% +0.1 0.32 ± 7% perf-profile.children.cycles-pp.sctp_addto_chunk
0.32 ± 5% +0.1 0.38 ± 8% perf-profile.children.cycles-pp.sctp_chunkify
0.65 ± 5% +0.1 0.73 ± 4% perf-profile.children.cycles-pp.__check_object_size
4.96 ± 6% +0.8 5.80 ± 5% perf-profile.children.cycles-pp.copyin
5.05 ± 6% +0.9 5.91 ± 5% perf-profile.children.cycles-pp._copy_from_iter_full
5.23 ± 6% +0.9 6.12 ± 5% perf-profile.children.cycles-pp.sctp_user_addto_chunk
8.27 ± 6% +1.3 9.56 ± 5% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.00 +2.0 1.99 ± 8% perf-profile.children.cycles-pp.__schedule
17.30 ± 4% -3.5 13.83 ± 6% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.sock_kmalloc
0.09 ± 7% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.nr_iowait_cpu
0.06 ± 11% +0.0 0.08 ± 10% perf-profile.self.cycles-pp.sctp_v4_from_skb
0.15 ± 4% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.sctp_sock_rfree
0.24 ± 5% +0.0 0.26 ± 3% perf-profile.self.cycles-pp.sctp_sendmsg
0.15 ± 5% +0.0 0.18 ± 4% perf-profile.self.cycles-pp.update_load_avg
0.09 ± 7% +0.0 0.11 ± 9% perf-profile.self.cycles-pp.sctp_ulpevent_receive_data
0.14 ± 6% +0.0 0.16 ± 8% perf-profile.self.cycles-pp.__switch_to
0.04 ± 58% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.sctp_chunkify
0.07 ± 13% +0.0 0.09 ± 11% perf-profile.self.cycles-pp.__genradix_ptr
0.12 ± 5% +0.0 0.15 ± 11% perf-profile.self.cycles-pp.sctp_ulpq_tail_event
0.11 ± 13% +0.0 0.14 ± 6% perf-profile.self.cycles-pp.sctp_ulpevent_make_rcvmsg
0.19 ± 3% +0.0 0.22 ± 6% perf-profile.self.cycles-pp.prep_new_page
0.07 ± 6% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.sctp_addto_chunk
0.15 ± 9% +0.0 0.18 ± 8% perf-profile.self.cycles-pp.kmem_cache_alloc_node
0.23 ± 10% +0.0 0.27 ± 3% perf-profile.self.cycles-pp.sctp_skb_recv_datagram
0.34 ± 4% +0.0 0.38 ± 5% perf-profile.self.cycles-pp.__check_object_size
0.25 ± 10% +0.0 0.30 ± 5% perf-profile.self.cycles-pp.__fget_light
0.07 ± 12% +0.1 0.12 ± 5% perf-profile.self.cycles-pp.reweight_entity
0.39 ± 3% +0.1 0.44 ± 5% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.check_preempt_curr
0.26 ± 5% +0.1 0.32 ± 5% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.06 ± 7% +0.1 0.12 ± 5% perf-profile.self.cycles-pp.dequeue_task_fair
0.79 ± 6% +0.1 0.89 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.77 ± 7% +0.1 0.88 ± 6% perf-profile.self.cycles-pp.sctp_datamsg_from_user
0.14 ± 10% +0.1 0.28 ± 7% perf-profile.self.cycles-pp.sctp_sched_fcfs_dequeue
0.89 ± 6% +0.2 1.06 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.4 0.44 ± 9% perf-profile.self.cycles-pp.__schedule
0.76 ± 7% +0.5 1.26 ± 5% perf-profile.self.cycles-pp.get_page_from_freelist
8.22 ± 6% +1.3 9.50 ± 5% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
netperf.Throughput_Mbps
12000 +-------------------------------------------------------------------+
| |
10000 |-+O O O O O O O O O O O O O O O O O O O O O O |
| +..+.+..+..+.+..+..+.+..+.+..+..+.+..+.+..+..+.+..+..+.+..+.+..|
| : |
8000 |:+ : |
|: : |
6000 |:+ : |
|: : |
4000 |-: : |
| : : |
| : : |
2000 |-:: |
| : |
0 +-------------------------------------------------------------------+
netperf.Throughput_total_Mbps
200000 +------------------------------------------------------------------+
180000 |-+O O O O O O O O O O O O O O O O O O O O O O |
| +..+.+..+.+..+..+.+..+.+..+..+.+..+.+..+.+..+..+.+..+.+..+.+..|
160000 |-+ : |
140000 |:+ : |
|: : |
120000 |:+ : |
100000 |:+ : |
80000 |-: : |
| : : |
60000 |-: : |
40000 |-: : |
| : |
20000 |-+: |
0 +------------------------------------------------------------------+
netperf.workload
7e+08 +-------------------------------------------------------------------+
| |
6e+08 |-+ +..+.+..+..+.+..+..+.+..+.+..+..+.+..+.+..+..+.+..+..+.+..+.+..|
| : |
5e+08 |:+ : |
|: : |
4e+08 |:+ : |
|: : |
3e+08 |-: : |
| : : |
2e+08 |-: : |
| : : |
1e+08 |-+: |
| : |
0 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/100%/debian-x86_64-20191114.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
commit:
faf6f40bfb ("list: Remove unnecessary WRITE_ONCE() from hlist_bl_add_before()")
2a8b6290ee ("kernel-hacking: Make DEBUG_{LIST,PLIST,SG,NOTIFIERS} non-debug options")
faf6f40bfbf9e7b6 2a8b6290eeb099d80ddfd958368
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:2 -50% :3 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:2 100% 2:3 kmsg.debugfs:Directory'loop#'with_parent'block'already_present
0:2 6% 0:3 perf-profile.children.cycles-pp.error_entry
%stddev %change %stddev
\ | \
10033115 +10.8% 11120407 stress-ng.brk.ops_per_sec
6509 ± 12% -63.2% 2396 ±135% stress-ng.dentry.ops
6468 ± 12% -63.9% 2336 ±139% stress-ng.dentry.ops_per_sec
13656 -31.4% 9368 ± 31% stress-ng.io.ops
13635 -32.3% 9228 ± 32% stress-ng.io.ops_per_sec
238.50 ± 3% -6.4% 223.33 stress-ng.klog.ops
214.48 ± 4% -8.6% 196.03 stress-ng.klog.ops_per_sec
44.06 ± 11% +36.6% 60.19 ± 16% stress-ng.loop.ops_per_sec
8452 ± 11% -14.7% 7210 ± 9% stress-ng.resources.ops
3845 ± 11% -27.8% 2778 ± 11% stress-ng.resources.ops_per_sec
18641 +10.3% 20554 stress-ng.shm.ops
18631 +10.4% 20571 stress-ng.shm.ops_per_sec
901931 +16.6% 1051430 ± 5% stress-ng.sigsuspend.ops
900529 +16.6% 1049947 ± 5% stress-ng.sigsuspend.ops_per_sec
6222 ± 8% -19.1% 5036 ± 6% stress-ng.sysfs.ops
6222 ± 8% -19.1% 5035 ± 6% stress-ng.sysfs.ops_per_sec
8990 -30.0% 6295 ± 33% stress-ng.sysinfo.ops
8988 -30.1% 6278 ± 33% stress-ng.sysinfo.ops_per_sec
1.747e+08 ± 2% +4.3% 1.822e+08 stress-ng.time.minor_page_faults
40254095 +3.5% 41651588 stress-ng.time.voluntary_context_switches
6730 ± 8% -6.9% 6265 ± 11% vmstat.io.bo
62341 ± 5% +21.1% 75526 ± 10% numa-numastat.node0.other_node
84869 ± 4% -17.6% 69905 ± 11% numa-numastat.node1.other_node
47710561 ± 2% -6.3% 44680987 ± 2% meminfo.DirectMap2M
731.50 +80.5% 1320 ± 29% meminfo.Inactive(file)
122763 ± 2% -42.8% 70277 meminfo.Percpu
8.48 ± 2% +125.7% 19.15 ±113% iostat.sda.r_await.max
670.32 ± 5% -40.1% 401.69 ± 72% iostat.sda.w/s
6869 ± 8% -37.0% 4330 ± 72% iostat.sda.wkB/s
37.13 -35.0% 24.12 ± 70% iostat.sda.wrqm/s
1.384e+09 ± 52% +194.7% 4.077e+09 ± 16% cpuidle.C1E.time
10146729 ± 19% +52.3% 15452880 ± 8% cpuidle.C1E.usage
2.129e+09 ± 29% -99.6% 8058777 ± 6% cpuidle.C6.time
2875308 ± 2% -98.8% 34435 ± 6% cpuidle.C6.usage
11096262 ± 32% -33.8% 7341252 ± 2% cpuidle.POLL.time
599732 ± 6% -21.4% 471655 ± 23% numa-meminfo.node1.AnonHugePages
123966 ± 4% -5.6% 116986 ± 3% numa-meminfo.node1.KReclaimable
463254 ± 15% -24.8% 348494 ± 35% numa-meminfo.node1.Mapped
11512 ± 4% +31.3% 15111 ± 9% numa-meminfo.node1.Mlocked
123966 ± 4% -5.6% 116986 ± 3% numa-meminfo.node1.SReclaimable
69.50 ± 78% +132.6% 161.67 ± 17% numa-vmstat.node0.nr_inactive_file
337.00 ± 10% -38.1% 208.67 ± 15% numa-vmstat.node0.nr_isolated_anon
69.50 ± 78% +132.6% 161.67 ± 17% numa-vmstat.node0.nr_zone_inactive_file
120755 ± 16% -27.9% 87089 ± 36% numa-vmstat.node1.nr_mapped
30907 ± 4% -5.3% 29263 ± 3% numa-vmstat.node1.nr_slab_reclaimable
95627 ± 3% -14.6% 81620 ± 10% numa-vmstat.node1.numa_other
662.15 -20.5% 526.25 ± 14% sched_debug.cfs_rq:/.exec_clock.stddev
24123 ± 27% -36.4% 15352 ± 35% sched_debug.cfs_rq:/.load.avg
89243 ± 44% -50.5% 44141 ± 71% sched_debug.cfs_rq:/.load.stddev
68.25 ± 13% +46.8% 100.17 ± 13% sched_debug.cfs_rq:/.nr_spread_over.min
51.40 ± 15% -38.5% 31.63 ± 19% sched_debug.cfs_rq:/.nr_spread_over.stddev
182.25 ± 49% -48.8% 93.36 ± 89% sched_debug.cfs_rq:/.runnable_load_avg.max
29.27 ± 61% -59.2% 11.93 ± 66% sched_debug.cfs_rq:/.runnable_load_avg.stddev
23160 ± 30% -37.6% 14454 ± 36% sched_debug.cfs_rq:/.runnable_weight.avg
88811 ± 44% -51.1% 43457 ± 74% sched_debug.cfs_rq:/.runnable_weight.stddev
729.50 ± 9% +35.9% 991.58 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.max
160595 ± 7% -76.0% 38463 ± 89% sched_debug.cpu.avg_idle.min
173509 ± 6% +33.1% 230964 ± 15% sched_debug.cpu.avg_idle.stddev
147.12 ± 15% +33.1% 195.86 ± 2% sched_debug.cpu.nr_uninterruptible.max
145247 ± 6% -14.0% 124844 ± 12% softirqs.CPU22.NET_RX
134798 +8.1% 145752 ± 4% softirqs.CPU4.NET_RX
16916 ± 3% -5.2% 16039 ± 3% softirqs.CPU4.SCHED
84004 ± 17% -16.9% 69839 ± 3% softirqs.CPU5.TIMER
115275 ± 8% +19.9% 138177 softirqs.CPU53.NET_RX
125185 ± 8% +19.3% 149299 ± 6% softirqs.CPU54.NET_RX
111020 ± 8% +31.3% 145821 ± 3% softirqs.CPU63.NET_RX
134765 ± 4% +19.7% 161351 ± 2% softirqs.CPU65.NET_RX
124865 -17.6% 102848 ± 13% softirqs.CPU79.NET_RX
127048 ± 4% -12.2% 111544 ± 6% softirqs.CPU83.NET_RX
133678 -6.7% 124678 ± 3% softirqs.CPU84.NET_RX
134026 ± 7% -6.6% 125175 ± 4% softirqs.CPU88.NET_RX
125834 ± 4% -11.0% 112006 ± 9% softirqs.CPU89.NET_RX
740548 ± 3% -4.6% 706648 ± 6% perf-stat.i.context-switches
5.13 -1.4% 5.06 perf-stat.i.cpi
0.09 ± 6% -0.0 0.07 ± 3% perf-stat.i.dTLB-load-miss-rate%
10384399 ± 3% -5.2% 9848079 ± 4% perf-stat.i.dTLB-load-misses
81.08 -1.9 79.21 perf-stat.i.iTLB-load-miss-rate%
8226856 ± 4% -12.2% 7223576 ± 6% perf-stat.i.iTLB-loads
5906 -4.1% 5664 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.37 +6.7% 0.40 ± 4% perf-stat.i.ipc
14862073 -4.3% 14221330 ± 4% perf-stat.i.node-load-misses
5.11 +1.1% 5.16 perf-stat.overall.MPKI
2.79 -0.7% 2.77 perf-stat.overall.cpi
1767 -1.3% 1744 perf-stat.overall.cycles-between-cache-misses
0.05 -0.0 0.05 perf-stat.overall.dTLB-load-miss-rate%
0.03 +0.0 0.04 perf-stat.overall.dTLB-store-miss-rate%
86.14 +1.9 88.00 perf-stat.overall.iTLB-load-miss-rate%
1448 -3.0% 1404 perf-stat.overall.instructions-per-iTLB-miss
10359389 -5.6% 9783872 ± 3% perf-stat.ps.dTLB-load-misses
8303344 -14.6% 7089134 ± 5% perf-stat.ps.iTLB-loads
14876992 -4.9% 14146480 ± 3% perf-stat.ps.node-load-misses
19117936 -4.6% 18230885 ± 3% perf-stat.ps.node-loads
283072 ± 6% +78.2% 504461 ± 56% proc-vmstat.nr_inactive_anon
182.50 +81.0% 330.33 ± 29% proc-vmstat.nr_inactive_file
60404 -1.2% 59673 proc-vmstat.nr_slab_reclaimable
131236 -2.6% 127802 ± 2% proc-vmstat.nr_slab_unreclaimable
283072 ± 6% +78.2% 504461 ± 56% proc-vmstat.nr_zone_inactive_anon
182.50 +81.0% 330.33 ± 29% proc-vmstat.nr_zone_inactive_file
5.307e+08 +4.9% 5.566e+08 proc-vmstat.numa_hit
5.306e+08 +4.9% 5.564e+08 proc-vmstat.numa_local
6.615e+08 +3.9% 6.87e+08 proc-vmstat.pgalloc_normal
1.954e+08 ± 2% +4.3% 2.039e+08 proc-vmstat.pgfault
6.612e+08 +3.9% 6.868e+08 proc-vmstat.pgfree
408265 +6.8% 435864 proc-vmstat.pglazyfree
3497 +6.8% 3734 proc-vmstat.thp_deferred_split_page
153269 -2.8% 149041 proc-vmstat.thp_fault_alloc
3497 +6.8% 3734 proc-vmstat.thp_split_page
3497 +6.8% 3734 proc-vmstat.thp_split_pmd
24768773 +7.8% 26699971 proc-vmstat.unevictable_pgs_culled
2298700 ± 2% +8.0% 2482063 ± 2% proc-vmstat.unevictable_pgs_mlocked
2297941 ± 2% +8.0% 2481826 ± 2% proc-vmstat.unevictable_pgs_munlocked
2297808 ± 2% +8.0% 2481690 ± 2% proc-vmstat.unevictable_pgs_rescued
25861 -35.3% 16726 ± 45% slabinfo.Acpi-State.active_objs
25945 -35.4% 16766 ± 45% slabinfo.Acpi-State.num_objs
10782 ± 8% -38.1% 6676 ± 7% slabinfo.RAW.active_objs
345.00 ± 8% -37.2% 216.67 ± 7% slabinfo.RAW.active_slabs
11057 ± 8% -37.2% 6948 ± 7% slabinfo.RAW.num_objs
345.00 ± 8% -37.2% 216.67 ± 7% slabinfo.RAW.num_slabs
6222 ± 3% -34.3% 4086 ± 3% slabinfo.RAWv6.active_objs
6427 ± 2% -33.5% 4273 ± 2% slabinfo.RAWv6.num_objs
2689 ± 5% -25.0% 2017 ± 38% slabinfo.TCP.active_objs
2730 ± 5% -24.9% 2049 ± 38% slabinfo.TCP.num_objs
12200 ± 7% -28.8% 8691 ± 29% slabinfo.UNIX.active_objs
393.00 ± 7% -28.5% 281.00 ± 29% slabinfo.UNIX.active_slabs
12590 ± 7% -28.4% 9008 ± 29% slabinfo.UNIX.num_objs
393.00 ± 7% -28.5% 281.00 ± 29% slabinfo.UNIX.num_slabs
5316 ± 5% +19.1% 6330 ± 9% slabinfo.ccid2_hc_tx_sock.active_objs
43534 ± 3% -12.5% 38107 ± 9% slabinfo.cred_jar.active_objs
1048 ± 3% -12.4% 918.00 ± 10% slabinfo.cred_jar.active_slabs
44061 ± 3% -12.4% 38580 ± 10% slabinfo.cred_jar.num_objs
1048 ± 3% -12.4% 918.00 ± 10% slabinfo.cred_jar.num_slabs
14076 ± 7% -44.9% 7754 ± 20% slabinfo.eventpoll_pwq.active_objs
251.00 ± 7% -45.0% 138.00 ± 20% slabinfo.eventpoll_pwq.active_slabs
14076 ± 7% -44.9% 7754 ± 20% slabinfo.eventpoll_pwq.num_objs
251.00 ± 7% -45.0% 138.00 ± 20% slabinfo.eventpoll_pwq.num_slabs
12445 ± 6% -11.9% 10970 ± 9% slabinfo.fsnotify_mark_connector.active_objs
12445 ± 6% -11.9% 10970 ± 9% slabinfo.fsnotify_mark_connector.num_objs
102916 ± 3% -11.8% 90744 ± 7% slabinfo.ftrace_event_field.active_objs
1211 ± 3% -11.8% 1068 ± 7% slabinfo.ftrace_event_field.active_slabs
102988 ± 3% -11.8% 90837 ± 7% slabinfo.ftrace_event_field.num_objs
1211 ± 3% -11.8% 1068 ± 7% slabinfo.ftrace_event_field.num_slabs
189531 ± 2% -24.0% 144058 ± 26% slabinfo.kmalloc-16.active_objs
194789 -21.0% 153852 ± 23% slabinfo.kmalloc-16.num_objs
21747 ± 2% -18.2% 17792 ± 22% slabinfo.kmalloc-192.active_objs
625.00 ± 6% -16.9% 519.33 ± 21% slabinfo.kmalloc-256.active_slabs
20005 ± 6% -16.8% 16635 ± 21% slabinfo.kmalloc-256.num_objs
625.00 ± 6% -16.9% 519.33 ± 21% slabinfo.kmalloc-256.num_slabs
11318 ± 3% -10.4% 10136 ± 4% slabinfo.kmalloc-2k.num_objs
2730 ± 5% -11.8% 2409 ± 2% slabinfo.kmalloc-4k.active_objs
404.00 ± 9% -19.5% 325.33 ± 2% slabinfo.kmalloc-4k.active_slabs
3235 ± 9% -19.5% 2605 ± 2% slabinfo.kmalloc-4k.num_objs
404.00 ± 9% -19.5% 325.33 ± 2% slabinfo.kmalloc-4k.num_slabs
31144 ± 6% -19.6% 25030 ± 24% slabinfo.kmalloc-512.active_objs
1015 ± 6% -19.1% 821.33 ± 24% slabinfo.kmalloc-512.active_slabs
32487 ± 6% -19.0% 26302 ± 24% slabinfo.kmalloc-512.num_objs
1015 ± 6% -19.1% 821.33 ± 24% slabinfo.kmalloc-512.num_slabs
201643 ± 2% -30.4% 140266 ± 35% slabinfo.kmalloc-8.active_objs
201783 ± 2% -30.5% 140266 ± 35% slabinfo.kmalloc-8.num_objs
17959 ± 4% -9.8% 16191 ± 6% slabinfo.kmalloc-96.num_objs
68411 ± 18% -8.0% 62937 ± 17% slabinfo.kmalloc-rcl-64.active_objs
1093 ± 18% -8.0% 1005 ± 17% slabinfo.kmalloc-rcl-64.active_slabs
69989 ± 18% -8.0% 64378 ± 17% slabinfo.kmalloc-rcl-64.num_objs
1093 ± 18% -8.0% 1005 ± 17% slabinfo.kmalloc-rcl-64.num_slabs
29342 ± 2% -9.2% 26640 ± 9% slabinfo.kmalloc-rcl-96.active_objs
29342 ± 2% -9.2% 26640 ± 9% slabinfo.kmalloc-rcl-96.num_objs
676.00 ± 64% -51.2% 330.00 ± 45% slabinfo.nfs_commit_data.active_objs
676.00 ± 64% -51.2% 330.00 ± 45% slabinfo.nfs_commit_data.num_objs
12088 ± 10% -18.9% 9809 ± 11% slabinfo.proc_dir_entry.active_objs
12606 ± 10% -18.2% 10306 ± 11% slabinfo.proc_dir_entry.num_objs
24949 ± 8% -13.0% 21700 ± 19% slabinfo.skbuff_head_cache.active_objs
784.00 ± 8% -12.8% 684.00 ± 19% slabinfo.skbuff_head_cache.active_slabs
25100 ± 8% -12.7% 21901 ± 19% slabinfo.skbuff_head_cache.num_objs
784.00 ± 8% -12.8% 684.00 ± 19% slabinfo.skbuff_head_cache.num_slabs
43024 ± 3% -19.1% 34786 ± 4% slabinfo.sock_inode_cache.active_objs
1122 ± 2% -18.9% 910.33 ± 3% slabinfo.sock_inode_cache.active_slabs
43794 ± 2% -18.9% 35518 ± 3% slabinfo.sock_inode_cache.num_objs
1122 ± 2% -18.9% 910.33 ± 3% slabinfo.sock_inode_cache.num_slabs
5489 ± 4% -20.2% 4378 ± 11% slabinfo.taskstats.active_objs
5489 ± 4% -20.2% 4378 ± 11% slabinfo.taskstats.num_objs
127682 ± 3% +14.6% 146296 ± 7% slabinfo.vm_area_struct.active_objs
3209 ± 3% +14.8% 3683 ± 7% slabinfo.vm_area_struct.active_slabs
128391 ± 3% +14.8% 147334 ± 7% slabinfo.vm_area_struct.num_objs
3209 ± 3% +14.8% 3683 ± 7% slabinfo.vm_area_struct.num_slabs
189.50 ± 13% -23.3% 145.33 ± 3% interrupts.100:PCI-MSI.31981633-edge.i40e-eth0-TxRx-64
247.00 ± 14% -55.7% 109.33 ± 35% interrupts.104:PCI-MSI.31981637-edge.i40e-eth0-TxRx-68
209.50 ± 10% -15.0% 178.00 ± 14% interrupts.105:PCI-MSI.31981638-edge.i40e-eth0-TxRx-69
281.00 -41.9% 163.33 ± 18% interrupts.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
239.50 ± 16% -22.5% 185.67 ± 18% interrupts.110:PCI-MSI.31981643-edge.i40e-eth0-TxRx-74
258.00 ± 18% -24.7% 194.33 ± 31% interrupts.111:PCI-MSI.31981644-edge.i40e-eth0-TxRx-75
266.50 ± 7% -9.1% 242.33 ± 13% interrupts.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
255.00 ± 9% -28.4% 182.67 ± 32% interrupts.113:PCI-MSI.31981646-edge.i40e-eth0-TxRx-77
237.50 ± 25% -32.5% 160.33 ± 46% interrupts.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
249.50 ± 2% -36.7% 158.00 ± 14% interrupts.117:PCI-MSI.31981650-edge.i40e-eth0-TxRx-81
246.50 ± 7% -38.3% 152.00 ± 37% interrupts.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
223.50 ± 21% -30.4% 155.67 ± 36% interrupts.123:PCI-MSI.31981656-edge.i40e-eth0-TxRx-87
208.50 ± 11% -26.9% 152.33 ± 39% interrupts.36:PCI-MSI.31981569-edge.i40e-eth0-TxRx-0
236.00 ± 18% -41.7% 137.67 ± 24% interrupts.39:PCI-MSI.31981572-edge.i40e-eth0-TxRx-3
223.00 ± 2% -44.1% 124.67 ± 20% interrupts.40:PCI-MSI.31981573-edge.i40e-eth0-TxRx-4
209.00 ± 21% -19.6% 168.00 ± 33% interrupts.42:PCI-MSI.31981575-edge.i40e-eth0-TxRx-6
245.00 ± 19% -39.9% 147.33 ± 30% interrupts.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
283.50 ± 5% -24.2% 215.00 ± 12% interrupts.44:PCI-MSI.31981577-edge.i40e-eth0-TxRx-8
234.00 ± 19% -19.1% 189.33 ± 29% interrupts.45:PCI-MSI.31981578-edge.i40e-eth0-TxRx-9
266.00 ± 16% -48.5% 137.00 ± 40% interrupts.46:PCI-MSI.31981579-edge.i40e-eth0-TxRx-10
228.50 ± 7% -21.5% 179.33 ± 4% interrupts.47:PCI-MSI.31981580-edge.i40e-eth0-TxRx-11
497.00 ± 48% -69.8% 150.00 ± 45% interrupts.50:PCI-MSI.31981583-edge.i40e-eth0-TxRx-14
238.00 ± 11% -31.5% 163.00 ± 29% interrupts.51:PCI-MSI.31981584-edge.i40e-eth0-TxRx-15
222.50 ± 24% -35.4% 143.67 ± 44% interrupts.52:PCI-MSI.31981585-edge.i40e-eth0-TxRx-16
216.50 ± 28% -27.6% 156.67 ± 34% interrupts.53:PCI-MSI.31981586-edge.i40e-eth0-TxRx-17
362.50 ± 34% -49.1% 184.67 ± 15% interrupts.58:PCI-MSI.31981591-edge.i40e-eth0-TxRx-22
339.50 ± 21% -54.7% 153.67 ± 23% interrupts.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
265.50 ± 14% -36.5% 168.67 ± 40% interrupts.66:PCI-MSI.31981599-edge.i40e-eth0-TxRx-30
334.50 ± 37% -42.3% 193.00 ± 38% interrupts.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
252.00 ± 9% -28.6% 180.00 ± 20% interrupts.71:PCI-MSI.31981604-edge.i40e-eth0-TxRx-35
220.50 ± 9% -44.2% 123.00 ± 45% interrupts.73:PCI-MSI.31981606-edge.i40e-eth0-TxRx-37
272.50 ± 11% -45.3% 149.00 ± 34% interrupts.74:PCI-MSI.31981607-edge.i40e-eth0-TxRx-38
224.00 ± 6% -38.5% 137.67 ± 52% interrupts.75:PCI-MSI.31981608-edge.i40e-eth0-TxRx-39
254.00 ± 3% -49.9% 127.33 ± 47% interrupts.76:PCI-MSI.31981609-edge.i40e-eth0-TxRx-40
252.50 ± 12% -17.1% 209.33 ± 21% interrupts.78:PCI-MSI.31981611-edge.i40e-eth0-TxRx-42
474.50 ± 31% -59.0% 194.33 ± 32% interrupts.80:PCI-MSI.31981613-edge.i40e-eth0-TxRx-44
188.00 ± 10% -22.0% 146.67 ± 33% interrupts.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
252.00 -24.6% 190.00 ± 15% interrupts.85:PCI-MSI.31981618-edge.i40e-eth0-TxRx-49
277.50 ± 18% -43.2% 157.67 ± 27% interrupts.87:PCI-MSI.31981620-edge.i40e-eth0-TxRx-51
288.00 ± 22% -51.2% 140.67 ± 40% interrupts.88:PCI-MSI.31981621-edge.i40e-eth0-TxRx-52
277.50 ± 10% -42.2% 160.33 ± 54% interrupts.91:PCI-MSI.31981624-edge.i40e-eth0-TxRx-55
233.00 ± 16% -34.2% 153.33 ± 45% interrupts.93:PCI-MSI.31981626-edge.i40e-eth0-TxRx-57
239.50 ± 4% +134.8% 562.33 ± 47% interrupts.95:PCI-MSI.31981628-edge.i40e-eth0-TxRx-59
252.50 ± 7% -27.8% 182.33 ± 31% interrupts.96:PCI-MSI.31981629-edge.i40e-eth0-TxRx-60
237.00 ± 8% -37.1% 149.00 ± 31% interrupts.97:PCI-MSI.31981630-edge.i40e-eth0-TxRx-61
5196641 -1.5% 5119586 interrupts.CAL:Function_call_interrupts
207.50 ± 11% -26.9% 151.67 ± 40% interrupts.CPU0.36:PCI-MSI.31981569-edge.i40e-eth0-TxRx-0
11528 ± 12% +43.7% 16565 ± 14% interrupts.CPU0.RES:Rescheduling_interrupts
338.00 ± 99% -42.8% 193.33 ±139% interrupts.CPU1.IWI:IRQ_work_interrupts
265.00 ± 16% -48.6% 136.33 ± 40% interrupts.CPU10.46:PCI-MSI.31981579-edge.i40e-eth0-TxRx-10
228.50 ± 7% -21.8% 178.67 ± 4% interrupts.CPU11.47:PCI-MSI.31981580-edge.i40e-eth0-TxRx-11
36249 ± 38% -47.3% 19095 ± 51% interrupts.CPU13.RES:Rescheduling_interrupts
496.50 ± 48% -69.9% 149.33 ± 45% interrupts.CPU14.50:PCI-MSI.31981583-edge.i40e-eth0-TxRx-14
49124 ± 65% -29.3% 34751 ± 88% interrupts.CPU14.RES:Rescheduling_interrupts
237.50 ± 11% -31.6% 162.33 ± 29% interrupts.CPU15.51:PCI-MSI.31981584-edge.i40e-eth0-TxRx-15
11385 ± 6% +130.0% 26186 ± 29% interrupts.CPU15.RES:Rescheduling_interrupts
10280 ± 9% -16.5% 8585 ± 5% interrupts.CPU15.TLB:TLB_shootdowns
222.00 ± 24% -35.6% 143.00 ± 44% interrupts.CPU16.52:PCI-MSI.31981585-edge.i40e-eth0-TxRx-16
216.00 ± 28% -27.8% 156.00 ± 34% interrupts.CPU17.53:PCI-MSI.31981586-edge.i40e-eth0-TxRx-17
11024 ± 4% -16.9% 9158 ± 9% interrupts.CPU18.TLB:TLB_shootdowns
11979 ± 6% -14.1% 10293 ± 2% interrupts.CPU2.TLB:TLB_shootdowns
15214 ± 3% -100.0% 0.00 interrupts.CPU20.316:PCI-MSI.376832-edge.ahci[0000:00:17.0]
34975 ± 32% -63.1% 12916 ± 17% interrupts.CPU21.RES:Rescheduling_interrupts
361.50 ± 34% -49.2% 183.67 ± 15% interrupts.CPU22.58:PCI-MSI.31981591-edge.i40e-eth0-TxRx-22
10851 ± 9% -15.1% 9210 ± 5% interrupts.CPU23.TLB:TLB_shootdowns
338.50 ± 22% -54.9% 152.67 ± 23% interrupts.CPU25.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
8723 ± 5% +60.3% 13986 ± 12% interrupts.CPU26.TLB:TLB_shootdowns
9499 ± 8% +35.0% 12828 ± 9% interrupts.CPU27.TLB:TLB_shootdowns
15026 ± 3% +108.6% 31337 ± 42% interrupts.CPU28.RES:Rescheduling_interrupts
236.00 ± 18% -41.7% 137.67 ± 24% interrupts.CPU3.39:PCI-MSI.31981572-edge.i40e-eth0-TxRx-3
265.00 ± 15% -36.6% 168.00 ± 40% interrupts.CPU30.66:PCI-MSI.31981599-edge.i40e-eth0-TxRx-30
9192 +47.3% 13540 ± 26% interrupts.CPU32.TLB:TLB_shootdowns
49344 ± 17% -36.8% 31186 ± 27% interrupts.CPU33.RES:Rescheduling_interrupts
334.00 ± 37% -42.4% 192.33 ± 38% interrupts.CPU34.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
8832 ± 6% +39.9% 12360 ± 10% interrupts.CPU34.TLB:TLB_shootdowns
251.00 ± 9% -28.6% 179.33 ± 21% interrupts.CPU35.71:PCI-MSI.31981604-edge.i40e-eth0-TxRx-35
54182 ± 4% -4.7% 51644 interrupts.CPU35.CAL:Function_call_interrupts
66589 ± 21% -31.2% 45839 ± 43% interrupts.CPU35.RES:Rescheduling_interrupts
220.00 ± 9% -44.4% 122.33 ± 45% interrupts.CPU37.73:PCI-MSI.31981606-edge.i40e-eth0-TxRx-37
9638 +36.3% 13140 ± 14% interrupts.CPU37.TLB:TLB_shootdowns
272.00 ± 12% -45.6% 148.00 ± 35% interrupts.CPU38.74:PCI-MSI.31981607-edge.i40e-eth0-TxRx-38
223.50 ± 6% -38.7% 137.00 ± 52% interrupts.CPU39.75:PCI-MSI.31981608-edge.i40e-eth0-TxRx-39
222.50 ± 2% -44.4% 123.67 ± 20% interrupts.CPU4.40:PCI-MSI.31981573-edge.i40e-eth0-TxRx-4
48771 ± 20% -54.5% 22178 ± 69% interrupts.CPU4.RES:Rescheduling_interrupts
253.50 ± 3% -50.0% 126.67 ± 48% interrupts.CPU40.76:PCI-MSI.31981609-edge.i40e-eth0-TxRx-40
20806 ± 44% +169.1% 55997 ± 27% interrupts.CPU41.RES:Rescheduling_interrupts
252.00 ± 11% -16.9% 209.33 ± 21% interrupts.CPU42.78:PCI-MSI.31981611-edge.i40e-eth0-TxRx-42
3579 ± 2% +104.6% 7322 interrupts.CPU43.NMI:Non-maskable_interrupts
3579 ± 2% +104.6% 7322 interrupts.CPU43.PMI:Performance_monitoring_interrupts
474.00 ± 31% -59.1% 193.67 ± 33% interrupts.CPU44.80:PCI-MSI.31981613-edge.i40e-eth0-TxRx-44
5428 ± 33% -33.7% 3598 interrupts.CPU45.NMI:Non-maskable_interrupts
5428 ± 33% -33.7% 3598 interrupts.CPU45.PMI:Performance_monitoring_interrupts
187.00 ± 10% -21.9% 146.00 ± 34% interrupts.CPU46.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
44491 ± 32% -58.3% 18541 ± 18% interrupts.CPU47.RES:Rescheduling_interrupts
251.50 -24.9% 189.00 ± 15% interrupts.CPU49.85:PCI-MSI.31981618-edge.i40e-eth0-TxRx-49
52555 ± 9% -47.1% 27784 ± 36% interrupts.CPU5.RES:Rescheduling_interrupts
17185 ± 8% +280.0% 65295 ± 59% interrupts.CPU50.RES:Rescheduling_interrupts
11869 ± 16% -22.4% 9213 ± 5% interrupts.CPU50.TLB:TLB_shootdowns
277.50 ± 18% -43.4% 157.00 ± 27% interrupts.CPU51.87:PCI-MSI.31981620-edge.i40e-eth0-TxRx-51
287.50 ± 22% -51.3% 140.00 ± 40% interrupts.CPU52.88:PCI-MSI.31981621-edge.i40e-eth0-TxRx-52
7279 -32.6% 4904 ± 35% interrupts.CPU52.NMI:Non-maskable_interrupts
7279 -32.6% 4904 ± 35% interrupts.CPU52.PMI:Performance_monitoring_interrupts
11162 ± 12% -17.9% 9161 ± 5% interrupts.CPU53.TLB:TLB_shootdowns
64556 ± 34% -56.4% 28141 ± 65% interrupts.CPU54.RES:Rescheduling_interrupts
277.00 ± 11% -42.5% 159.33 ± 54% interrupts.CPU55.91:PCI-MSI.31981624-edge.i40e-eth0-TxRx-55
60234 ± 7% -56.1% 26417 ± 52% interrupts.CPU56.RES:Rescheduling_interrupts
9718 ± 2% -9.6% 8787 ± 5% interrupts.CPU56.TLB:TLB_shootdowns
232.50 ± 16% -34.5% 152.33 ± 45% interrupts.CPU57.93:PCI-MSI.31981626-edge.i40e-eth0-TxRx-57
52441 ± 71% -54.9% 23644 ± 72% interrupts.CPU58.RES:Rescheduling_interrupts
239.00 ± 4% +135.1% 562.00 ± 47% interrupts.CPU59.95:PCI-MSI.31981628-edge.i40e-eth0-TxRx-59
208.00 ± 21% -19.7% 167.00 ± 34% interrupts.CPU6.42:PCI-MSI.31981575-edge.i40e-eth0-TxRx-6
251.50 ± 7% -27.8% 181.67 ± 31% interrupts.CPU60.96:PCI-MSI.31981629-edge.i40e-eth0-TxRx-60
236.00 ± 8% -37.3% 148.00 ± 32% interrupts.CPU61.97:PCI-MSI.31981630-edge.i40e-eth0-TxRx-61
10474 ± 4% +261.6% 37874 ± 31% interrupts.CPU61.RES:Rescheduling_interrupts
65859 ± 19% -61.5% 25369 ± 58% interrupts.CPU62.RES:Rescheduling_interrupts
188.50 ± 13% -23.4% 144.33 ± 3% interrupts.CPU64.100:PCI-MSI.31981633-edge.i40e-eth0-TxRx-64
11472 ± 2% +37.6% 15784 ± 14% interrupts.CPU67.RES:Rescheduling_interrupts
246.50 ± 14% -56.1% 108.33 ± 35% interrupts.CPU68.104:PCI-MSI.31981637-edge.i40e-eth0-TxRx-68
208.50 ± 10% -15.1% 177.00 ± 14% interrupts.CPU69.105:PCI-MSI.31981638-edge.i40e-eth0-TxRx-69
244.00 ± 19% -39.8% 147.00 ± 30% interrupts.CPU7.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
22442 ± 8% +67.5% 37588 ± 4% interrupts.CPU70.RES:Rescheduling_interrupts
12712 ± 15% -31.2% 8743 ± 11% interrupts.CPU71.TLB:TLB_shootdowns
280.00 -42.0% 162.33 ± 18% interrupts.CPU73.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
53387 ± 44% -52.5% 25361 ± 71% interrupts.CPU73.RES:Rescheduling_interrupts
239.00 ± 16% -22.6% 185.00 ± 17% interrupts.CPU74.110:PCI-MSI.31981643-edge.i40e-eth0-TxRx-74
8696 ± 7% +44.6% 12576 ± 6% interrupts.CPU74.TLB:TLB_shootdowns
257.00 ± 19% -24.8% 193.33 ± 31% interrupts.CPU75.111:PCI-MSI.31981644-edge.i40e-eth0-TxRx-75
265.50 ± 7% -9.1% 241.33 ± 13% interrupts.CPU76.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
254.50 ± 8% -28.5% 182.00 ± 32% interrupts.CPU77.113:PCI-MSI.31981646-edge.i40e-eth0-TxRx-77
11264 ± 6% -9.9% 10148 ± 15% interrupts.CPU77.TLB:TLB_shootdowns
283.00 ± 4% -24.3% 214.33 ± 12% interrupts.CPU8.44:PCI-MSI.31981577-edge.i40e-eth0-TxRx-8
45550 -67.7% 14725 ± 27% interrupts.CPU8.RES:Rescheduling_interrupts
237.00 ± 25% -32.5% 160.00 ± 46% interrupts.CPU80.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
7287 -33.0% 4879 ± 35% interrupts.CPU80.NMI:Non-maskable_interrupts
7287 -33.0% 4879 ± 35% interrupts.CPU80.PMI:Performance_monitoring_interrupts
249.00 ± 2% -36.9% 157.00 ± 14% interrupts.CPU81.117:PCI-MSI.31981650-edge.i40e-eth0-TxRx-81
31280 ± 2% -43.2% 17755 ± 17% interrupts.CPU81.RES:Rescheduling_interrupts
223.00 ± 5% -26.0% 165.00 ± 36% interrupts.CPU82.118:PCI-MSI.31981651-edge.i40e-eth0-TxRx-82
18648 ± 9% +194.1% 54837 ± 49% interrupts.CPU82.RES:Rescheduling_interrupts
28294 ± 47% -55.4% 12605 ± 13% interrupts.CPU83.RES:Rescheduling_interrupts
8396 ± 5% +36.6% 11470 ± 17% interrupts.CPU84.TLB:TLB_shootdowns
245.50 ± 7% -38.5% 151.00 ± 38% interrupts.CPU85.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
223.00 ± 21% -30.2% 155.67 ± 36% interrupts.CPU87.123:PCI-MSI.31981656-edge.i40e-eth0-TxRx-87
35786 ± 8% +167.3% 95668 ± 31% interrupts.CPU87.RES:Rescheduling_interrupts
234.00 ± 19% -19.2% 189.00 ± 29% interrupts.CPU9.45:PCI-MSI.31981578-edge.i40e-eth0-TxRx-9
49224 ± 3% -40.4% 29349 ± 29% interrupts.CPU9.RES:Rescheduling_interrupts
9201 +27.4% 11724 ± 7% interrupts.CPU90.TLB:TLB_shootdowns
7248 -33.0% 4859 ± 35% interrupts.CPU91.NMI:Non-maskable_interrupts
7248 -33.0% 4859 ± 35% interrupts.CPU91.PMI:Performance_monitoring_interrupts
40308 ± 50% -39.3% 24465 ± 66% interrupts.CPU91.RES:Rescheduling_interrupts
22340 ± 24% -21.4% 17561 ± 39% interrupts.CPU94.RES:Rescheduling_interrupts
17.84 ± 45% -10.5 7.38 ± 15% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
17.70 ± 46% -10.4 7.32 ± 15% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.51 ±100% -8.5 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_move_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.51 ±100% -8.5 0.00 perf-profile.calltrace.cycles-pp.kernel_move_pages.__x64_sys_move_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.93 -3.8 7.14 ± 70% perf-profile.calltrace.cycles-pp.personality
11.08 ± 33% -3.3 7.79 ± 70% perf-profile.calltrace.cycles-pp.__GI___libc_read
10.79 ± 33% -3.2 7.55 ± 72% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
10.74 ± 34% -3.2 7.51 ± 72% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
10.53 ± 34% -3.2 7.31 ± 73% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
10.63 ± 34% -3.2 7.43 ± 72% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
9.39 ± 9% -2.9 6.51 ± 53% perf-profile.calltrace.cycles-pp.__GI___libc_open
9.38 ± 9% -2.9 6.50 ± 53% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_open
9.38 ± 9% -2.9 6.50 ± 53% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_open
2.88 ±100% -2.9 0.00 perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.44 ± 10% -2.6 5.82 ± 53% perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_open
8.44 ± 10% -2.6 5.83 ± 53% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_open
8.38 ± 10% -2.6 5.79 ± 53% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.38 ± 10% -2.6 5.79 ± 53% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
6.42 -2.2 4.18 ± 70% perf-profile.calltrace.cycles-pp.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
6.52 -2.0 4.49 ± 59% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.37 ±100% -2.0 1.39 ±141% perf-profile.calltrace.cycles-pp.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.96 ±100% -1.7 1.23 ±141% perf-profile.calltrace.cycles-pp.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.40 -1.6 2.81 ± 70% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_read.new_sync_read.vfs_read.ksys_read
4.08 -1.5 2.60 ± 70% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_read.new_sync_read.vfs_read
4.06 -1.5 2.59 ± 70% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_read.new_sync_read
4.05 -1.5 2.59 ± 70% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_read
3.80 -1.4 2.42 ± 70% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
3.67 -1.3 2.34 ± 70% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
3.34 -1.2 2.13 ± 70% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
3.29 -1.2 2.10 ± 70% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
3.12 ± 2% -1.1 1.98 ± 70% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.personality
3.81 ± 2% -1.1 2.69 ± 56% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.31 -1.1 2.19 ± 70% perf-profile.calltrace.cycles-pp.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
3.76 -1.1 2.67 ± 51% perf-profile.calltrace.cycles-pp.vfs_tmpfile.path_openat.do_filp_open.do_sys_openat2.do_sys_open
3.71 -1.1 2.62 ± 56% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
3.44 ±100% -1.1 2.36 ±141% perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.92 -1.1 1.85 ± 70% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
2.92 -1.1 1.85 ± 70% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
2.91 -1.1 1.85 ± 70% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
2.92 ± 3% -0.8 2.10 ± 51% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_close
2.02 -0.7 1.27 ± 70% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.personality
1.91 -0.7 1.23 ± 70% perf-profile.calltrace.cycles-pp.shmem_tmpfile.vfs_tmpfile.path_openat.do_filp_open.do_sys_openat2
1.83 -0.7 1.18 ± 70% perf-profile.calltrace.cycles-pp.path_lookupat.path_openat.do_filp_open.do_sys_openat2.do_sys_open
1.83 -0.6 1.19 ± 70% perf-profile.calltrace.cycles-pp.d_alloc.vfs_tmpfile.path_openat.do_filp_open.do_sys_openat2
1.84 -0.6 1.19 ± 70% perf-profile.calltrace.cycles-pp.d_tmpfile.shmem_tmpfile.vfs_tmpfile.path_openat.do_filp_open
1.83 -0.6 1.18 ± 70% perf-profile.calltrace.cycles-pp._raw_spin_lock.d_tmpfile.shmem_tmpfile.vfs_tmpfile.path_openat
1.81 -0.6 1.17 ± 70% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.d_tmpfile.shmem_tmpfile.vfs_tmpfile
1.81 -0.6 1.17 ± 70% perf-profile.calltrace.cycles-pp.unlazy_walk.complete_walk.path_lookupat.path_openat.do_filp_open
1.81 -0.6 1.17 ± 70% perf-profile.calltrace.cycles-pp.complete_walk.path_lookupat.path_openat.do_filp_open.do_sys_openat2
1.81 -0.6 1.17 ± 70% perf-profile.calltrace.cycles-pp.legitimize_path.unlazy_walk.complete_walk.path_lookupat.path_openat
1.81 -0.6 1.17 ± 70% perf-profile.calltrace.cycles-pp.lockref_get_not_dead.legitimize_path.unlazy_walk.complete_walk.path_lookupat
1.80 -0.6 1.16 ± 70% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.lockref_get_not_dead.legitimize_path.unlazy_walk
1.81 -0.6 1.17 ± 70% perf-profile.calltrace.cycles-pp._raw_spin_lock.lockref_get_not_dead.legitimize_path.unlazy_walk.complete_walk
1.81 -0.6 1.18 ± 70% perf-profile.calltrace.cycles-pp._raw_spin_lock.d_alloc.vfs_tmpfile.path_openat.do_filp_open
1.81 -0.6 1.17 ± 70% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.d_alloc.vfs_tmpfile.path_openat
1.40 -0.5 0.88 ± 70% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
1.40 -0.5 0.89 ± 70% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
1.31 -0.5 0.83 ± 70% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.29 -0.5 0.82 ± 70% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.36 -0.5 0.89 ± 70% perf-profile.calltrace.cycles-pp._raw_spin_lock.__lock_parent.dput.__fput.task_work_run
1.36 -0.5 0.89 ± 70% perf-profile.calltrace.cycles-pp.__lock_parent.dput.__fput.task_work_run.exit_to_usermode_loop
1.35 -0.5 0.89 ± 70% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__lock_parent.dput.__fput
1.20 ± 3% -0.4 0.75 ± 70% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.23 -0.4 0.78 ± 70% perf-profile.calltrace.cycles-pp.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.23 -0.4 0.78 ± 70% perf-profile.calltrace.cycles-pp.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.08 -0.4 0.71 ± 70% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.personality
0.96 -0.3 0.61 ± 70% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues
0.96 -0.3 0.61 ± 70% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup
0.96 -0.3 0.61 ± 70% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
0.94 -0.3 0.61 ± 70% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_open
0.94 -0.3 0.61 ± 70% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_open
0.86 -0.3 0.55 ± 70% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.85 -0.3 0.55 ± 70% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.83 ± 2% -0.2 1.66 ± 10% perf-profile.calltrace.cycles-pp._vm_unmap_aliases.change_page_attr_set_clr.set_memory_x.bpf_int_jit_compile.bpf_prog_select_runtime
1.75 ± 2% -0.2 1.58 ± 10% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock._vm_unmap_aliases.change_page_attr_set_clr.set_memory_x
1.79 ± 2% -0.2 1.64 ± 10% perf-profile.calltrace.cycles-pp.__mutex_lock._vm_unmap_aliases.change_page_attr_set_clr.set_memory_x.bpf_int_jit_compile
1.86 ± 2% -0.2 1.71 ± 9% perf-profile.calltrace.cycles-pp.change_page_attr_set_clr.set_memory_x.bpf_int_jit_compile.bpf_prog_select_runtime.bpf_prepare_filter
1.86 ± 2% -0.2 1.71 ± 9% perf-profile.calltrace.cycles-pp.set_memory_x.bpf_int_jit_compile.bpf_prog_select_runtime.bpf_prepare_filter.bpf_prog_create_from_user
0.58 ± 5% +0.1 0.73 ± 6% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.59 ± 5% +0.2 0.75 ± 6% perf-profile.calltrace.cycles-pp.page_fault
0.26 ±100% +0.4 0.62 ± 6% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
0.26 ±100% +0.4 0.64 ± 6% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_page_fault.page_fault
0.00 +0.6 0.58 ± 7% perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
5.03 ± 53% -5.0 0.00 perf-profile.children.cycles-pp.__sched_text_start
11.17 ± 32% -3.3 7.87 ± 70% perf-profile.children.cycles-pp.__GI___libc_read
10.54 ± 34% -3.2 7.33 ± 73% perf-profile.children.cycles-pp.vfs_read
10.64 ± 34% -3.2 7.44 ± 72% perf-profile.children.cycles-pp.ksys_read
14.98 ± 4% -3.2 11.83 ± 31% perf-profile.children.cycles-pp._raw_spin_lock
9.39 ± 9% -2.9 6.51 ± 53% perf-profile.children.cycles-pp.__GI___libc_open
8.44 ± 10% -2.6 5.83 ± 53% perf-profile.children.cycles-pp.do_sys_open
8.44 ± 10% -2.6 5.83 ± 53% perf-profile.children.cycles-pp.do_sys_openat2
8.39 ± 10% -2.6 5.79 ± 53% perf-profile.children.cycles-pp.do_filp_open
8.38 ± 10% -2.6 5.79 ± 53% perf-profile.children.cycles-pp.path_openat
6.43 -2.2 4.19 ± 70% perf-profile.children.cycles-pp.pipe_read
28.86 ± 2% -2.1 26.78 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
6.53 -2.0 4.49 ± 59% perf-profile.children.cycles-pp.new_sync_read
6.62 ± 3% -1.9 4.77 ± 43% perf-profile.children.cycles-pp.try_to_wake_up
4.94 -1.8 3.17 ± 69% perf-profile.children.cycles-pp.__wake_up_common_lock
4.54 -1.6 2.90 ± 70% perf-profile.children.cycles-pp.autoremove_wake_function
4.60 -1.6 2.96 ± 69% perf-profile.children.cycles-pp.__wake_up_common
4.56 -1.5 3.04 ± 68% perf-profile.children.cycles-pp.__GI___libc_write
4.75 ± 3% -1.4 3.38 ± 45% perf-profile.children.cycles-pp.enqueue_task_fair
4.75 ± 3% -1.4 3.38 ± 45% perf-profile.children.cycles-pp.activate_task
4.73 ± 3% -1.4 3.38 ± 45% perf-profile.children.cycles-pp.ttwu_do_activate
4.64 ± 3% -1.3 3.31 ± 45% perf-profile.children.cycles-pp.enqueue_entity
3.95 -1.3 2.64 ± 68% perf-profile.children.cycles-pp.ksys_write
3.83 -1.3 2.56 ± 68% perf-profile.children.cycles-pp.vfs_write
4.44 ± 2% -1.3 3.18 ± 45% perf-profile.children.cycles-pp.__account_scheduler_latency
3.45 -1.2 2.28 ± 70% perf-profile.children.cycles-pp.new_sync_write
3.33 -1.1 2.20 ± 70% perf-profile.children.cycles-pp.pipe_write
4.69 ± 3% -1.1 3.58 ± 42% perf-profile.children.cycles-pp.entry_SYSCALL_64
3.76 -1.1 2.67 ± 51% perf-profile.children.cycles-pp.vfs_tmpfile
3.87 ± 3% -1.1 2.79 ± 52% perf-profile.children.cycles-pp.task_work_run
3.85 ± 3% -1.1 2.78 ± 52% perf-profile.children.cycles-pp.__fput
3.44 ±100% -1.1 2.36 ±141% perf-profile.children.cycles-pp.proc_reg_read
3.91 ± 3% -1.1 2.85 ± 49% perf-profile.children.cycles-pp.exit_to_usermode_loop
3.87 ± 9% -1.0 2.84 ± 42% perf-profile.children.cycles-pp.syscall_return_via_sysret
2.86 ± 2% -0.7 2.15 ± 39% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.88 -0.7 1.20 ± 70% perf-profile.children.cycles-pp.hrtimer_wakeup
2.29 -0.7 1.63 ± 52% perf-profile.children.cycles-pp.__hrtimer_run_queues
3.08 -0.6 2.45 ± 34% perf-profile.children.cycles-pp.apic_timer_interrupt
2.45 -0.6 1.82 ± 45% perf-profile.children.cycles-pp.hrtimer_interrupt
1.91 -0.6 1.35 ± 51% perf-profile.children.cycles-pp.shmem_tmpfile
1.85 ± 2% -0.5 1.31 ± 52% perf-profile.children.cycles-pp.lockref_get_not_dead
1.83 -0.5 1.30 ± 51% perf-profile.children.cycles-pp.path_lookupat
1.84 -0.5 1.31 ± 51% perf-profile.children.cycles-pp.d_alloc
1.83 -0.5 1.30 ± 51% perf-profile.children.cycles-pp.legitimize_path
1.84 -0.5 1.31 ± 51% perf-profile.children.cycles-pp.d_tmpfile
1.83 -0.5 1.30 ± 51% perf-profile.children.cycles-pp.unlazy_walk
1.83 -0.5 1.30 ± 51% perf-profile.children.cycles-pp.complete_walk
1.81 -0.5 1.31 ± 52% perf-profile.children.cycles-pp.__lock_parent
1.21 ± 3% -0.3 0.87 ± 43% perf-profile.children.cycles-pp.schedule_idle
1.86 ± 2% -0.2 1.71 ± 9% perf-profile.children.cycles-pp.set_memory_x
0.47 ± 19% -0.1 0.35 ± 42% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.23 ± 2% -0.0 0.21 ± 6% perf-profile.children.cycles-pp.scheduler_tick
0.30 -0.0 0.29 perf-profile.children.cycles-pp.tick_sched_handle
0.14 ± 3% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.native_irq_return_iret
0.03 ±100% +0.0 0.07 ± 14% perf-profile.children.cycles-pp.isolate_lru_page
0.24 ± 16% +0.0 0.29 perf-profile.children.cycles-pp.rmap_walk_anon
0.00 +0.1 0.05 perf-profile.children.cycles-pp.vm_normal_page
0.00 +4.9 4.87 ± 44% perf-profile.children.cycles-pp.__schedule
28.77 ± 2% -2.1 26.72 ± 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.83 ± 8% -1.0 2.82 ± 41% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.38 ± 7% -0.4 0.94 ± 43% perf-profile.self.cycles-pp.do_syscall_64
1.30 ± 7% -0.4 0.93 ± 33% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.31 ± 3% -0.1 0.20 ± 70% perf-profile.self.cycles-pp.__mutex_lock
0.08 ± 6% -0.0 0.04 ± 71% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.14 ± 3% +0.0 0.15 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.03 ±100% +0.0 0.06 perf-profile.self.cycles-pp.migrate_page_copy
0.00 +0.5 0.46 ± 62% perf-profile.self.cycles-pp.__schedule
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-5.6.0-rc6-00177-g2a8b6290eeb09" of type "text/plain" (203678 bytes)
View attachment "job-script" of type "text/plain" (7993 bytes)
View attachment "job.yaml" of type "text/plain" (5580 bytes)
View attachment "reproduce" of type "text/plain" (1549 bytes)