Message-ID: <20190228071751.GE10770@shao2-debian>
Date: Thu, 28 Feb 2019 15:17:51 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Giovanni Gherdovich <ggherdovich@...e.cz>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Matt Fleming <matt@...eblueprint.co.uk>,
Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [LKP] [sched/fair] 2c83362734: pft.faults_per_sec_per_cpu -41.4%
regression
Greetings,
FYI, we noticed a -41.4% regression of pft.faults_per_sec_per_cpu due to commit:
commit: 2c83362734dad8e48ccc0710b5cd2436a0323893 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: pft
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with the following parameters:
runtime: 300s
nr_task: 50%
cpufreq_governor: performance
ucode: 0xb00002e
test-description: Pft is the page fault test microbenchmark.
test-url: https://github.com/gormanm/pft
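For readers unfamiliar with the workload, here is a minimal, illustrative sketch of what a page-fault microbenchmark measures. This is NOT the pft source (see the test-url above for the real thing): it maps anonymous memory, first-touches every page to force minor faults, and reports a faults-per-second figure analogous to pft.faults_per_sec_per_cpu. The real benchmark additionally spreads the work across many tasks/CPUs, which is exactly where scheduler placement (and this commit) matters; with transparent huge pages enabled, most time goes to clearing 2MB pages, as the clear_huge_page/clear_page_erms entries in the perf profiles below show.
/*
 * pft_sketch.c -- illustrative stand-in only, NOT the actual pft source.
 * Compile: cc -O2 -o pft_sketch pft_sketch.c
 */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <time.h>
#include <unistd.h>
int main(void)
{
	size_t len = 1UL << 30;			/* 1 GiB of anonymous memory */
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	struct rusage ru0, ru1;
	struct timespec t0, t1;
	char *buf;
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	getrusage(RUSAGE_SELF, &ru0);
	clock_gettime(CLOCK_MONOTONIC, &t0);
	/*
	 * Each first-touch write faults a page in. With THP enabled the
	 * kernel may instead take one fault per 2MB huge page and spend
	 * its time in clear_huge_page(). The volatile store keeps the
	 * compiler from eliding the touch.
	 */
	for (size_t off = 0; off < len; off += page)
		*(volatile char *)(buf + off) = 1;
	clock_gettime(CLOCK_MONOTONIC, &t1);
	getrusage(RUSAGE_SELF, &ru1);
	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_nsec - t0.tv_nsec) / 1e9;
	long faults = ru1.ru_minflt - ru0.ru_minflt;
	printf("%ld minor faults in %.3f s (%.0f faults/sec)\n",
	       faults, secs, secs > 0 ? faults / secs : 0);
	munmap(buf, len);
	return 0;
}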
In addition, the commit has a significant impact on the following tests:
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=25% |
| | omp=true |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min 1.3% improvement |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | nr_job=3000 |
| | nr_task=100% |
| | runtime=300s |
| | test=custom |
| | ucode=0x3d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -32.0% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | plzip: |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_threads=100% |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min -11.9% regression |
| test machine | 192 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=all_utime |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -7.3% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=process |
| | nr_threads=1600% |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.std_dev_percent 11.4% undefined |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=custom |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: boot-time.boot 95.3% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=alltests |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | pft: pft.faults_per_sec_per_cpu -42.7% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=50% |
| | runtime=300s |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -28.8% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=50000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -30.6% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
+------------------+--------------------------------------------------------------------------+
| testcase: change | pft: pft.faults_per_sec_per_cpu -42.5% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=50% |
| | runtime=300s |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.child_systime -1.4% undefined |
| test machine | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | iterations=30 |
| | nr_task=1600% |
| | test=compute |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.fifo.ops_per_sec 76.2% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=pipe |
| | cpufreq_governor=performance |
| | nr_threads=100% |
| | testtime=1s |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.tsearch.ops_per_sec -17.1% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=cpu |
| | cpufreq_governor=performance |
| | nr_threads=100% |
| | testtime=1s |
+------------------+--------------------------------------------------------------------------+
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ep3/pft/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=_cond_resched/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
%stddev %change %stddev
\ | \
250875 -41.4% 146900 pft.faults_per_sec_per_cpu
7127548 -32.9% 4779214 pft.time.minor_page_faults
3533 +14.5% 4047 pft.time.percent_of_cpu_this_job_got
10444 +13.3% 11828 pft.time.system_time
189.79 +84.8% 350.70 ± 5% pft.time.user_time
105380 ± 2% +31.5% 138536 ± 5% pft.time.voluntary_context_switches
6180331 ± 9% -60.3% 2451607 ± 34% numa-numastat.node1.local_node
6187616 ± 9% -60.3% 2455998 ± 34% numa-numastat.node1.numa_hit
58.50 -9.8% 52.75 vmstat.cpu.id
35.75 +14.0% 40.75 vmstat.procs.r
59.07 -6.1 52.99 mpstat.cpu.idle%
39.94 +5.5 45.45 mpstat.cpu.sys%
0.93 +0.6 1.54 ± 5% mpstat.cpu.usr%
147799 ± 6% +13.9% 168315 ± 5% cpuidle.C3.usage
1.532e+10 ± 3% -11.9% 1.35e+10 cpuidle.C6.time
15901515 ± 3% -12.4% 13931328 cpuidle.C6.usage
1.062e+08 ± 9% +123.1% 2.369e+08 ± 14% cpuidle.POLL.time
2491 ± 8% -28.0% 1793 ± 10% slabinfo.eventpoll_epi.active_objs
2491 ± 8% -28.0% 1793 ± 10% slabinfo.eventpoll_epi.num_objs
4332 ± 7% -27.9% 3125 ± 10% slabinfo.eventpoll_pwq.active_objs
4332 ± 7% -27.9% 3125 ± 10% slabinfo.eventpoll_pwq.num_objs
4601 ± 2% -16.0% 3866 ± 2% slabinfo.mm_struct.active_objs
4662 ± 2% -15.7% 3930 ± 2% slabinfo.mm_struct.num_objs
5396205 ± 2% +18.2% 6378310 meminfo.Active
5333263 ± 2% +18.4% 6316138 meminfo.Active(anon)
2054416 ± 5% +22.7% 2521118 ± 16% meminfo.AnonHugePages
5180518 ± 2% +18.7% 6147856 meminfo.AnonPages
4.652e+08 ± 2% +16.4% 5.416e+08 meminfo.Committed_AS
6863577 +13.8% 7807920 meminfo.Memused
9381 ± 2% +11.7% 10480 ± 7% meminfo.PageTables
1203 +15.0% 1383 turbostat.Avg_MHz
43.02 +6.3 49.30 turbostat.Busy%
147383 ± 6% +14.1% 168181 ± 5% turbostat.C3
15900108 ± 3% -12.4% 13931193 turbostat.C6
56.94 -6.2 50.74 turbostat.C6%
33.20 -47.1% 17.57 ± 4% turbostat.CPU%c1
23.69 ± 4% +39.5% 33.05 ± 2% turbostat.CPU%c6
186.47 -9.2% 169.38 turbostat.PkgWatt
18.43 -19.6% 14.82 turbostat.RAMWatt
715561 ± 15% +70.3% 1218740 ± 16% numa-vmstat.node0.nr_active_anon
701119 ± 15% +69.9% 1190948 ± 15% numa-vmstat.node0.nr_anon_pages
544.25 ± 20% +80.0% 979.50 ± 20% numa-vmstat.node0.nr_anon_transparent_hugepages
1240 ± 15% +37.7% 1709 ± 16% numa-vmstat.node0.nr_page_table_pages
715526 ± 15% +70.3% 1218728 ± 16% numa-vmstat.node0.nr_zone_active_anon
636795 ± 10% -44.9% 350831 ± 53% numa-vmstat.node1.nr_active_anon
614709 ± 11% -45.1% 337557 ± 54% numa-vmstat.node1.nr_anon_pages
636843 ± 11% -44.9% 350830 ± 53% numa-vmstat.node1.nr_zone_active_anon
3520461 ± 10% -41.3% 2066299 ± 34% numa-vmstat.node1.numa_hit
3380522 ± 10% -42.9% 1929309 ± 37% numa-vmstat.node1.numa_local
2860563 ± 14% +73.7% 4970140 ± 14% numa-meminfo.node0.Active
2829095 ± 14% +74.6% 4938791 ± 14% numa-meminfo.node0.Active(anon)
1029787 ± 15% +90.1% 1957511 ± 16% numa-meminfo.node0.AnonHugePages
2782901 ± 14% +73.2% 4820952 ± 13% numa-meminfo.node0.AnonPages
3598217 ± 11% +59.0% 5721834 ± 12% numa-meminfo.node0.MemUsed
4779 ± 14% +39.3% 6658 ± 13% numa-meminfo.node0.PageTables
2711469 ± 11% -48.3% 1401167 ± 53% numa-meminfo.node1.Active
2679996 ± 11% -48.9% 1370343 ± 55% numa-meminfo.node1.Active(anon)
1104830 ± 11% -52.3% 527490 ± 43% numa-meminfo.node1.AnonHugePages
2592998 ± 13% -50.4% 1285649 ± 54% numa-meminfo.node1.AnonPages
3441628 ± 9% -39.6% 2078804 ± 37% numa-meminfo.node1.MemUsed
1348502 ± 3% +15.9% 1562690 ± 2% proc-vmstat.nr_active_anon
1313538 ± 3% +16.0% 1523248 ± 2% proc-vmstat.nr_anon_pages
993.00 ± 7% +20.4% 1195 ± 4% proc-vmstat.nr_anon_transparent_hugepages
1484488 -1.4% 1464054 proc-vmstat.nr_dirty_background_threshold
2972608 -1.4% 2931689 proc-vmstat.nr_dirty_threshold
14732846 -1.4% 14528187 proc-vmstat.nr_free_pages
2334 ± 4% +11.9% 2611 ± 3% proc-vmstat.nr_page_table_pages
39493 -2.2% 38606 proc-vmstat.nr_slab_unreclaimable
1348499 ± 3% +15.9% 1562687 ± 2% proc-vmstat.nr_zone_active_anon
11707 ± 19% -79.6% 2390 ±105% proc-vmstat.numa_hint_faults
5736 ± 68% -68.8% 1789 ±122% proc-vmstat.numa_hint_faults_local
12846700 -31.1% 8854558 proc-vmstat.numa_hit
834.00 ± 15% -55.0% 375.50 ± 29% proc-vmstat.numa_huge_pte_updates
12829442 -31.1% 8837365 proc-vmstat.numa_local
29698 ± 16% -71.4% 8480 ± 72% proc-vmstat.numa_pages_migrated
464744 ± 17% -57.4% 197920 ± 31% proc-vmstat.numa_pte_updates
2.591e+09 -33.0% 1.736e+09 proc-vmstat.pgalloc_normal
7958915 -29.7% 5591702 proc-vmstat.pgfault
2.589e+09 -33.0% 1.735e+09 proc-vmstat.pgfree
29698 ± 16% -71.4% 8480 ± 72% proc-vmstat.pgmigrate_success
5041287 -33.0% 3378764 proc-vmstat.thp_deferred_split_page
5044208 -33.0% 3379878 proc-vmstat.thp_fault_alloc
495.50 ± 58% -64.8% 174.50 ± 4% interrupts.35:PCI-MSI.3145732-edge.eth0-TxRx-3
3476 ± 10% -14.1% 2986 interrupts.CPU1.CAL:Function_call_interrupts
40310 ± 8% -50.4% 20001 ± 37% interrupts.CPU1.RES:Rescheduling_interrupts
3458 ± 12% -13.5% 2992 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
495.50 ± 58% -64.8% 174.50 ± 4% interrupts.CPU14.35:PCI-MSI.3145732-edge.eth0-TxRx-3
232.75 ± 37% +199.9% 698.00 ± 59% interrupts.CPU17.RES:Rescheduling_interrupts
372.75 ± 61% +226.2% 1215 ± 41% interrupts.CPU19.RES:Rescheduling_interrupts
3428 ± 10% -12.0% 3016 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
16318 ± 14% -79.4% 3366 ± 18% interrupts.CPU2.RES:Rescheduling_interrupts
112.50 ± 39% +573.1% 757.25 ± 42% interrupts.CPU20.RES:Rescheduling_interrupts
103.75 ± 37% +1322.2% 1475 ± 34% interrupts.CPU21.RES:Rescheduling_interrupts
1046 ± 41% +3220.0% 34735 ± 46% interrupts.CPU22.RES:Rescheduling_interrupts
485.25 ± 36% +3116.5% 15608 ±125% interrupts.CPU23.RES:Rescheduling_interrupts
404.75 ± 48% -81.6% 74.50 ± 55% interrupts.CPU29.RES:Rescheduling_interrupts
12888 ± 17% -73.6% 3399 ± 57% interrupts.CPU3.RES:Rescheduling_interrupts
341.25 ± 47% -78.7% 72.75 ± 77% interrupts.CPU31.RES:Rescheduling_interrupts
290.75 ± 29% -65.9% 99.25 ± 99% interrupts.CPU34.RES:Rescheduling_interrupts
3520 ± 7% -28.8% 2507 ± 30% interrupts.CPU35.CAL:Function_call_interrupts
238.75 ± 50% -75.6% 58.25 ± 35% interrupts.CPU35.RES:Rescheduling_interrupts
285.50 ± 66% -87.3% 36.25 ± 70% interrupts.CPU36.RES:Rescheduling_interrupts
3520 ± 9% -22.8% 2716 ± 16% interrupts.CPU37.CAL:Function_call_interrupts
303.00 ± 55% -81.2% 57.00 ±101% interrupts.CPU38.RES:Rescheduling_interrupts
261.75 ± 68% -83.2% 44.00 ± 81% interrupts.CPU39.RES:Rescheduling_interrupts
9633 ± 7% -79.9% 1935 ± 41% interrupts.CPU4.RES:Rescheduling_interrupts
169.75 ± 47% -71.4% 48.50 ± 67% interrupts.CPU41.RES:Rescheduling_interrupts
230.25 ± 52% -73.4% 61.25 ± 92% interrupts.CPU42.RES:Rescheduling_interrupts
3426 ± 10% -11.4% 3036 interrupts.CPU6.CAL:Function_call_interrupts
4643 ± 20% -77.7% 1036 ± 13% interrupts.CPU6.RES:Rescheduling_interrupts
237.75 ± 49% -79.7% 48.25 ± 82% interrupts.CPU66.RES:Rescheduling_interrupts
231.00 ± 64% -89.0% 25.50 ± 28% interrupts.CPU69.RES:Rescheduling_interrupts
3432 ± 10% -15.0% 2918 ± 4% interrupts.CPU7.CAL:Function_call_interrupts
4134 ± 13% -64.2% 1481 ± 43% interrupts.CPU7.RES:Rescheduling_interrupts
96.75 ± 51% -80.4% 19.00 ± 46% interrupts.CPU75.RES:Rescheduling_interrupts
3453 ± 11% -12.7% 3015 interrupts.CPU8.CAL:Function_call_interrupts
18.33 ± 5% -13.3 5.08 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
19.33 ± 3% -12.0 7.38 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
19.70 ± 3% -11.9 7.76 ± 7% perf-profile.calltrace.cycles-pp.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
0.55 ± 4% +0.1 0.64 ± 2% perf-profile.calltrace.cycles-pp.clear_huge_page
0.95 +0.1 1.06 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.56 ± 4% +0.3 0.84 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms.clear_huge_page
0.62 ± 3% +0.3 0.94 ± 8% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page
0.60 ± 4% +0.3 0.93 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.63 ± 3% +0.3 0.96 ± 8% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
1.17 ± 3% +0.3 1.50 ± 2% perf-profile.calltrace.cycles-pp.__free_pages_ok.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas
0.38 ± 57% +0.4 0.76 ± 9% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms
1.24 ± 3% +0.4 1.65 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
1.24 ± 2% +0.4 1.66 perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
1.31 ± 2% +0.5 1.79 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.31 ± 2% +0.5 1.78 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.34 ± 2% +0.5 1.81 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.34 ± 2% +0.5 1.81 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.35 ± 3% +0.5 1.83 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.35 ± 3% +0.5 1.83 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.57 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
0.00 +0.6 0.64 ± 10% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.7 0.67 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
0.00 +0.7 0.68 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.80 ± 2% +0.7 2.54 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.80 ± 2% +0.7 2.54 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +0.9 0.88 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
0.00 +0.9 0.90 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
0.66 ± 62% +1.2 1.88 ± 34% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
69.36 +8.7 78.09 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
72.75 +10.1 82.87 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
74.99 +10.9 85.91 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
74.91 +10.9 85.84 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
75.01 +10.9 85.95 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
75.10 +11.0 86.06 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
75.10 +11.0 86.06 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
75.11 +11.0 86.08 perf-profile.calltrace.cycles-pp.page_fault
103.91 -34.9% 67.64 ± 3% perf-stat.i.MPKI
1.482e+09 +13.6% 1.684e+09 ± 5% perf-stat.i.branch-instructions
9762787 +14.7% 11201300 ± 2% perf-stat.i.branch-misses
7.38 ± 2% +1.0 8.41 perf-stat.i.cache-miss-rate%
45987730 ± 3% -19.9% 36831773 ± 2% perf-stat.i.cache-misses
6.184e+08 -29.2% 4.378e+08 perf-stat.i.cache-references
1.035e+11 +16.0% 1.2e+11 perf-stat.i.cpu-cycles
35.49 ± 2% -24.7% 26.73 ± 8% perf-stat.i.cpu-migrations
1053398 +19.7% 1260826 perf-stat.i.dTLB-load-misses
1.728e+09 +15.0% 1.987e+09 ± 5% perf-stat.i.dTLB-loads
0.04 ± 3% +0.0 0.04 ± 2% perf-stat.i.dTLB-store-miss-rate%
592773 -13.0% 515699 perf-stat.i.dTLB-store-misses
1.69e+09 -22.2% 1.314e+09 perf-stat.i.dTLB-stores
49.34 ± 2% +24.3 73.67 perf-stat.i.iTLB-load-miss-rate%
462159 +19.0% 550118 perf-stat.i.iTLB-load-misses
481420 ± 3% -56.1% 211333 ± 3% perf-stat.i.iTLB-loads
6.07e+09 +11.9% 6.793e+09 ± 4% perf-stat.i.instructions
13676 -8.6% 12500 ± 4% perf-stat.i.instructions-per-iTLB-miss
25994 -29.2% 18413 perf-stat.i.minor-faults
12.62 ± 5% -3.1 9.52 ± 10% perf-stat.i.node-load-miss-rate%
813594 ± 8% -45.3% 445188 ± 10% perf-stat.i.node-load-misses
6036541 ± 3% -29.9% 4234277 perf-stat.i.node-loads
2.63 ± 15% -2.0 0.66 ± 32% perf-stat.i.node-store-miss-rate%
488029 ± 37% -68.5% 153814 ± 34% perf-stat.i.node-store-misses
23876394 -3.5% 23039440 perf-stat.i.node-stores
25995 -29.2% 18414 perf-stat.i.page-faults
101.89 -36.6% 64.58 ± 3% perf-stat.overall.MPKI
7.44 +1.0 8.41 perf-stat.overall.cache-miss-rate%
2251 +44.8% 3258 perf-stat.overall.cycles-between-cache-misses
0.04 +0.0 0.04 ± 2% perf-stat.overall.dTLB-store-miss-rate%
48.98 ± 2% +23.3 72.25 perf-stat.overall.iTLB-load-miss-rate%
11.86 ± 4% -2.3 9.51 ± 10% perf-stat.overall.node-load-miss-rate%
1.99 ± 36% -1.3 0.66 ± 33% perf-stat.overall.node-store-miss-rate%
1.477e+09 +13.6% 1.677e+09 ± 5% perf-stat.ps.branch-instructions
9696637 +14.7% 11121372 ± 2% perf-stat.ps.branch-misses
45824683 ± 3% -19.9% 36699436 ± 2% perf-stat.ps.cache-misses
6.162e+08 -29.2% 4.362e+08 perf-stat.ps.cache-references
1.031e+11 +16.0% 1.195e+11 perf-stat.ps.cpu-cycles
35.37 ± 2% -24.7% 26.64 ± 8% perf-stat.ps.cpu-migrations
1049561 +19.7% 1256350 perf-stat.ps.dTLB-load-misses
1.721e+09 +15.0% 1.979e+09 ± 5% perf-stat.ps.dTLB-loads
590625 -13.0% 513863 perf-stat.ps.dTLB-store-misses
1.684e+09 -22.2% 1.31e+09 perf-stat.ps.dTLB-stores
460416 +19.1% 548246 perf-stat.ps.iTLB-load-misses
479698 ± 3% -56.1% 210661 ± 3% perf-stat.ps.iTLB-loads
6.047e+09 +11.9% 6.766e+09 ± 4% perf-stat.ps.instructions
25909 -29.2% 18350 perf-stat.ps.minor-faults
810622 ± 7% -45.3% 443783 ± 10% perf-stat.ps.node-load-misses
6015816 ± 3% -29.9% 4219461 perf-stat.ps.node-loads
486412 ± 37% -68.5% 153252 ± 33% perf-stat.ps.node-store-misses
23792569 -3.5% 22959184 perf-stat.ps.node-stores
25909 -29.2% 18349 perf-stat.ps.page-faults
1.843e+12 +10.8% 2.043e+12 ± 4% perf-stat.total.instructions
40.36 ±173% -100.0% 0.00 ± 5% sched_debug.cfs_rq:/.MIN_vruntime.stddev
49736 +54.0% 76581 ± 9% sched_debug.cfs_rq:/.exec_clock.avg
81364 ± 8% +74.4% 141938 ± 4% sched_debug.cfs_rq:/.exec_clock.max
10337 ± 14% +266.4% 37876 ± 42% sched_debug.cfs_rq:/.exec_clock.stddev
11225 ± 10% +25.9% 14136 ± 13% sched_debug.cfs_rq:/.load.avg
40.36 ±173% -100.0% 0.00 ± 5% sched_debug.cfs_rq:/.max_vruntime.stddev
2067637 +61.7% 3344320 ± 9% sched_debug.cfs_rq:/.min_vruntime.avg
3187109 ± 6% +93.2% 6157097 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
402561 ± 13% +311.1% 1655080 ± 42% sched_debug.cfs_rq:/.min_vruntime.stddev
0.42 ± 8% +26.9% 0.53 ± 4% sched_debug.cfs_rq:/.nr_running.avg
1.53 ± 10% +58.3% 2.42 ± 14% sched_debug.cfs_rq:/.nr_spread_over.avg
1.13 ± 24% +63.7% 1.85 ± 19% sched_debug.cfs_rq:/.nr_spread_over.stddev
3.41 ± 33% -57.7% 1.45 ±111% sched_debug.cfs_rq:/.removed.load_avg.avg
201.45 -58.0% 84.54 ±100% sched_debug.cfs_rq:/.removed.load_avg.max
25.53 ± 17% -57.7% 10.80 ±103% sched_debug.cfs_rq:/.removed.load_avg.stddev
156.58 ± 33% -57.5% 66.54 ±111% sched_debug.cfs_rq:/.removed.runnable_sum.avg
9231 -57.8% 3894 ±100% sched_debug.cfs_rq:/.removed.runnable_sum.max
1170 ± 17% -57.6% 496.50 ±103% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
9.87 ± 2% +27.1% 12.55 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
11202 ± 10% +25.8% 14092 ± 13% sched_debug.cfs_rq:/.runnable_weight.avg
1717275 ± 24% +109.4% 3595584 ± 21% sched_debug.cfs_rq:/.spread0.max
-313871 +414.7% -1615516 sched_debug.cfs_rq:/.spread0.min
402568 ± 13% +311.1% 1655086 ± 42% sched_debug.cfs_rq:/.spread0.stddev
441.98 ± 8% +19.5% 528.20 ± 7% sched_debug.cfs_rq:/.util_avg.avg
152317 +19.8% 182542 sched_debug.cpu.clock.avg
152324 +19.8% 182549 sched_debug.cpu.clock.max
152309 +19.8% 182535 sched_debug.cpu.clock.min
152317 +19.8% 182542 sched_debug.cpu.clock_task.avg
152324 +19.8% 182549 sched_debug.cpu.clock_task.max
152309 +19.8% 182535 sched_debug.cpu.clock_task.min
9.55 +7.5% 10.27 ± 4% sched_debug.cpu.cpu_load[1].avg
89744 +27.7% 114579 sched_debug.cpu.nr_load_updates.avg
105109 ± 2% +46.9% 154390 ± 7% sched_debug.cpu.nr_load_updates.max
7386 ± 13% +245.2% 25495 ± 34% sched_debug.cpu.nr_load_updates.stddev
6176 +32.2% 8164 ± 4% sched_debug.cpu.nr_switches.avg
33527 ± 10% +167.9% 89818 ± 23% sched_debug.cpu.nr_switches.max
5721 ± 7% +107.1% 11850 ± 23% sched_debug.cpu.nr_switches.stddev
6.70 ± 19% +122.0% 14.88 ± 40% sched_debug.cpu.nr_uninterruptible.max
2.62 ± 8% +36.4% 3.58 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
5865 +35.3% 7937 ± 4% sched_debug.cpu.sched_count.avg
45223 ± 23% +103.6% 92061 ± 23% sched_debug.cpu.sched_count.max
6871 ± 10% +83.8% 12631 ± 20% sched_debug.cpu.sched_count.stddev
2580 ± 2% +33.7% 3449 ± 6% sched_debug.cpu.sched_goidle.avg
15649 ± 9% +169.3% 42137 ± 30% sched_debug.cpu.sched_goidle.max
2697 ± 7% +106.9% 5582 ± 28% sched_debug.cpu.sched_goidle.stddev
2494 ± 2% +40.6% 3507 ± 4% sched_debug.cpu.ttwu_count.avg
20565 ± 18% +174.4% 56434 ± 39% sched_debug.cpu.ttwu_count.max
3385 ± 9% +112.6% 7196 ± 31% sched_debug.cpu.ttwu_count.stddev
856.38 +60.8% 1377 ± 3% sched_debug.cpu.ttwu_local.avg
2955 ± 13% +147.0% 7301 ± 28% sched_debug.cpu.ttwu_local.max
550.87 ± 4% +61.0% 886.64 ± 19% sched_debug.cpu.ttwu_local.stddev
152311 +19.8% 182536 sched_debug.cpu_clk
152311 +19.8% 182536 sched_debug.ktime
155210 +19.5% 185434 sched_debug.sched_clk
30209 ± 6% -41.7% 17601 ± 15% softirqs.CPU1.SCHED
94445 ± 9% +28.2% 121092 ± 5% softirqs.CPU1.TIMER
8720 ± 16% -30.8% 6038 ± 15% softirqs.CPU10.SCHED
91259 ± 7% +45.3% 132597 ± 12% softirqs.CPU10.TIMER
9600 ± 10% -37.6% 5988 ± 24% softirqs.CPU11.SCHED
93321 ± 6% +40.7% 131338 ± 14% softirqs.CPU11.TIMER
93395 ± 8% +39.0% 129851 ± 12% softirqs.CPU12.TIMER
9114 ± 11% -34.2% 6001 ± 13% softirqs.CPU13.SCHED
92702 ± 8% +44.7% 134175 ± 10% softirqs.CPU13.TIMER
98912 ± 8% +31.0% 129598 ± 11% softirqs.CPU14.TIMER
91279 ± 7% +35.5% 123650 ± 8% softirqs.CPU15.TIMER
3119 ± 6% +152.6% 7878 ± 73% softirqs.CPU16.RCU
94832 ± 7% +55.9% 147813 ± 9% softirqs.CPU16.TIMER
94741 ± 7% +38.2% 130953 ± 13% softirqs.CPU17.TIMER
89710 ± 7% +48.8% 133512 ± 12% softirqs.CPU18.TIMER
95107 ± 7% +41.2% 134251 ± 11% softirqs.CPU19.TIMER
17569 ± 3% -51.5% 8516 ± 17% softirqs.CPU2.SCHED
101951 ± 8% +34.6% 137209 ± 7% softirqs.CPU2.TIMER
95682 ± 5% +43.4% 137183 ± 8% softirqs.CPU20.TIMER
90560 ± 18% +47.5% 133620 ± 10% softirqs.CPU21.TIMER
8858 ± 9% +171.4% 24041 ± 37% softirqs.CPU22.SCHED
82796 ± 11% -32.9% 55594 ± 22% softirqs.CPU22.TIMER
92560 ± 8% -30.4% 64376 ± 14% softirqs.CPU23.TIMER
89890 ± 11% -31.9% 61223 ± 18% softirqs.CPU25.TIMER
82728 ± 11% -28.7% 58954 ± 21% softirqs.CPU26.TIMER
7585 ± 5% +65.0% 12517 ± 50% softirqs.CPU27.SCHED
87967 ± 7% -22.7% 68014 ± 20% softirqs.CPU29.TIMER
3847 ± 11% +78.0% 6847 ± 48% softirqs.CPU3.RCU
15118 ± 6% -51.1% 7400 ± 19% softirqs.CPU3.SCHED
99241 ± 8% +33.0% 131954 ± 13% softirqs.CPU3.TIMER
5273 ± 70% -52.9% 2481 ± 13% softirqs.CPU30.RCU
86738 ± 4% -26.9% 63403 ± 20% softirqs.CPU30.TIMER
87717 ± 4% -19.7% 70426 ± 11% softirqs.CPU33.TIMER
91009 ± 8% -26.6% 66826 ± 30% softirqs.CPU37.TIMER
91238 ± 6% -34.6% 59629 ± 35% softirqs.CPU39.TIMER
13630 ± 6% -48.8% 6975 ± 14% softirqs.CPU4.SCHED
98325 ± 9% +36.5% 134220 ± 10% softirqs.CPU4.TIMER
7882 ± 58% -37.7% 4910 ± 76% softirqs.CPU40.RCU
91734 ± 7% -41.1% 54044 ± 24% softirqs.CPU40.TIMER
7837 ± 57% -64.0% 2820 ± 18% softirqs.CPU42.RCU
90928 ± 7% -29.9% 63731 ± 23% softirqs.CPU42.TIMER
92238 ± 5% -36.2% 58803 ± 27% softirqs.CPU43.TIMER
9727 ± 9% -39.0% 5931 ± 17% softirqs.CPU45.SCHED
95857 ± 5% +35.5% 129846 ± 10% softirqs.CPU45.TIMER
9358 ± 8% -36.5% 5941 ± 21% softirqs.CPU46.SCHED
92711 ± 10% +43.7% 133268 ± 10% softirqs.CPU46.TIMER
9920 ± 9% -37.9% 6161 ± 22% softirqs.CPU47.SCHED
97785 ± 9% +36.4% 133427 ± 9% softirqs.CPU47.TIMER
9119 ± 13% -31.4% 6255 ± 18% softirqs.CPU48.SCHED
96161 ± 12% +39.8% 134472 ± 9% softirqs.CPU48.TIMER
95489 ± 12% +36.7% 130507 ± 10% softirqs.CPU49.TIMER
95172 ± 8% +34.9% 128353 ± 5% softirqs.CPU5.TIMER
95302 ± 9% +39.9% 133360 ± 11% softirqs.CPU50.TIMER
93574 ± 13% +47.3% 137842 ± 8% softirqs.CPU51.TIMER
3339 ± 6% +102.3% 6754 ± 50% softirqs.CPU52.RCU
96278 ± 5% +35.0% 129973 ± 10% softirqs.CPU53.TIMER
3243 ± 11% +95.5% 6339 ± 60% softirqs.CPU54.RCU
7818 ± 22% -35.3% 5060 ± 11% softirqs.CPU54.SCHED
97165 ± 4% +35.0% 131219 ± 12% softirqs.CPU54.TIMER
99534 ± 8% +33.4% 132761 ± 11% softirqs.CPU55.TIMER
93342 ± 6% +37.6% 128456 ± 13% softirqs.CPU56.TIMER
3104 ± 5% +176.2% 8574 ± 51% softirqs.CPU57.RCU
8598 ± 19% -43.3% 4871 ± 17% softirqs.CPU57.SCHED
96358 ± 5% +34.3% 129405 ± 14% softirqs.CPU57.TIMER
90709 ± 6% +45.1% 131643 ± 10% softirqs.CPU58.TIMER
94807 ± 6% +31.0% 124229 ± 10% softirqs.CPU59.TIMER
10284 ± 8% -34.9% 6694 ± 12% softirqs.CPU6.SCHED
94038 ± 11% +44.9% 136217 ± 10% softirqs.CPU6.TIMER
95144 ± 5% +37.2% 130541 ± 13% softirqs.CPU60.TIMER
95772 ± 8% +33.9% 128220 ± 13% softirqs.CPU61.TIMER
96450 ± 8% +34.0% 129241 ± 13% softirqs.CPU62.TIMER
2935 ± 8% +104.4% 5999 ± 58% softirqs.CPU63.RCU
93061 ± 7% +43.2% 133226 ± 9% softirqs.CPU63.TIMER
95785 ± 10% +38.0% 132228 ± 10% softirqs.CPU64.TIMER
87927 ± 20% +51.6% 133269 ± 8% softirqs.CPU65.TIMER
87499 ± 10% -38.1% 54161 ± 26% softirqs.CPU66.TIMER
91805 ± 9% -29.9% 64360 ± 25% softirqs.CPU67.TIMER
92714 ± 8% -29.6% 65310 ± 23% softirqs.CPU68.TIMER
87340 ± 7% -31.8% 59600 ± 28% softirqs.CPU69.TIMER
3716 ± 5% +86.8% 6942 ± 61% softirqs.CPU7.RCU
10113 ± 11% -41.6% 5907 ± 9% softirqs.CPU7.SCHED
92161 ± 9% +43.3% 132028 ± 11% softirqs.CPU7.TIMER
91161 ± 17% -27.6% 66000 ± 24% softirqs.CPU73.TIMER
85233 ± 9% -19.1% 68973 ± 14% softirqs.CPU75.TIMER
94828 ± 2% -32.7% 63848 ± 22% softirqs.CPU77.TIMER
89287 ± 7% -29.9% 62592 ± 27% softirqs.CPU78.TIMER
90874 ± 9% -24.0% 69092 ± 24% softirqs.CPU80.TIMER
7403 ± 58% -68.8% 2307 ± 12% softirqs.CPU82.RCU
90855 ± 10% -27.9% 65495 ± 25% softirqs.CPU82.TIMER
90678 ± 11% -28.8% 64580 ± 12% softirqs.CPU84.TIMER
90188 ± 15% -21.7% 70622 ± 3% softirqs.CPU85.TIMER
7438 ± 57% -37.8% 4626 ± 74% softirqs.CPU86.RCU
89728 ± 12% -35.7% 57665 ± 17% softirqs.CPU86.TIMER
5097 ± 72% -52.3% 2430 ± 5% softirqs.CPU87.RCU
88927 ± 9% -34.6% 58201 ± 25% softirqs.CPU87.TIMER
3488 ± 8% +98.4% 6921 ± 45% softirqs.CPU9.RCU
88673 ± 12% +43.7% 127443 ± 12% softirqs.CPU9.TIMER
pft.faults_per_sec_per_cpu
300000 +-+----------------------------------------------------------------+
| |
250000 +-+.++.+.+.+.++.+ +.+.++.+.+.+.+ |
| : : |
| : : |
200000 +-+ : : |
| : : |
150000 O-O O O O O OO O:O:O O OO O O OO O O O OO O O O O OO O O O OO O O
| : : |
100000 +-+ : : |
| : : |
| :: |
50000 +-+ : |
| : |
0 +-+-O--------------------------O-----------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
***************************************************************************************************
lkp-hsw-ep2: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_job/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/3000/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-hsw-ep2/custom/reaim/0x3d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
%stddev %change %stddev
\ | \
389.13 -2.7% 378.77 reaim.child_systime
540687 +1.3% 547673 reaim.jobs_per_min
7509 +1.3% 7606 reaim.jobs_per_min_child
544939 +1.2% 551502 reaim.max_jobs_per_min
23.92 -1.3% 23.62 reaim.parent_time
0.68 ± 2% +21.6% 0.83 reaim.std_dev_percent
0.16 ± 2% +20.1% 0.19 reaim.std_dev_time
311.40 -1.2% 307.71 reaim.time.elapsed_time
311.40 -1.2% 307.71 reaim.time.elapsed_time.max
5096562 -16.0% 4280492 reaim.time.involuntary_context_switches
4694 -2.7% 4569 reaim.time.system_time
21203641 +2.1% 21642097 reaim.time.voluntary_context_switches
169548 +1.9% 172739 vmstat.system.cs
5.805e+08 +14.5% 6.645e+08 cpuidle.C1E.time
5649746 +14.9% 6493131 cpuidle.C1E.usage
114430 ± 4% -6.4% 107089 softirqs.CPU33.TIMER
116034 -8.8% 105833 ± 2% softirqs.CPU35.TIMER
13118 +13.4% 14873 meminfo.PageTables
49287 ± 5% +21.9% 60093 ± 7% meminfo.Shmem
7913 +12.1% 8874 meminfo.max_used_kB
7969 ± 42% +115.6% 17180 ± 5% numa-meminfo.node0.Inactive(anon)
12094 ± 15% +35.1% 16333 ± 3% numa-meminfo.node0.Mapped
10024 ± 34% -82.7% 1739 ± 48% numa-meminfo.node1.Inactive(anon)
6709 ± 5% +30.3% 8745 ± 8% numa-meminfo.node1.PageTables
5649886 +14.9% 6493215 turbostat.C1E
2.58 +0.4 2.98 turbostat.C1E%
0.29 -58.6% 0.12 turbostat.CPU%c3
2.18 ± 2% +10.3% 2.41 ± 2% turbostat.Pkg%pc2
13.25 -3.8% 12.76 turbostat.RAMWatt
1993 ± 42% +115.1% 4288 ± 4% numa-vmstat.node0.nr_inactive_anon
3125 ± 15% +32.9% 4154 ± 2% numa-vmstat.node0.nr_mapped
1993 ± 42% +115.1% 4288 ± 4% numa-vmstat.node0.nr_zone_inactive_anon
2498 ± 34% -82.9% 426.75 ± 47% numa-vmstat.node1.nr_inactive_anon
1670 ± 5% +31.9% 2202 ± 9% numa-vmstat.node1.nr_page_table_pages
2498 ± 34% -82.9% 426.75 ± 47% numa-vmstat.node1.nr_zone_inactive_anon
2027 ± 2% -21.6% 1590 ± 8% slabinfo.eventpoll_epi.active_objs
2027 ± 2% -21.6% 1590 ± 8% slabinfo.eventpoll_epi.num_objs
3548 ± 2% -22.0% 2769 ± 9% slabinfo.eventpoll_pwq.active_objs
3548 ± 2% -22.0% 2769 ± 9% slabinfo.eventpoll_pwq.num_objs
662.25 ± 6% -19.6% 532.50 ± 3% slabinfo.file_lock_cache.active_objs
662.25 ± 6% -19.6% 532.50 ± 3% slabinfo.file_lock_cache.num_objs
933.75 ± 2% -23.5% 714.00 ± 2% slabinfo.names_cache.active_objs
938.50 ± 2% -23.9% 714.00 ± 2% slabinfo.names_cache.num_objs
93741 +3.5% 97068 proc-vmstat.nr_active_anon
84153 +1.7% 85581 proc-vmstat.nr_anon_pages
235621 +1.1% 238317 proc-vmstat.nr_file_pages
4495 +5.3% 4732 proc-vmstat.nr_inactive_anon
15557 +2.7% 15981 proc-vmstat.nr_kernel_stack
6662 +5.9% 7053 proc-vmstat.nr_mapped
3245 +13.7% 3692 proc-vmstat.nr_page_table_pages
12326 ± 5% +21.9% 15022 ± 7% proc-vmstat.nr_shmem
44278 -1.3% 43703 proc-vmstat.nr_slab_unreclaimable
93741 +3.5% 97068 proc-vmstat.nr_zone_active_anon
4495 +5.3% 4732 proc-vmstat.nr_zone_inactive_anon
158417 ± 11% -20.5% 125975 ± 3% proc-vmstat.numa_pte_updates
2340 -100.0% 0.00 sched_debug.cfs_rq:/.load.min
0.74 -17.1% 0.61 ± 3% sched_debug.cfs_rq:/.nr_running.avg
0.17 -100.0% 0.00 sched_debug.cfs_rq:/.nr_running.min
1.83 ± 22% -100.0% 0.00 sched_debug.cfs_rq:/.runnable_load_avg.min
25.56 ± 51% +131.9% 59.28 ± 32% sched_debug.cfs_rq:/.runnable_load_avg.stddev
2339 -100.0% 0.00 sched_debug.cfs_rq:/.runnable_weight.min
741.21 -19.7% 595.16 ± 2% sched_debug.cfs_rq:/.util_avg.avg
144.54 ± 16% +38.7% 200.54 ± 27% sched_debug.cpu.cpu_load[2].max
15.92 ± 13% +33.3% 21.22 ± 18% sched_debug.cpu.cpu_load[3].stddev
3.08 ± 18% -44.6% 1.71 ± 46% sched_debug.cpu.cpu_load[4].min
23537 ± 2% -16.0% 19781 ± 4% sched_debug.cpu.curr->pid.avg
4236 -100.0% 0.00 sched_debug.cpu.curr->pid.min
2340 -100.0% 0.00 sched_debug.cpu.load.min
0.89 ± 2% -15.7% 0.75 ± 5% sched_debug.cpu.nr_running.avg
0.17 -100.0% 0.00 sched_debug.cpu.nr_running.min
128.17 ± 5% +10.5% 141.67 ± 4% sched_debug.cpu.nr_uninterruptible.max
-143.46 -9.9% -129.21 sched_debug.cpu.nr_uninterruptible.min
7407 ± 7% +16.7% 8643 ± 6% sched_debug.cpu.sched_count.stddev
0.00 ± 5% -44.0% 0.00 ± 14% sched_debug.rt_rq:/.rt_time.avg
0.04 ± 7% -42.0% 0.02 ± 57% sched_debug.rt_rq:/.rt_time.max
0.00 ± 8% -36.3% 0.00 ± 40% sched_debug.rt_rq:/.rt_time.stddev
2.198e+10 +1.1% 2.223e+10 perf-stat.i.branch-instructions
2.02 -0.0 1.99 perf-stat.i.branch-miss-rate%
2.44 -0.5 1.89 perf-stat.i.cache-miss-rate%
84674052 -20.1% 67690215 perf-stat.i.cache-misses
3.972e+09 +1.0% 4.013e+09 perf-stat.i.cache-references
171277 +1.8% 174409 perf-stat.i.context-switches
4613 ± 3% +20.8% 5574 ± 3% perf-stat.i.cycles-between-cache-misses
1.248e+10 +1.3% 1.264e+10 perf-stat.i.dTLB-loads
10489856 ± 2% -6.8% 9775230 ± 2% perf-stat.i.dTLB-store-misses
61.17 -0.5 60.69 perf-stat.i.iTLB-load-miss-rate%
1.04e+11 +1.1% 1.052e+11 perf-stat.i.instructions
79.97 -1.9 78.03 perf-stat.i.node-load-miss-rate%
54374000 -26.7% 39866795 perf-stat.i.node-load-misses
5262627 -5.0% 4998608 perf-stat.i.node-loads
47.13 -1.5 45.63 perf-stat.i.node-store-miss-rate%
11256870 -13.1% 9780571 perf-stat.i.node-store-misses
12952272 -6.5% 12106552 perf-stat.i.node-stores
2.13 -0.4 1.69 perf-stat.overall.cache-miss-rate%
1806 +25.2% 2262 perf-stat.overall.cycles-between-cache-misses
0.08 ± 2% -0.0 0.07 ± 3% perf-stat.overall.dTLB-store-miss-rate%
57.16 -0.4 56.72 perf-stat.overall.iTLB-load-miss-rate%
6126 +2.6% 6282 perf-stat.overall.instructions-per-iTLB-miss
0.68 +1.0% 0.69 perf-stat.overall.ipc
91.17 -2.3 88.86 perf-stat.overall.node-load-miss-rate%
46.50 -1.8 44.68 perf-stat.overall.node-store-miss-rate%
2.191e+10 +1.1% 2.215e+10 perf-stat.ps.branch-instructions
84380063 -20.1% 67448069 perf-stat.ps.cache-misses
3.959e+09 +1.0% 4e+09 perf-stat.ps.cache-references
170674 +1.8% 173772 perf-stat.ps.context-switches
1.244e+10 +1.3% 1.259e+10 perf-stat.ps.dTLB-loads
10454233 ± 2% -6.8% 9740884 ± 2% perf-stat.ps.dTLB-store-misses
1.037e+11 +1.1% 1.048e+11 perf-stat.ps.instructions
54182937 -26.7% 39721849 perf-stat.ps.node-load-misses
5244550 -5.0% 4980894 perf-stat.ps.node-loads
11218214 -13.1% 9746115 perf-stat.ps.node-store-misses
12908974 -6.5% 12064855 perf-stat.ps.node-stores
252.25 ± 40% -32.0% 171.50 ± 4% interrupts.40:PCI-MSI.1572869-edge.eth0-TxRx-5
10875 ± 4% -15.8% 9161 ± 8% interrupts.CPU0.RES:Rescheduling_interrupts
7594 ± 2% -23.1% 5838 ± 6% interrupts.CPU1.RES:Rescheduling_interrupts
7393 -35.6% 4762 ± 3% interrupts.CPU10.RES:Rescheduling_interrupts
7422 ± 13% -33.1% 4965 ± 8% interrupts.CPU11.RES:Rescheduling_interrupts
7029 ± 2% -35.2% 4558 ± 2% interrupts.CPU12.RES:Rescheduling_interrupts
7067 ± 2% -36.0% 4526 interrupts.CPU13.RES:Rescheduling_interrupts
7201 ± 2% -34.6% 4709 interrupts.CPU14.RES:Rescheduling_interrupts
7104 ± 2% -36.1% 4537 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
7208 -35.5% 4646 ± 4% interrupts.CPU16.RES:Rescheduling_interrupts
7127 -33.9% 4709 ± 9% interrupts.CPU17.RES:Rescheduling_interrupts
7395 ± 3% -36.3% 4709 ± 2% interrupts.CPU18.RES:Rescheduling_interrupts
7005 -34.4% 4596 ± 3% interrupts.CPU19.RES:Rescheduling_interrupts
7853 ± 3% -31.2% 5406 ± 8% interrupts.CPU2.RES:Rescheduling_interrupts
7205 -31.2% 4958 ± 8% interrupts.CPU20.RES:Rescheduling_interrupts
7137 -34.3% 4686 ± 3% interrupts.CPU21.RES:Rescheduling_interrupts
6968 -32.7% 4691 interrupts.CPU22.RES:Rescheduling_interrupts
6914 -30.8% 4787 ± 3% interrupts.CPU23.RES:Rescheduling_interrupts
6922 -32.8% 4651 ± 2% interrupts.CPU24.RES:Rescheduling_interrupts
7184 ± 2% -35.8% 4615 ± 2% interrupts.CPU25.RES:Rescheduling_interrupts
7341 ± 2% -36.9% 4635 ± 3% interrupts.CPU26.RES:Rescheduling_interrupts
7230 -34.7% 4720 ± 4% interrupts.CPU27.RES:Rescheduling_interrupts
7205 ± 2% -31.4% 4939 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts
7064 -33.9% 4667 ± 3% interrupts.CPU29.RES:Rescheduling_interrupts
8138 ± 11% -36.3% 5183 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
7003 -33.7% 4644 ± 2% interrupts.CPU30.RES:Rescheduling_interrupts
7189 ± 3% -37.2% 4512 ± 3% interrupts.CPU31.RES:Rescheduling_interrupts
7236 ± 4% -35.7% 4654 interrupts.CPU32.RES:Rescheduling_interrupts
7071 -35.4% 4571 interrupts.CPU33.RES:Rescheduling_interrupts
7111 ± 2% -36.1% 4540 interrupts.CPU34.RES:Rescheduling_interrupts
7196 ± 2% -33.4% 4791 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
6758 -34.0% 4462 interrupts.CPU36.RES:Rescheduling_interrupts
6853 -35.7% 4403 ± 4% interrupts.CPU37.RES:Rescheduling_interrupts
7193 ± 3% -34.8% 4691 interrupts.CPU38.RES:Rescheduling_interrupts
7197 ± 5% -35.2% 4660 ± 3% interrupts.CPU39.RES:Rescheduling_interrupts
7753 -31.2% 5333 ± 7% interrupts.CPU4.RES:Rescheduling_interrupts
7031 -33.0% 4711 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
6888 -35.7% 4431 interrupts.CPU41.RES:Rescheduling_interrupts
6914 ± 2% -32.8% 4644 ± 3% interrupts.CPU42.RES:Rescheduling_interrupts
7186 ± 6% -37.4% 4498 interrupts.CPU43.RES:Rescheduling_interrupts
7018 ± 2% -36.5% 4458 ± 2% interrupts.CPU44.RES:Rescheduling_interrupts
7083 ± 3% -37.4% 4430 ± 2% interrupts.CPU45.RES:Rescheduling_interrupts
7044 ± 3% -35.8% 4521 ± 2% interrupts.CPU46.RES:Rescheduling_interrupts
6916 -34.3% 4546 interrupts.CPU47.RES:Rescheduling_interrupts
7012 ± 3% -36.6% 4443 ± 2% interrupts.CPU48.RES:Rescheduling_interrupts
6966 -35.3% 4506 ± 3% interrupts.CPU49.RES:Rescheduling_interrupts
252.25 ± 40% -32.0% 171.50 ± 4% interrupts.CPU5.40:PCI-MSI.1572869-edge.eth0-TxRx-5
7180 ± 2% -33.7% 4760 ± 3% interrupts.CPU5.RES:Rescheduling_interrupts
6969 -35.1% 4523 interrupts.CPU50.RES:Rescheduling_interrupts
7000 -34.5% 4587 interrupts.CPU51.RES:Rescheduling_interrupts
7039 ± 2% -34.0% 4643 ± 3% interrupts.CPU52.RES:Rescheduling_interrupts
6917 -35.8% 4443 ± 2% interrupts.CPU53.RES:Rescheduling_interrupts
7059 ± 2% -37.2% 4430 ± 2% interrupts.CPU54.RES:Rescheduling_interrupts
6918 -33.8% 4580 ± 2% interrupts.CPU55.RES:Rescheduling_interrupts
7187 -35.1% 4662 ± 3% interrupts.CPU56.RES:Rescheduling_interrupts
6875 -33.5% 4568 interrupts.CPU57.RES:Rescheduling_interrupts
7008 -34.7% 4574 ± 3% interrupts.CPU58.RES:Rescheduling_interrupts
6891 -32.0% 4683 ± 3% interrupts.CPU59.RES:Rescheduling_interrupts
7151 ± 4% -33.2% 4780 ± 2% interrupts.CPU6.RES:Rescheduling_interrupts
6903 -30.9% 4767 ± 7% interrupts.CPU60.RES:Rescheduling_interrupts
7063 -33.4% 4706 ± 3% interrupts.CPU61.RES:Rescheduling_interrupts
6855 ± 3% -33.9% 4532 ± 4% interrupts.CPU62.RES:Rescheduling_interrupts
7054 -35.1% 4578 ± 2% interrupts.CPU63.RES:Rescheduling_interrupts
7195 ± 2% -37.0% 4531 ± 2% interrupts.CPU64.RES:Rescheduling_interrupts
7040 ± 2% -35.3% 4556 ± 4% interrupts.CPU65.RES:Rescheduling_interrupts
6848 -33.7% 4538 ± 2% interrupts.CPU66.RES:Rescheduling_interrupts
7026 ± 2% -35.4% 4538 interrupts.CPU67.RES:Rescheduling_interrupts
7055 -34.2% 4645 ± 2% interrupts.CPU68.RES:Rescheduling_interrupts
6876 -33.4% 4576 interrupts.CPU69.RES:Rescheduling_interrupts
7598 ± 5% -35.5% 4898 ± 2% interrupts.CPU7.RES:Rescheduling_interrupts
7097 ± 2% -35.9% 4548 ± 5% interrupts.CPU70.RES:Rescheduling_interrupts
6982 -35.7% 4488 interrupts.CPU71.RES:Rescheduling_interrupts
7378 ± 2% -36.2% 4705 ± 2% interrupts.CPU8.RES:Rescheduling_interrupts
7349 -34.9% 4784 interrupts.CPU9.RES:Rescheduling_interrupts
83.75 ± 11% +52.5% 127.75 ± 18% interrupts.IWI:IRQ_work_interrupts
516736 -34.1% 340755 interrupts.RES:Rescheduling_interrupts
22.25 -1.3 20.95 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
22.22 -1.3 20.93 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.69 ± 5% -1.1 2.60 ± 3% perf-profile.calltrace.cycles-pp.search_binary_handler.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.69 ± 5% -1.1 2.59 ± 3% perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve.do_syscall_64
1.71 ± 4% -0.8 0.88 ± 4% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
1.64 ± 4% -0.8 0.84 ± 4% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
1.02 -0.8 0.27 ±100% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.25 ± 3% -0.6 0.60 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.74 ± 2% -0.5 3.20 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.06 ± 13% -0.5 1.56 ± 3% perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve
1.86 ± 3% -0.5 1.36 ± 2% perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.97 ± 12% -0.5 1.51 ± 3% perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common
1.96 ± 12% -0.5 1.50 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
1.70 ± 3% -0.4 1.27 ± 2% perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.29 ± 7% -0.4 0.87 ± 16% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.29 ± 7% -0.4 0.87 ± 16% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
4.20 ± 3% -0.4 3.80 ± 3% perf-profile.calltrace.cycles-pp.execve
1.21 ± 6% -0.4 0.85 ± 2% perf-profile.calltrace.cycles-pp.setlocale
3.81 ± 3% -0.4 3.46 ± 3% perf-profile.calltrace.cycles-pp.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.4 3.49 ± 3% perf-profile.calltrace.cycles-pp.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.3 3.50 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.3 3.50 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.73 ± 6% -0.2 0.54 perf-profile.calltrace.cycles-pp._dl_addr
0.91 ± 3% -0.2 0.75 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
0.89 ± 3% -0.1 0.74 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.68 ± 4% -0.1 0.60 ± 5% perf-profile.calltrace.cycles-pp.__pte_alloc.copy_page_range.copy_process._do_fork.do_syscall_64
0.72 ± 3% +0.1 0.79 ± 4% perf-profile.calltrace.cycles-pp.read
0.56 ± 7% +0.1 0.66 perf-profile.calltrace.cycles-pp.do_brk_flags.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
0.54 ± 6% +0.1 0.64 ± 4% perf-profile.calltrace.cycles-pp.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap.mmput
0.91 ± 3% +0.1 1.03 perf-profile.calltrace.cycles-pp.anon_vma_interval_tree_insert.anon_vma_clone.anon_vma_fork.copy_process._do_fork
0.56 ± 3% +0.1 0.68 ± 4% perf-profile.calltrace.cycles-pp.close
0.70 ± 6% +0.1 0.82 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
0.71 ± 6% +0.1 0.83 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.kill
0.96 ± 6% +0.1 1.11 ± 3% perf-profile.calltrace.cycles-pp.kill
0.83 ± 3% +0.2 0.98 ± 2% perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
0.94 +0.2 1.09 ± 2% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 +0.2 1.22 ± 2% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.54 ± 4% +0.2 0.71 ± 3% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.do_munmap.sys_brk.do_syscall_64
0.54 ± 4% +0.2 0.71 ± 3% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.unmap_region.do_munmap.sys_brk
1.12 +0.2 1.29 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.10 +0.2 1.27 ± 3% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.13 +0.2 1.31 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.71 ± 6% +0.2 0.91 ± 3% perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.75 ± 5% +0.2 0.96 ± 3% perf-profile.calltrace.cycles-pp.do_munmap.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
1.86 ± 2% +0.2 2.08 ± 3% perf-profile.calltrace.cycles-pp.anon_vma_clone.anon_vma_fork.copy_process._do_fork.do_syscall_64
3.38 ± 3% +0.3 3.65 perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
1.72 ± 2% +0.3 1.98 perf-profile.calltrace.cycles-pp.write
1.93 ± 2% +0.3 2.23 perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.28 ±100% +0.3 0.60 ± 4% perf-profile.calltrace.cycles-pp.do_notify_parent.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
2.17 ± 4% +0.3 2.49 ± 2% perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.do_exit
0.26 ±100% +0.3 0.59 ± 4% perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap
1.44 ± 6% +0.3 1.78 perf-profile.calltrace.cycles-pp.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
1.47 ± 6% +0.3 1.81 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
2.81 ± 3% +0.4 3.16 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
1.48 ± 6% +0.4 1.84 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
2.75 +0.4 3.10 ± 3% perf-profile.calltrace.cycles-pp.anon_vma_fork.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.71 ± 4% +0.4 2.09 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.92 ± 2% +0.4 6.30 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.73 ± 4% +0.4 2.11 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.86 ± 4% +0.4 2.25 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
5.77 +0.4 6.16 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.89 ± 4% +0.4 2.28 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
0.13 ±173% +0.4 0.53 ± 2% perf-profile.calltrace.cycles-pp.remove_vma.exit_mmap.mmput.do_exit.do_group_exit
1.89 ± 4% +0.4 2.30 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
0.27 ±100% +0.4 0.69 perf-profile.calltrace.cycles-pp.wait_consider_task.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64
1.97 ± 6% +0.4 2.40 perf-profile.calltrace.cycles-pp.brk
2.36 ± 4% +0.5 2.81 perf-profile.calltrace.cycles-pp.creat
0.13 ±173% +0.5 0.60 ± 8% perf-profile.calltrace.cycles-pp.down_write.anon_vma_fork.copy_process._do_fork.do_syscall_64
0.67 ± 4% +0.5 1.16 ± 4% perf-profile.calltrace.cycles-pp.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64
3.65 ± 2% +0.5 4.15 perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput
0.13 ±173% +0.5 0.63 perf-profile.calltrace.cycles-pp.release_task.wait_consider_task.do_wait.kernel_wait4.SYSC_wait4
3.68 ± 2% +0.5 4.20 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
3.67 ± 2% +0.5 4.19 perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
1.44 ± 2% +0.7 2.13 perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.45 ± 2% +0.7 2.15 perf-profile.calltrace.cycles-pp.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.47 ± 2% +0.7 2.17 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait
1.46 ± 2% +0.7 2.17 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.45 ± 2% +0.7 2.16 perf-profile.calltrace.cycles-pp.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.69 ± 2% +0.7 2.43 perf-profile.calltrace.cycles-pp.wait
11.98 +0.7 12.73 perf-profile.calltrace.cycles-pp.__libc_fork
9.99 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fork
9.99 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
9.98 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
9.35 +0.8 10.18 perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
27.80 +0.8 28.64 perf-profile.calltrace.cycles-pp.secondary_startup_64
0.00 +0.9 0.86 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4
27.36 +0.9 28.29 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
27.36 +0.9 28.29 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
27.35 +0.9 28.29 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
26.38 +1.0 27.42 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.81 +1.2 14.97 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
13.84 +1.2 15.00 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.25 +1.3 25.56 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase/ucode:
10000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
70910 ± 3% -32.0% 48230 ± 2% stream.add_bandwidth_MBps
76061 -39.0% 46386 ± 3% stream.copy_bandwidth_MBps
67937 ± 2% -35.7% 43683 ± 2% stream.scale_bandwidth_MBps
52.42 +47.9% 77.52 ± 2% stream.time.user_time
74940 -33.4% 49941 ± 2% stream.triad_bandwidth_MBps
19.95 ± 9% +3.4 23.35 ± 2% mpstat.cpu.usr%
3192799 ± 31% +124.6% 7172184 ± 16% cpuidle.C1E.time
23982 ± 9% +47.3% 35332 ± 28% cpuidle.C1E.usage
91178 ± 8% +43.2% 130560 ± 4% meminfo.AnonHugePages
669014 ± 11% -20.3% 533101 meminfo.max_used_kB
643.25 ± 15% +408.3% 3269 ±129% softirqs.CPU16.RCU
10783 ± 4% +13.6% 12250 ± 10% softirqs.CPU46.TIMER
84.00 -4.2% 80.50 vmstat.cpu.id
15.00 ± 8% +23.3% 18.50 ± 2% vmstat.cpu.us
5888 +3.3% 6081 proc-vmstat.nr_mapped
985.50 ± 2% +7.6% 1060 ± 2% proc-vmstat.nr_page_table_pages
81826 +2.9% 84193 proc-vmstat.numa_hit
64731 +3.6% 67074 proc-vmstat.numa_local
20069 ± 12% +60.3% 32178 ± 32% turbostat.C1E
1.15 ± 31% +1.2 2.33 ± 17% turbostat.C1E%
34.77 ± 4% -24.8% 26.13 ± 2% turbostat.CPU%c1
30.82 ± 16% +25.9% 38.80 ± 9% turbostat.CPU%c6
289644 ± 6% +17.8% 341157 ± 7% turbostat.IRQ
24.77 ± 6% -10.6% 22.14 ± 2% turbostat.RAMWatt
672.00 ± 9% +46.4% 984.00 ± 22% slabinfo.dmaengine-unmap-16.active_objs
672.00 ± 9% +46.4% 984.00 ± 22% slabinfo.dmaengine-unmap-16.num_objs
2128 ± 9% -25.9% 1576 ± 9% slabinfo.eventpoll_epi.active_objs
2128 ± 9% -25.9% 1576 ± 9% slabinfo.eventpoll_epi.num_objs
3724 ± 9% -25.9% 2758 ± 9% slabinfo.eventpoll_pwq.active_objs
3724 ± 9% -25.9% 2758 ± 9% slabinfo.eventpoll_pwq.num_objs
760.00 ± 6% -17.1% 630.00 ± 2% slabinfo.file_lock_cache.active_objs
760.00 ± 6% -17.1% 630.00 ± 2% slabinfo.file_lock_cache.num_objs
7771 ± 60% +112.3% 16501 numa-meminfo.node0.Inactive(anon)
8035 ± 58% +111.6% 17005 ± 2% numa-meminfo.node0.Shmem
93367 ± 2% +12.8% 105291 numa-meminfo.node0.Slab
21669 +8.8% 23569 ± 4% numa-meminfo.node1.Active(file)
9225 ± 51% -94.7% 493.50 ± 62% numa-meminfo.node1.Inactive(anon)
31028 ± 6% -10.9% 27656 numa-meminfo.node1.SReclaimable
62893 ± 4% -13.6% 54313 ± 7% numa-meminfo.node1.SUnreclaim
9625 ± 49% -92.8% 690.00 ± 51% numa-meminfo.node1.Shmem
93922 -12.7% 81969 ± 5% numa-meminfo.node1.Slab
1942 ± 61% +112.4% 4125 numa-vmstat.node0.nr_inactive_anon
2007 ± 59% +111.7% 4250 ± 2% numa-vmstat.node0.nr_shmem
1942 ± 61% +112.4% 4125 numa-vmstat.node0.nr_zone_inactive_anon
5390 +9.3% 5892 ± 4% numa-vmstat.node1.nr_active_file
2305 ± 51% -94.7% 123.25 ± 62% numa-vmstat.node1.nr_inactive_anon
2404 ± 49% -92.8% 172.00 ± 51% numa-vmstat.node1.nr_shmem
15666 ± 4% -13.3% 13579 ± 7% numa-vmstat.node1.nr_slab_unreclaimable
5390 +9.3% 5892 ± 4% numa-vmstat.node1.nr_zone_active_file
2305 ± 51% -94.7% 123.25 ± 62% numa-vmstat.node1.nr_zone_inactive_anon
11.33 ±120% -11.3 0.00 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.33 ±103% -7.8 12.50 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.33 ±103% -7.8 12.50 ±173% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
12.83 ±108% -0.3 12.50 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
12.83 ±108% -0.3 12.50 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
27016 ± 6% -12.9% 23538 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
2243 ± 40% -47.7% 1173 ± 35% sched_debug.cfs_rq:/.min_vruntime.min
-692.28 +943.1% -7221 sched_debug.cfs_rq:/.spread0.avg
19808 ± 11% -47.9% 10321 ± 61% sched_debug.cfs_rq:/.spread0.max
-5010 +140.7% -12059 sched_debug.cfs_rq:/.spread0.min
2.18 ± 9% -12.5% 1.91 ± 8% sched_debug.cpu.cpu_load[0].avg
160.00 ± 22% -42.2% 92.50 ± 25% sched_debug.cpu.cpu_load[4].max
18.25 ± 22% -37.0% 11.50 ± 14% sched_debug.cpu.cpu_load[4].stddev
282.50 ± 11% -16.2% 236.72 ± 6% sched_debug.cpu.curr->pid.avg
642.46 ± 4% -7.3% 595.31 sched_debug.cpu.curr->pid.stddev
229.75 -19.0% 186.00 ± 9% sched_debug.cpu.nr_switches.min
1754 ± 11% +30.2% 2284 ± 8% sched_debug.cpu.nr_switches.stddev
5.25 ± 15% +128.6% 12.00 ± 20% sched_debug.cpu.nr_uninterruptible.max
2.38 ± 13% +50.4% 3.57 ± 19% sched_debug.cpu.nr_uninterruptible.stddev
1.56 ±171% +602.6% 10.93 ± 52% sched_debug.cpu.ttwu_count.avg
9.75 ±161% +2407.7% 244.50 ± 51% sched_debug.cpu.ttwu_count.max
3.53 ±167% +1270.9% 48.37 ± 50% sched_debug.cpu.ttwu_count.stddev
3.14 ± 12% +41.1% 4.43 ± 17% perf-stat.i.cpi
0.32 ± 13% -24.8% 0.24 ± 12% perf-stat.i.ipc
14784 ± 4% -25.8% 10970 ± 16% perf-stat.i.minor-faults
37.43 ± 11% -32.1 5.36 ± 27% perf-stat.i.node-load-miss-rate%
1700226 ± 23% -86.3% 233356 ± 9% perf-stat.i.node-load-misses
2814735 ± 20% +183.3% 7973698 ± 61% perf-stat.i.node-loads
36.40 ± 15% -34.6 1.80 ± 79% perf-stat.i.node-store-miss-rate%
3649361 ± 35% -95.2% 175235 ± 54% perf-stat.i.node-store-misses
14790 ± 4% -25.8% 10971 ± 16% perf-stat.i.page-faults
3.14 ± 12% +45.8% 4.58 ± 20% perf-stat.overall.cpi
0.32 ± 13% -29.6% 0.23 ± 20% perf-stat.overall.ipc
37.43 ± 11% -33.1 4.30 ± 57% perf-stat.overall.node-load-miss-rate%
36.40 ± 15% -34.8 1.58 ± 99% perf-stat.overall.node-store-miss-rate%
7440 ± 4% -15.4% 6296 ± 6% perf-stat.ps.minor-faults
855845 ± 24% -83.9% 138073 ± 21% perf-stat.ps.node-load-misses
1416749 ± 20% +257.9% 5070429 ± 69% perf-stat.ps.node-loads
1837180 ± 35% -94.4% 103194 ± 56% perf-stat.ps.node-store-misses
7443 ± 4% -15.4% 6297 ± 6% perf-stat.ps.page-faults
2645 ± 6% +21.1% 3205 ± 8% interrupts.CPU0.LOC:Local_timer_interrupts
2659 ± 5% +19.9% 3189 ± 8% interrupts.CPU1.LOC:Local_timer_interrupts
2690 ± 4% +21.8% 3278 ± 10% interrupts.CPU10.LOC:Local_timer_interrupts
2649 ± 5% +22.0% 3233 ± 8% interrupts.CPU11.LOC:Local_timer_interrupts
2665 ± 5% +20.0% 3198 ± 8% interrupts.CPU12.LOC:Local_timer_interrupts
2697 ± 7% +18.6% 3200 ± 8% interrupts.CPU13.LOC:Local_timer_interrupts
2668 ± 5% +19.6% 3192 ± 8% interrupts.CPU14.LOC:Local_timer_interrupts
2656 ± 5% +21.3% 3223 ± 7% interrupts.CPU15.LOC:Local_timer_interrupts
2652 ± 5% +20.6% 3199 ± 8% interrupts.CPU16.LOC:Local_timer_interrupts
2680 ± 4% +20.4% 3227 ± 8% interrupts.CPU17.LOC:Local_timer_interrupts
2653 ± 5% +21.5% 3224 ± 9% interrupts.CPU18.LOC:Local_timer_interrupts
2685 ± 5% +20.2% 3228 ± 8% interrupts.CPU19.LOC:Local_timer_interrupts
2676 ± 5% +20.7% 3229 ± 9% interrupts.CPU2.LOC:Local_timer_interrupts
2665 ± 5% +20.6% 3215 ± 9% interrupts.CPU20.LOC:Local_timer_interrupts
2681 ± 5% +19.3% 3198 ± 8% interrupts.CPU21.LOC:Local_timer_interrupts
2653 ± 5% +20.8% 3206 ± 8% interrupts.CPU22.LOC:Local_timer_interrupts
2643 ± 6% +22.8% 3245 ± 8% interrupts.CPU23.LOC:Local_timer_interrupts
2654 ± 5% +21.7% 3231 ± 7% interrupts.CPU24.LOC:Local_timer_interrupts
2649 ± 5% +21.8% 3226 ± 7% interrupts.CPU25.LOC:Local_timer_interrupts
2707 ± 7% +20.0% 3248 ± 7% interrupts.CPU26.LOC:Local_timer_interrupts
2701 ± 6% +20.2% 3247 ± 8% interrupts.CPU27.LOC:Local_timer_interrupts
2701 ± 6% +18.8% 3208 ± 8% interrupts.CPU28.LOC:Local_timer_interrupts
2702 ± 5% +18.4% 3201 ± 8% interrupts.CPU29.LOC:Local_timer_interrupts
2688 ± 6% +18.8% 3193 ± 8% interrupts.CPU30.LOC:Local_timer_interrupts
2641 ± 6% +21.5% 3208 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
2666 ± 4% +20.4% 3210 ± 8% interrupts.CPU32.LOC:Local_timer_interrupts
2639 ± 5% +21.4% 3204 ± 8% interrupts.CPU33.LOC:Local_timer_interrupts
2656 ± 5% +22.2% 3246 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
2664 ± 5% +21.8% 3245 ± 8% interrupts.CPU35.LOC:Local_timer_interrupts
2680 ± 6% +20.9% 3241 ± 8% interrupts.CPU36.LOC:Local_timer_interrupts
2650 ± 6% +22.4% 3245 ± 7% interrupts.CPU37.LOC:Local_timer_interrupts
2653 ± 6% +20.8% 3206 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
2653 ± 5% +21.6% 3226 ± 8% interrupts.CPU39.LOC:Local_timer_interrupts
2650 ± 5% +23.6% 3276 ± 6% interrupts.CPU4.LOC:Local_timer_interrupts
2679 ± 6% +19.6% 3204 ± 8% interrupts.CPU40.LOC:Local_timer_interrupts
2649 ± 5% +20.9% 3204 ± 8% interrupts.CPU41.LOC:Local_timer_interrupts
2666 ± 6% +20.2% 3204 ± 8% interrupts.CPU42.LOC:Local_timer_interrupts
2699 ± 6% +18.9% 3209 ± 8% interrupts.CPU43.LOC:Local_timer_interrupts
2685 ± 8% +21.2% 3254 ± 9% interrupts.CPU44.LOC:Local_timer_interrupts
2718 ± 8% +18.9% 3233 ± 8% interrupts.CPU45.LOC:Local_timer_interrupts
2644 ± 6% +21.0% 3200 ± 9% interrupts.CPU46.LOC:Local_timer_interrupts
2672 ± 5% +19.8% 3201 ± 9% interrupts.CPU47.LOC:Local_timer_interrupts
2662 ± 5% +21.0% 3220 ± 9% interrupts.CPU48.LOC:Local_timer_interrupts
2676 ± 5% +20.2% 3216 ± 8% interrupts.CPU49.LOC:Local_timer_interrupts
2655 ± 5% +20.5% 3200 ± 8% interrupts.CPU5.LOC:Local_timer_interrupts
623.75 +14.3% 712.75 ± 11% interrupts.CPU50.CAL:Function_call_interrupts
2672 ± 6% +19.4% 3191 ± 8% interrupts.CPU50.LOC:Local_timer_interrupts
2653 ± 5% +20.4% 3194 ± 8% interrupts.CPU51.LOC:Local_timer_interrupts
2640 ± 6% +20.9% 3193 ± 8% interrupts.CPU52.LOC:Local_timer_interrupts
2665 ± 6% +19.9% 3196 ± 8% interrupts.CPU53.LOC:Local_timer_interrupts
2657 ± 5% +20.0% 3189 ± 8% interrupts.CPU54.LOC:Local_timer_interrupts
2653 ± 5% +21.1% 3212 ± 9% interrupts.CPU55.LOC:Local_timer_interrupts
2659 ± 5% +20.0% 3192 ± 8% interrupts.CPU56.LOC:Local_timer_interrupts
208.75 ± 59% -84.7% 32.00 ±173% interrupts.CPU56.TLB:TLB_shootdowns
2657 ± 5% +20.2% 3194 ± 8% interrupts.CPU57.LOC:Local_timer_interrupts
2638 ± 6% +21.0% 3192 ± 8% interrupts.CPU58.LOC:Local_timer_interrupts
2667 ± 7% +19.7% 3194 ± 8% interrupts.CPU59.LOC:Local_timer_interrupts
2649 ± 5% +21.0% 3205 ± 8% interrupts.CPU6.LOC:Local_timer_interrupts
236.00 ± 66% -86.2% 32.50 ±169% interrupts.CPU6.TLB:TLB_shootdowns
2677 ± 6% +19.0% 3186 ± 8% interrupts.CPU60.LOC:Local_timer_interrupts
2674 ± 6% +19.3% 3191 ± 8% interrupts.CPU61.LOC:Local_timer_interrupts
2658 ± 5% +20.2% 3194 ± 8% interrupts.CPU62.LOC:Local_timer_interrupts
2666 ± 5% +20.8% 3221 ± 9% interrupts.CPU63.LOC:Local_timer_interrupts
2690 ± 6% +19.9% 3225 ± 10% interrupts.CPU64.LOC:Local_timer_interrupts
2653 ± 5% +20.4% 3194 ± 8% interrupts.CPU65.LOC:Local_timer_interrupts
652.50 ± 9% +21.9% 795.25 ± 18% interrupts.CPU66.CAL:Function_call_interrupts
2669 ± 5% +19.7% 3196 ± 8% interrupts.CPU66.LOC:Local_timer_interrupts
2655 ± 5% +20.2% 3191 ± 8% interrupts.CPU67.LOC:Local_timer_interrupts
2665 ± 5% +20.4% 3207 ± 8% interrupts.CPU68.LOC:Local_timer_interrupts
614.50 +27.0% 780.50 ± 25% interrupts.CPU69.CAL:Function_call_interrupts
2666 ± 5% +21.2% 3232 ± 9% interrupts.CPU69.LOC:Local_timer_interrupts
2678 ± 6% +21.1% 3243 ± 7% interrupts.CPU7.LOC:Local_timer_interrupts
657.50 ± 9% +24.1% 815.75 ± 21% interrupts.CPU70.CAL:Function_call_interrupts
2676 ± 6% +19.9% 3208 ± 8% interrupts.CPU70.LOC:Local_timer_interrupts
657.00 ± 9% +24.2% 815.75 ± 21% interrupts.CPU71.CAL:Function_call_interrupts
2674 ± 6% +19.5% 3194 ± 8% interrupts.CPU71.LOC:Local_timer_interrupts
2653 ± 5% +21.4% 3221 ± 9% interrupts.CPU72.LOC:Local_timer_interrupts
2670 ± 6% +20.0% 3204 ± 8% interrupts.CPU73.LOC:Local_timer_interrupts
2669 ± 6% +19.8% 3197 ± 9% interrupts.CPU74.LOC:Local_timer_interrupts
652.00 ± 9% +25.3% 816.75 ± 21% interrupts.CPU75.CAL:Function_call_interrupts
2641 ± 5% +21.4% 3205 ± 8% interrupts.CPU75.LOC:Local_timer_interrupts
625.00 +25.8% 786.25 ± 16% interrupts.CPU76.CAL:Function_call_interrupts
2660 ± 5% +20.6% 3208 ± 8% interrupts.CPU76.LOC:Local_timer_interrupts
656.75 ± 9% +20.6% 791.75 ± 17% interrupts.CPU77.CAL:Function_call_interrupts
2665 ± 5% +19.6% 3187 ± 8% interrupts.CPU77.LOC:Local_timer_interrupts
657.00 ± 9% +21.3% 796.75 ± 18% interrupts.CPU78.CAL:Function_call_interrupts
2666 ± 5% +19.7% 3190 ± 8% interrupts.CPU78.LOC:Local_timer_interrupts
2660 ± 5% +19.9% 3190 ± 8% interrupts.CPU79.LOC:Local_timer_interrupts
648.75 ± 7% +19.2% 773.00 ± 16% interrupts.CPU8.CAL:Function_call_interrupts
2647 ± 5% +21.3% 3211 ± 7% interrupts.CPU8.LOC:Local_timer_interrupts
2671 ± 6% +19.5% 3191 ± 8% interrupts.CPU80.LOC:Local_timer_interrupts
2660 ± 5% +20.9% 3217 ± 8% interrupts.CPU81.LOC:Local_timer_interrupts
2639 ± 6% +21.6% 3209 ± 8% interrupts.CPU82.LOC:Local_timer_interrupts
2652 ± 5% +21.5% 3223 ± 7% interrupts.CPU83.LOC:Local_timer_interrupts
2656 ± 5% +20.5% 3201 ± 8% interrupts.CPU84.LOC:Local_timer_interrupts
2678 ± 4% +19.8% 3207 ± 8% interrupts.CPU85.LOC:Local_timer_interrupts
2659 ± 5% +20.7% 3209 ± 8% interrupts.CPU86.LOC:Local_timer_interrupts
656.25 ± 9% +24.8% 819.00 ± 22% interrupts.CPU87.CAL:Function_call_interrupts
2656 ± 5% +20.5% 3200 ± 8% interrupts.CPU87.LOC:Local_timer_interrupts
2655 ± 5% +20.9% 3211 ± 8% interrupts.CPU9.LOC:Local_timer_interrupts
234559 ± 5% +20.5% 282643 ± 8% interrupts.LOC:Local_timer_interrupts
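For reference, the stream workload is McCalpin's STREAM built with OpenMP; the parameters above (array_size=10000000, nr_threads=50% of 88 CPUs) can be approximated by hand along these lines, assuming the stock stream.c — the LKP job file is authoritative:

        # build with the array size used by this job
        gcc -O3 -fopenmp -DSTREAM_ARRAY_SIZE=10000000 stream.c -o stream
        # 50% of 88 hardware threads
        OMP_NUM_THREADS=44 ./stream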
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep6/plzip/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
         %stddev      %change          %stddev
             \           |                 \
4674851 -5.8% 4402603 plzip.time.minor_page_faults
187.93 -1.7% 184.66 plzip.time.system_time
236915 +3.3% 244744 plzip.time.voluntary_context_switches
20188778 ± 7% +62.1% 32719513 ± 32% cpuidle.POLL.time
0.00 -0.0 0.00 ± 24% mpstat.cpu.soft%
11424 ± 49% -62.2% 4321 ±169% numa-numastat.node0.other_node
2223 -2.7% 2163 vmstat.system.cs
199507 -6.3% 186973 vmstat.system.in
11254 +22.5% 13784 ± 16% numa-meminfo.node0.Mapped
75.00 +544.0% 483.00 ± 81% numa-meminfo.node0.Mlocked
75.00 +544.0% 483.00 ± 81% numa-meminfo.node0.Unevictable
38772 ± 5% -11.6% 34263 ± 10% numa-meminfo.node1.SReclaimable
2793 +25.4% 3502 ± 14% numa-vmstat.node0.nr_mapped
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_mlock
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_unevictable
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_zone_unevictable
11543 ± 49% -60.8% 4519 ±159% numa-vmstat.node0.numa_other
9693 ± 5% -11.6% 8566 ± 10% numa-vmstat.node1.nr_slab_reclaimable
3008454 -9.0% 2737405 proc-vmstat.numa_hint_faults
2421331 -7.5% 2240371 proc-vmstat.numa_hint_faults_local
185494 -11.9% 163404 proc-vmstat.numa_huge_pte_updates
5591805 ± 3% -12.4% 4899638 ± 3% proc-vmstat.numa_pages_migrated
98610869 -11.8% 86933972 proc-vmstat.numa_pte_updates
5570737 -4.9% 5295155 proc-vmstat.pgfault
5591805 ± 3% -12.4% 4899638 ± 3% proc-vmstat.pgmigrate_success
2358 ± 3% -24.9% 1770 ± 14% slabinfo.eventpoll_epi.active_objs
2358 ± 3% -24.9% 1770 ± 14% slabinfo.eventpoll_epi.num_objs
4127 ± 3% -25.6% 3071 ± 15% slabinfo.eventpoll_pwq.active_objs
4127 ± 3% -25.6% 3071 ± 15% slabinfo.eventpoll_pwq.num_objs
767.50 ± 5% -16.6% 640.00 ± 10% slabinfo.file_lock_cache.active_objs
767.50 ± 5% -16.6% 640.00 ± 10% slabinfo.file_lock_cache.num_objs
548.00 ± 12% -13.4% 474.75 ± 17% slabinfo.skbuff_fclone_cache.active_objs
548.00 ± 12% -13.4% 474.75 ± 17% slabinfo.skbuff_fclone_cache.num_objs
3424 -13.7% 2955 ± 8% slabinfo.sock_inode_cache.active_objs
3424 -13.7% 2955 ± 8% slabinfo.sock_inode_cache.num_objs
1.38 +8.7% 1.50 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
123441 ± 32% -87.5% 15420 ± 31% sched_debug.cfs_rq:/.spread0.avg
310956 ± 19% -40.1% 186343 ± 7% sched_debug.cfs_rq:/.spread0.max
29.67 ± 5% -7.4% 27.46 ± 5% sched_debug.cpu.cpu_load[1].max
53.50 ± 8% -16.1% 44.88 ± 16% sched_debug.cpu.cpu_load[4].max
4.96 ± 7% -16.0% 4.17 ± 17% sched_debug.cpu.cpu_load[4].stddev
3942 ± 15% -36.1% 2519 ± 42% sched_debug.cpu.curr->pid.min
312.17 ± 4% -12.6% 272.83 ± 4% sched_debug.cpu.sched_goidle.stddev
5485 +31.8% 7231 ± 15% sched_debug.cpu.ttwu_count.max
829.33 ± 6% -10.5% 742.58 ± 12% sched_debug.cpu.ttwu_count.min
3789 ± 3% +38.5% 5249 ± 18% sched_debug.cpu.ttwu_local.max
361.67 ± 16% -24.7% 272.42 ± 9% sched_debug.cpu.ttwu_local.min
42735 ± 4% -9.5% 38655 ± 4% softirqs.CPU1.RCU
43986 -11.1% 39106 ± 6% softirqs.CPU11.RCU
138451 ± 3% -6.2% 129877 ± 3% softirqs.CPU11.TIMER
43616 ± 3% -11.3% 38683 ± 8% softirqs.CPU14.RCU
46391 ± 3% -15.0% 39420 ± 4% softirqs.CPU2.RCU
46679 ± 2% -10.4% 41843 ± 2% softirqs.CPU35.RCU
54405 ± 9% -13.0% 47347 ± 11% softirqs.CPU36.RCU
53662 ± 5% -16.5% 44815 ± 5% softirqs.CPU37.RCU
51563 ± 11% -17.6% 42505 ± 5% softirqs.CPU39.RCU
146287 ± 3% -7.7% 135015 ± 3% softirqs.CPU4.TIMER
51810 ± 8% -12.9% 45103 ± 11% softirqs.CPU41.RCU
144965 ± 3% -7.2% 134488 ± 2% softirqs.CPU5.TIMER
36399 ± 2% +19.8% 43611 ± 2% softirqs.CPU66.RCU
54540 -17.2% 45175 ± 2% softirqs.CPU68.RCU
45027 -10.8% 40167 ± 6% softirqs.CPU7.RCU
46747 ± 10% -20.2% 37293 ± 10% softirqs.CPU77.RCU
42610 ± 2% -10.8% 38011 ± 7% softirqs.CPU83.RCU
43534 ± 3% -10.6% 38935 ± 9% softirqs.CPU87.RCU
45144 ± 4% -8.2% 41426 ± 5% softirqs.CPU9.RCU
2207 -2.7% 2149 perf-stat.i.context-switches
132.50 -6.8% 123.43 ± 2% perf-stat.i.cpu-migrations
0.04 -0.0 0.03 ± 2% perf-stat.i.dTLB-store-miss-rate%
2870039 -7.1% 2665015 perf-stat.i.dTLB-store-misses
69.91 +2.8 72.76 ± 2% perf-stat.i.iTLB-load-miss-rate%
593399 -13.4% 513752 ± 3% perf-stat.i.iTLB-loads
15830 -5.4% 14979 perf-stat.i.minor-faults
5.76 ± 6% -1.1 4.63 ± 12% perf-stat.i.node-load-miss-rate%
29280168 ± 5% -22.1% 22809166 ± 7% perf-stat.i.node-load-misses
5.96 ± 8% -1.4 4.52 ± 11% perf-stat.i.node-store-miss-rate%
4992121 ± 7% -27.5% 3621719 ± 7% perf-stat.i.node-store-misses
15830 -5.4% 14979 perf-stat.i.page-faults
0.04 -0.0 0.03 perf-stat.overall.dTLB-store-miss-rate%
5.46 ± 5% -1.2 4.24 ± 7% perf-stat.overall.node-load-miss-rate%
5.68 ± 7% -1.6 4.12 ± 8% perf-stat.overall.node-store-miss-rate%
2201 -2.6% 2143 perf-stat.ps.context-switches
132.11 -6.8% 123.07 ± 2% perf-stat.ps.cpu-migrations
2861551 -7.1% 2657323 perf-stat.ps.dTLB-store-misses
591695 -13.4% 512284 ± 3% perf-stat.ps.iTLB-loads
15785 -5.4% 14937 perf-stat.ps.minor-faults
29194092 ± 5% -22.1% 22742509 ± 7% perf-stat.ps.node-load-misses
4977398 ± 7% -27.4% 3611184 ± 7% perf-stat.ps.node-store-misses
15785 -5.4% 14937 perf-stat.ps.page-faults
37.67 -3.5 34.22 ± 3% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
36.20 -3.2 32.98 ± 3% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
30.64 -2.6 28.08 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
15.44 ± 8% -2.2 13.23 ± 5% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
16.59 ± 7% -2.2 14.40 ± 4% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
14.94 ± 8% -2.1 12.84 ± 5% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
23.31 ± 2% -2.1 21.23 ± 3% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
11.29 ± 5% -1.7 9.60 ± 5% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
8.27 ± 2% -1.0 7.23 ± 4% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
12.36 -0.6 11.71 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
12.36 -0.6 11.71 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.88 -0.6 3.31 ± 4% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 ± 2% -0.6 3.30 ± 4% perf-profile.calltrace.cycles-pp.pipe_write.__vfs_write.vfs_write.sys_write.do_syscall_64
3.88 -0.6 3.32 ± 4% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 ± 2% -0.6 3.31 ± 3% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.08 -0.5 3.56 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
2.81 ± 2% -0.4 2.39 ± 7% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.__vfs_write.vfs_write.sys_write
2.62 ± 4% -0.4 2.21 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.__vfs_write
2.62 ± 4% -0.4 2.21 ± 6% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.__vfs_write.vfs_write
0.87 ± 6% -0.4 0.47 ± 57% perf-profile.calltrace.cycles-pp.native_apic_msr_eoi_write.smp_apic_timer_interrupt.apic_timer_interrupt
0.83 ± 7% -0.4 0.44 ± 58% perf-profile.calltrace.cycles-pp.account_user_time.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
3.40 ± 3% -0.3 3.12 ± 2% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
3.00 ± 4% -0.3 2.71 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.44 ± 2% -0.2 1.29 ± 10% perf-profile.calltrace.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.77 ± 3% -0.1 0.62 ± 12% perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
545.00 ± 25% -41.9% 316.75 ± 47% interrupts.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
203.50 ± 9% +45.3% 295.75 ± 24% interrupts.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
38166429 -11.0% 33961585 interrupts.CAL:Function_call_interrupts
432615 -10.9% 385659 interrupts.CPU0.CAL:Function_call_interrupts
434222 -11.2% 385591 interrupts.CPU1.CAL:Function_call_interrupts
432253 ± 2% -10.6% 386505 interrupts.CPU10.CAL:Function_call_interrupts
432492 ± 2% -11.0% 384996 interrupts.CPU11.CAL:Function_call_interrupts
450925 ± 2% -7.4% 417361 ± 2% interrupts.CPU11.TLB:TLB_shootdowns
431152 -10.4% 386198 interrupts.CPU12.CAL:Function_call_interrupts
7893 -37.4% 4939 ± 34% interrupts.CPU12.NMI:Non-maskable_interrupts
7893 -37.4% 4939 ± 34% interrupts.CPU12.PMI:Performance_monitoring_interrupts
431989 ± 2% -10.8% 385478 interrupts.CPU13.CAL:Function_call_interrupts
432141 -10.8% 385372 interrupts.CPU14.CAL:Function_call_interrupts
430309 -10.2% 386216 interrupts.CPU15.CAL:Function_call_interrupts
545.00 ± 25% -41.9% 316.75 ± 47% interrupts.CPU16.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
432466 -10.4% 387465 interrupts.CPU16.CAL:Function_call_interrupts
432582 -11.0% 384925 interrupts.CPU17.CAL:Function_call_interrupts
434853 -11.0% 386855 interrupts.CPU18.CAL:Function_call_interrupts
203.50 ± 9% +45.3% 295.75 ± 24% interrupts.CPU19.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
432706 -10.6% 386950 interrupts.CPU19.CAL:Function_call_interrupts
434387 -12.4% 380661 interrupts.CPU2.CAL:Function_call_interrupts
453229 -9.2% 411516 interrupts.CPU2.TLB:TLB_shootdowns
435346 -11.3% 386104 interrupts.CPU20.CAL:Function_call_interrupts
454122 -7.9% 418229 ± 2% interrupts.CPU20.TLB:TLB_shootdowns
434493 -10.9% 387125 interrupts.CPU21.CAL:Function_call_interrupts
621.00 ± 11% +64.5% 1021 ± 20% interrupts.CPU21.RES:Rescheduling_interrupts
425823 ± 3% -9.4% 385897 interrupts.CPU22.CAL:Function_call_interrupts
433468 -11.0% 385821 interrupts.CPU23.CAL:Function_call_interrupts
434913 -11.2% 386097 interrupts.CPU24.CAL:Function_call_interrupts
434374 -11.2% 385692 interrupts.CPU25.CAL:Function_call_interrupts
433527 -10.6% 387596 interrupts.CPU26.CAL:Function_call_interrupts
434308 -11.3% 385115 interrupts.CPU27.CAL:Function_call_interrupts
431181 -10.9% 384326 interrupts.CPU28.CAL:Function_call_interrupts
434448 -11.5% 384490 interrupts.CPU29.CAL:Function_call_interrupts
453474 ± 2% -8.0% 417056 interrupts.CPU29.TLB:TLB_shootdowns
433938 -11.1% 385867 interrupts.CPU3.CAL:Function_call_interrupts
2739 ± 14% -41.8% 1594 ± 27% interrupts.CPU3.RES:Rescheduling_interrupts
433773 -11.0% 386014 interrupts.CPU30.CAL:Function_call_interrupts
434250 -11.1% 385917 interrupts.CPU31.CAL:Function_call_interrupts
436173 -11.6% 385778 interrupts.CPU32.CAL:Function_call_interrupts
455271 ± 2% -8.0% 418794 ± 2% interrupts.CPU32.TLB:TLB_shootdowns
435787 -11.4% 386189 interrupts.CPU33.CAL:Function_call_interrupts
436438 -11.5% 386321 interrupts.CPU34.CAL:Function_call_interrupts
433917 -11.4% 384653 interrupts.CPU35.CAL:Function_call_interrupts
434362 -10.8% 387306 interrupts.CPU36.CAL:Function_call_interrupts
434686 -11.3% 385416 interrupts.CPU37.CAL:Function_call_interrupts
453983 ± 2% -7.8% 418472 interrupts.CPU37.TLB:TLB_shootdowns
433245 -10.7% 386871 interrupts.CPU38.CAL:Function_call_interrupts
430705 -10.0% 387585 interrupts.CPU39.CAL:Function_call_interrupts
434411 -11.2% 385547 interrupts.CPU4.CAL:Function_call_interrupts
1251 ± 9% +90.3% 2380 ± 32% interrupts.CPU4.RES:Rescheduling_interrupts
434631 -11.1% 386349 interrupts.CPU40.CAL:Function_call_interrupts
436453 -11.6% 385886 interrupts.CPU41.CAL:Function_call_interrupts
436514 -11.5% 386455 interrupts.CPU42.CAL:Function_call_interrupts
432629 ± 2% -11.0% 385234 interrupts.CPU43.CAL:Function_call_interrupts
1982 ± 40% -39.4% 1201 ± 44% interrupts.CPU43.RES:Rescheduling_interrupts
438403 -11.0% 390257 interrupts.CPU44.CAL:Function_call_interrupts
437673 -10.6% 391240 interrupts.CPU45.CAL:Function_call_interrupts
436956 -10.0% 393070 interrupts.CPU46.CAL:Function_call_interrupts
436535 ± 2% -10.3% 391586 interrupts.CPU47.CAL:Function_call_interrupts
436404 ± 2% -10.3% 391331 interrupts.CPU48.CAL:Function_call_interrupts
440924 -11.0% 392351 interrupts.CPU49.CAL:Function_call_interrupts
455336 ± 2% -8.0% 418988 ± 2% interrupts.CPU49.TLB:TLB_shootdowns
429597 -10.2% 385817 interrupts.CPU5.CAL:Function_call_interrupts
2657 ± 16% -57.7% 1125 ± 38% interrupts.CPU5.RES:Rescheduling_interrupts
432290 -10.8% 385504 interrupts.CPU50.CAL:Function_call_interrupts
430785 ± 2% -10.2% 386960 interrupts.CPU51.CAL:Function_call_interrupts
433428 ± 2% -11.0% 385627 interrupts.CPU52.CAL:Function_call_interrupts
452795 ± 2% -7.6% 418273 interrupts.CPU52.TLB:TLB_shootdowns
432275 -10.8% 385758 interrupts.CPU53.CAL:Function_call_interrupts
432401 -10.8% 385721 interrupts.CPU54.CAL:Function_call_interrupts
433685 -11.2% 384984 interrupts.CPU55.CAL:Function_call_interrupts
434907 -11.3% 385865 interrupts.CPU56.CAL:Function_call_interrupts
454111 ± 2% -7.7% 419097 ± 2% interrupts.CPU56.TLB:TLB_shootdowns
433831 ± 2% -11.6% 383393 interrupts.CPU57.CAL:Function_call_interrupts
452988 ± 2% -8.1% 416171 interrupts.CPU57.TLB:TLB_shootdowns
434640 -11.0% 386694 interrupts.CPU58.CAL:Function_call_interrupts
433779 -11.1% 385790 ± 2% interrupts.CPU59.CAL:Function_call_interrupts
433594 -11.3% 384456 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
433902 -11.5% 383949 interrupts.CPU60.CAL:Function_call_interrupts
453607 -8.1% 416661 interrupts.CPU60.TLB:TLB_shootdowns
432735 -10.7% 386644 interrupts.CPU61.CAL:Function_call_interrupts
430100 ± 2% -10.3% 385648 interrupts.CPU62.CAL:Function_call_interrupts
1361 ± 17% -46.0% 734.75 ± 21% interrupts.CPU62.RES:Rescheduling_interrupts
430135 ± 2% -10.4% 385243 interrupts.CPU63.CAL:Function_call_interrupts
433506 -11.4% 384298 interrupts.CPU64.CAL:Function_call_interrupts
428442 -10.6% 383084 interrupts.CPU65.CAL:Function_call_interrupts
934.00 +41.7% 1323 ± 6% interrupts.CPU65.RES:Rescheduling_interrupts
429851 -10.2% 385861 ± 2% interrupts.CPU66.CAL:Function_call_interrupts
432662 -10.6% 386768 interrupts.CPU67.CAL:Function_call_interrupts
1137 ± 30% -37.3% 713.25 ± 22% interrupts.CPU67.RES:Rescheduling_interrupts
435998 -11.2% 387345 interrupts.CPU68.CAL:Function_call_interrupts
433910 -11.1% 385547 ± 2% interrupts.CPU69.CAL:Function_call_interrupts
433665 -11.1% 385376 interrupts.CPU7.CAL:Function_call_interrupts
433964 -11.2% 385200 interrupts.CPU70.CAL:Function_call_interrupts
434280 -11.2% 385800 interrupts.CPU71.CAL:Function_call_interrupts
436755 -12.1% 383884 interrupts.CPU72.CAL:Function_call_interrupts
434534 -11.1% 386259 interrupts.CPU73.CAL:Function_call_interrupts
435838 -11.3% 386437 interrupts.CPU74.CAL:Function_call_interrupts
849.50 +22.9% 1044 ± 15% interrupts.CPU74.RES:Rescheduling_interrupts
431545 -11.2% 383340 interrupts.CPU75.CAL:Function_call_interrupts
435028 -11.6% 384709 interrupts.CPU76.CAL:Function_call_interrupts
434623 -11.7% 383907 interrupts.CPU77.CAL:Function_call_interrupts
432569 -11.0% 384848 interrupts.CPU78.CAL:Function_call_interrupts
433842 -12.4% 379948 ± 3% interrupts.CPU79.CAL:Function_call_interrupts
453571 -9.3% 411502 ± 3% interrupts.CPU79.TLB:TLB_shootdowns
434907 -11.6% 384337 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
435116 -11.4% 385431 interrupts.CPU80.CAL:Function_call_interrupts
677.50 ± 25% -24.1% 514.00 ± 28% interrupts.CPU80.RES:Rescheduling_interrupts
432192 -11.0% 384794 interrupts.CPU81.CAL:Function_call_interrupts
433320 -10.7% 387090 interrupts.CPU82.CAL:Function_call_interrupts
1174 ± 9% -50.1% 586.50 ± 29% interrupts.CPU82.RES:Rescheduling_interrupts
435824 -11.9% 384143 interrupts.CPU83.CAL:Function_call_interrupts
434477 -11.0% 386568 interrupts.CPU84.CAL:Function_call_interrupts
432705 -10.4% 387511 interrupts.CPU85.CAL:Function_call_interrupts
434758 ± 2% -11.5% 384778 interrupts.CPU86.CAL:Function_call_interrupts
454777 ± 2% -7.9% 418646 interrupts.CPU86.TLB:TLB_shootdowns
432224 -11.3% 383338 interrupts.CPU87.CAL:Function_call_interrupts
432930 -11.2% 384537 interrupts.CPU9.CAL:Function_call_interrupts
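The perf-stat.* deltas in this section (node-load-misses, context-switches, cpu-migrations, ...) are whole-system counter rates. A comparable snapshot around a hand-run plzip is sketched below, assuming plzip's -n option selects the thread count; the input file is a placeholder and the LKP job defines the real one:

        perf stat -a -e context-switches,cpu-migrations,node-loads,node-load-misses \
                -- plzip -n 88 -c testfile > /dev/null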
***************************************************************************************************
lkp-bdw-ex1: 192 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ex1/all_utime/reaim/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
       fail:runs   %reproduction   fail:runs
           |             |             |
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
         %stddev      %change          %stddev
             \           |                 \
0.08 -10.2% 0.07 reaim.child_systime
753802 -11.9% 663774 reaim.jobs_per_min
3926 -11.9% 3457 reaim.jobs_per_min_child
97.65 -3.8% 93.97 reaim.jti
806247 -6.3% 755738 reaim.max_jobs_per_min
1.56 +13.6% 1.78 reaim.parent_time
1.88 ± 3% +194.4% 5.53 ± 4% reaim.std_dev_percent
0.03 ± 3% +217.2% 0.09 ± 4% reaim.std_dev_time
104138 +86.0% 193748 reaim.time.involuntary_context_switches
1074884 +2.8% 1105377 reaim.time.minor_page_faults
7721 -6.3% 7236 reaim.time.percent_of_cpu_this_job_got
23369 -6.5% 21847 reaim.time.user_time
66048 -1.7% 64924 reaim.time.voluntary_context_switches
1612800 -5.7% 1521600 reaim.workload
1.448e+08 ± 4% +75.9% 2.548e+08 ± 27% cpuidle.POLL.time
449918 ± 4% -25.3% 336278 ± 15% numa-numastat.node1.local_node
462355 ± 2% -23.2% 354953 ± 14% numa-numastat.node1.numa_hit
59.00 +5.1% 62.00 vmstat.cpu.id
40.00 -7.5% 37.00 vmstat.cpu.us
2033 +18.8% 2415 vmstat.system.cs
1092 -5.8% 1029 turbostat.Avg_MHz
12.57 +17.7% 14.79 turbostat.CPU%c1
2.17 ± 5% -26.9% 1.59 ± 13% turbostat.Pkg%pc6
20.95 +7.4% 22.50 ± 4% turbostat.RAMWatt
22388 ± 34% -62.8% 8322 ± 85% numa-vmstat.node1.nr_active_anon
6845 ± 15% -38.8% 4189 ± 10% numa-vmstat.node1.nr_slab_reclaimable
17978 ± 3% -26.0% 13303 ± 10% numa-vmstat.node1.nr_slab_unreclaimable
22388 ± 34% -62.8% 8322 ± 85% numa-vmstat.node1.nr_zone_active_anon
2740 ± 62% -95.1% 135.50 ± 77% numa-vmstat.node3.nr_inactive_anon
2921 ± 59% -91.1% 259.00 ± 31% numa-vmstat.node3.nr_shmem
2740 ± 62% -95.1% 135.50 ± 77% numa-vmstat.node3.nr_zone_inactive_anon
101194 ± 30% -53.7% 46870 ± 62% numa-meminfo.node1.Active
89611 ± 34% -62.9% 33266 ± 85% numa-meminfo.node1.Active(anon)
566915 ± 3% -18.3% 463403 ± 8% numa-meminfo.node1.MemUsed
27379 ± 15% -38.8% 16759 ± 10% numa-meminfo.node1.SReclaimable
71912 ± 3% -26.0% 53214 ± 10% numa-meminfo.node1.SUnreclaim
99292 ± 5% -29.5% 69973 ± 8% numa-meminfo.node1.Slab
10962 ± 62% -95.0% 543.25 ± 77% numa-meminfo.node3.Inactive(anon)
11687 ± 59% -91.1% 1037 ± 31% numa-meminfo.node3.Shmem
77999 -3.5% 75239 proc-vmstat.nr_slab_unreclaimable
38308 ± 13% +242.0% 131006 ± 2% proc-vmstat.numa_hint_faults
780.75 ±100% +571.6% 5243 ± 75% proc-vmstat.numa_hint_faults_local
1910995 +2.1% 1950408 proc-vmstat.numa_hit
1854894 +2.1% 1894321 proc-vmstat.numa_local
40180 ± 21% +204.9% 122514 ± 4% proc-vmstat.numa_pages_migrated
145766 ± 44% +152.1% 367445 proc-vmstat.numa_pte_updates
2082332 +1.5% 2113859 proc-vmstat.pgalloc_normal
2106944 +1.5% 2139589 proc-vmstat.pgfault
2018610 +1.6% 2050051 proc-vmstat.pgfree
40180 ± 21% +204.9% 122514 ± 4% proc-vmstat.pgmigrate_success
9772 ± 7% +48.0% 14465 ± 12% softirqs.CPU0.SCHED
62268 +13.2% 70466 softirqs.CPU0.TIMER
14263 ± 6% +19.6% 17059 ± 7% softirqs.CPU148.RCU
53614 +10.2% 59094 ± 6% softirqs.CPU148.TIMER
15228 ± 3% +19.8% 18246 ± 14% softirqs.CPU37.RCU
13686 ± 9% +19.4% 16346 ± 7% softirqs.CPU53.RCU
14747 ± 9% +15.4% 17023 ± 7% softirqs.CPU54.RCU
14975 ± 4% +20.3% 18017 ± 18% softirqs.CPU55.RCU
14654 ± 4% +12.7% 16510 ± 9% softirqs.CPU59.RCU
14608 ± 4% +9.0% 15927 ± 8% softirqs.CPU60.RCU
14141 +13.6% 16067 ± 7% softirqs.CPU61.RCU
13788 ± 3% +28.1% 17667 ± 19% softirqs.CPU63.RCU
14460 ± 7% -16.3% 12105 ± 9% softirqs.CPU96.RCU
3871 ± 3% -23.7% 2953 ± 7% slabinfo.biovec-64.active_objs
3871 ± 3% -23.7% 2953 ± 7% slabinfo.biovec-64.num_objs
4456 ± 4% -66.5% 1490 ± 3% slabinfo.buffer_head.active_objs
4456 ± 4% -66.5% 1490 ± 3% slabinfo.buffer_head.num_objs
9327 ± 7% -49.8% 4681 ± 20% slabinfo.eventpoll_epi.active_objs
9327 ± 7% -49.8% 4681 ± 20% slabinfo.eventpoll_epi.num_objs
8161 ± 7% -49.8% 4096 ± 20% slabinfo.eventpoll_pwq.active_objs
8161 ± 7% -49.8% 4096 ± 20% slabinfo.eventpoll_pwq.num_objs
870.00 ± 10% -22.0% 678.25 ± 5% slabinfo.file_lock_cache.active_objs
870.00 ± 10% -22.0% 678.25 ± 5% slabinfo.file_lock_cache.num_objs
11826 -12.3% 10377 ± 4% slabinfo.shmem_inode_cache.active_objs
11826 -12.3% 10377 ± 4% slabinfo.shmem_inode_cache.num_objs
7919 ± 2% -16.7% 6596 ± 2% slabinfo.sighand_cache.active_objs
8073 ± 2% -18.0% 6616 ± 2% slabinfo.sighand_cache.num_objs
13382 -20.2% 10683 slabinfo.signal_cache.active_objs
13441 -20.4% 10695 slabinfo.signal_cache.num_objs
9837 +26.3% 12423 slabinfo.sigqueue.active_objs
9837 +26.3% 12423 slabinfo.sigqueue.num_objs
6269 ± 3% -20.0% 5015 ± 6% slabinfo.sock_inode_cache.active_objs
6269 ± 3% -20.0% 5015 ± 6% slabinfo.sock_inode_cache.num_objs
16.88 ± 23% -8.9 7.94 ± 20% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
16.79 ± 23% -8.9 7.88 ± 20% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
11.45 ± 34% -7.5 3.90 ± 28% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
11.06 ± 35% -7.4 3.63 ± 29% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
9.53 ± 42% -6.6 2.90 ± 22% perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
9.49 ± 42% -6.6 2.88 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
8.59 ± 41% -6.2 2.44 ± 29% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
6.58 ± 4% -1.5 5.11 ± 11% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
6.38 ± 4% -1.4 4.93 ± 11% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
5.84 ± 4% -1.4 4.48 ± 12% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.18 ± 6% -1.0 3.16 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.96 ± 3% -0.8 2.14 ± 12% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.70 ± 4% -0.8 1.91 ± 13% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
3.17 ± 3% -0.8 2.40 ± 14% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
2.67 ± 4% -0.8 1.90 ± 13% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.98 ± 7% -0.8 1.20 ± 13% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
2.38 ± 5% -0.7 1.65 ± 13% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.97 ± 4% -0.5 1.49 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.59 ± 7% -0.5 1.13 ± 17% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
1.39 ± 5% -0.4 0.97 ± 17% perf-profile.calltrace.cycles-pp.__tick_nohz_idle_enter.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.68 ± 8% -0.4 0.29 ±100% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.19 ± 6% -0.4 0.81 ± 18% perf-profile.calltrace.cycles-pp.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.82 ± 6% -0.2 0.62 ± 11% perf-profile.calltrace.cycles-pp.cpuidle_enter_state
86.66 +3.0 89.68 perf-profile.calltrace.cycles-pp.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
84.14 +3.5 87.62 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
9.63 ± 15% +5.5 15.11 ± 20% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
56.81 ± 6% +7.1 63.92 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.00 ± 22% +4.8e+24% 71.37 ±105% sched_debug.cfs_rq:/.MIN_vruntime.stddev
612.74 ± 12% +597.4% 4272 ± 9% sched_debug.cfs_rq:/.exec_clock.stddev
433198 ± 28% -50.6% 214019 ± 58% sched_debug.cfs_rq:/.load.max
0.00 ± 22% +4.8e+24% 71.37 ±105% sched_debug.cfs_rq:/.max_vruntime.stddev
72913 ± 11% +344.0% 323724 ± 12% sched_debug.cfs_rq:/.min_vruntime.stddev
370.32 ± 23% -45.8% 200.85 ± 64% sched_debug.cfs_rq:/.runnable_load_avg.max
432768 ± 28% -52.6% 204920 ± 65% sched_debug.cfs_rq:/.runnable_weight.max
70393 ±132% -862.4% -536655 sched_debug.cfs_rq:/.spread0.avg
333239 ± 42% -70.2% 99452 ± 81% sched_debug.cfs_rq:/.spread0.max
-139349 +878.6% -1363651 sched_debug.cfs_rq:/.spread0.min
72941 ± 11% +344.6% 324292 ± 12% sched_debug.cfs_rq:/.spread0.stddev
276.49 ± 35% +61.9% 447.55 ± 28% sched_debug.cfs_rq:/.util_avg.avg
109.18 ± 13% +77.2% 193.51 ± 19% sched_debug.cfs_rq:/.util_avg.stddev
24.69 ± 30% -43.1% 14.05 ± 18% sched_debug.cpu.clock.stddev
24.69 ± 30% -43.1% 14.05 ± 18% sched_debug.cpu.clock_task.stddev
304.28 ± 18% -42.8% 173.94 ± 59% sched_debug.cpu.cpu_load[3].max
22.83 ± 18% -40.2% 13.66 ± 54% sched_debug.cpu.cpu_load[3].stddev
2.88 ± 9% +14.8% 3.30 ± 7% sched_debug.cpu.cpu_load[4].avg
231.47 ± 14% -35.1% 150.19 ± 45% sched_debug.cpu.cpu_load[4].max
17.23 ± 14% -31.6% 11.79 ± 43% sched_debug.cpu.cpu_load[4].stddev
433198 ± 28% -50.6% 214019 ± 58% sched_debug.cpu.load.max
37141 ± 29% -47.0% 19685 ± 56% sched_debug.cpu.load.stddev
0.00 ± 29% -26.6% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
1243 ± 7% +228.3% 4082 ± 12% sched_debug.cpu.nr_load_updates.stddev
0.15 ± 21% +37.7% 0.20 ± 31% sched_debug.cpu.nr_running.stddev
926.31 ± 7% -15.7% 780.48 ± 4% sched_debug.cpu.nr_switches.min
1739 ± 8% +24.6% 2167 ± 8% sched_debug.cpu.nr_switches.stddev
-6.12 +51.0% -9.25 sched_debug.cpu.nr_uninterruptible.min
3408 ± 12% +120.8% 7525 ± 20% sched_debug.cpu.sched_goidle.max
187.60 ± 10% -23.7% 143.10 ± 10% sched_debug.cpu.sched_goidle.min
501.83 ± 2% +49.4% 749.50 ± 9% sched_debug.cpu.sched_goidle.stddev
4105 ± 11% +145.9% 10094 ± 12% sched_debug.cpu.ttwu_count.max
591.57 ± 5% +66.5% 985.13 ± 11% sched_debug.cpu.ttwu_count.stddev
177.60 ± 11% +60.5% 285.13 ± 7% sched_debug.cpu.ttwu_local.stddev
0.02 ± 27% +33.0% 0.03 ± 12% sched_debug.rt_rq:/.rt_time.max
0.00 ± 20% +51.7% 0.00 ± 17% sched_debug.rt_rq:/.rt_time.stddev
6.249e+09 -4.9% 5.943e+09 perf-stat.i.branch-instructions
733533 ± 4% -12.9% 638917 ± 3% perf-stat.i.cache-misses
2001 +19.5% 2392 perf-stat.i.context-switches
2.108e+11 -5.9% 1.983e+11 perf-stat.i.cpu-cycles
124.23 +51.2% 187.86 perf-stat.i.cpu-migrations
404084 ± 2% -17.4% 333693 ± 3% perf-stat.i.cycles-between-cache-misses
4.2e+10 -5.1% 3.984e+10 perf-stat.i.dTLB-loads
0.06 ± 3% -0.0 0.06 ± 2% perf-stat.i.dTLB-store-miss-rate%
787597 -2.2% 770036 perf-stat.i.dTLB-store-misses
2.75e+10 -5.2% 2.606e+10 perf-stat.i.dTLB-stores
83.86 -4.9 78.94 perf-stat.i.iTLB-load-miss-rate%
989375 +1.9% 1008403 perf-stat.i.iTLB-load-misses
153254 ± 6% +31.7% 201890 ± 2% perf-stat.i.iTLB-loads
1.251e+11 -5.3% 1.184e+11 perf-stat.i.instructions
406474 ± 2% -13.6% 351010 perf-stat.i.instructions-per-iTLB-miss
6821 +1.9% 6953 perf-stat.i.minor-faults
279978 -9.6% 253077 ± 2% perf-stat.i.node-load-misses
6824 +1.9% 6954 perf-stat.i.page-faults
0.63 ± 2% +7.1% 0.68 perf-stat.overall.MPKI
0.26 +0.0 0.28 ± 3% perf-stat.overall.branch-miss-rate%
0.94 ± 6% -0.1 0.80 ± 3% perf-stat.overall.cache-miss-rate%
284356 ± 4% +8.2% 307605 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 +0.0 0.01 perf-stat.overall.dTLB-load-miss-rate%
0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate%
86.63 -3.2 83.38 perf-stat.overall.iTLB-load-miss-rate%
124752 -7.0% 115987 perf-stat.overall.instructions-per-iTLB-miss
6.168e+09 -4.8% 5.869e+09 perf-stat.ps.branch-instructions
734409 ± 4% -13.1% 638302 ± 2% perf-stat.ps.cache-misses
1996 +19.2% 2380 perf-stat.ps.context-switches
2.084e+11 -5.9% 1.962e+11 perf-stat.ps.cpu-cycles
123.49 +50.6% 185.98 perf-stat.ps.cpu-migrations
4.154e+10 -5.1% 3.942e+10 perf-stat.ps.dTLB-loads
789367 -2.3% 771264 perf-stat.ps.dTLB-store-misses
2.72e+10 -5.2% 2.579e+10 perf-stat.ps.dTLB-stores
990459 +1.9% 1008890 perf-stat.ps.iTLB-load-misses
152951 ± 6% +31.5% 201083 ± 2% perf-stat.ps.iTLB-loads
1.236e+11 -5.3% 1.17e+11 perf-stat.ps.instructions
6818 +1.9% 6949 perf-stat.ps.minor-faults
280027 -9.8% 252575 ± 2% perf-stat.ps.node-load-misses
6819 +1.9% 6949 perf-stat.ps.page-faults
3.747e+13 -5.6% 3.536e+13 perf-stat.total.instructions
156.25 ± 4% +253.4% 552.25 ± 78% interrupts.132:PCI-MSI.1574914-edge.eth3-TxRx-2
2522 ± 24% +31.5% 3317 ± 3% interrupts.CPU0.NMI:Non-maskable_interrupts
2522 ± 24% +31.5% 3317 ± 3% interrupts.CPU0.PMI:Performance_monitoring_interrupts
9419 ± 4% +114.1% 20162 ± 15% interrupts.CPU0.RES:Rescheduling_interrupts
2454 ± 23% +27.4% 3126 ± 5% interrupts.CPU10.NMI:Non-maskable_interrupts
2454 ± 23% +27.4% 3126 ± 5% interrupts.CPU10.PMI:Performance_monitoring_interrupts
2438 ± 23% +44.2% 3516 ± 22% interrupts.CPU11.NMI:Non-maskable_interrupts
2438 ± 23% +44.2% 3516 ± 22% interrupts.CPU11.PMI:Performance_monitoring_interrupts
2807 ± 2% +11.9% 3141 ± 4% interrupts.CPU119.NMI:Non-maskable_interrupts
2807 ± 2% +11.9% 3141 ± 4% interrupts.CPU119.PMI:Performance_monitoring_interrupts
2800 ± 2% +12.0% 3135 ± 5% interrupts.CPU12.NMI:Non-maskable_interrupts
2800 ± 2% +12.0% 3135 ± 5% interrupts.CPU12.PMI:Performance_monitoring_interrupts
2800 +13.9% 3190 ± 6% interrupts.CPU120.NMI:Non-maskable_interrupts
2800 +13.9% 3190 ± 6% interrupts.CPU120.PMI:Performance_monitoring_interrupts
62.25 ± 47% +101.6% 125.50 ± 39% interrupts.CPU121.RES:Rescheduling_interrupts
2800 ± 2% +12.2% 3143 ± 5% interrupts.CPU13.NMI:Non-maskable_interrupts
2800 ± 2% +12.2% 3143 ± 5% interrupts.CPU13.PMI:Performance_monitoring_interrupts
2770 +14.5% 3171 ± 6% interrupts.CPU131.NMI:Non-maskable_interrupts
2770 +14.5% 3171 ± 6% interrupts.CPU131.PMI:Performance_monitoring_interrupts
63.00 ± 46% +174.2% 172.75 ± 33% interrupts.CPU133.RES:Rescheduling_interrupts
73.75 ± 27% +1000.7% 811.75 ±142% interrupts.CPU134.RES:Rescheduling_interrupts
118.25 ± 22% +35.7% 160.50 ± 16% interrupts.CPU137.RES:Rescheduling_interrupts
108.50 ± 35% +241.0% 370.00 ± 44% interrupts.CPU141.RES:Rescheduling_interrupts
2791 ± 2% +22.5% 3420 ± 5% interrupts.CPU145.NMI:Non-maskable_interrupts
2791 ± 2% +22.5% 3420 ± 5% interrupts.CPU145.PMI:Performance_monitoring_interrupts
2776 ± 2% +14.8% 3188 ± 4% interrupts.CPU15.NMI:Non-maskable_interrupts
2776 ± 2% +14.8% 3188 ± 4% interrupts.CPU15.PMI:Performance_monitoring_interrupts
2813 ± 2% +15.5% 3248 ± 4% interrupts.CPU16.NMI:Non-maskable_interrupts
2813 ± 2% +15.5% 3248 ± 4% interrupts.CPU16.PMI:Performance_monitoring_interrupts
124.50 ± 40% +134.5% 292.00 ± 38% interrupts.CPU16.RES:Rescheduling_interrupts
2801 ± 2% +11.4% 3119 ± 3% interrupts.CPU17.NMI:Non-maskable_interrupts
2801 ± 2% +11.4% 3119 ± 3% interrupts.CPU17.PMI:Performance_monitoring_interrupts
2802 ± 2% +17.4% 3288 ± 9% interrupts.CPU183.NMI:Non-maskable_interrupts
2802 ± 2% +17.4% 3288 ± 9% interrupts.CPU183.PMI:Performance_monitoring_interrupts
44.00 ± 31% +383.5% 212.75 ± 77% interrupts.CPU183.RES:Rescheduling_interrupts
2809 +17.5% 3301 ± 10% interrupts.CPU189.NMI:Non-maskable_interrupts
2809 +17.5% 3301 ± 10% interrupts.CPU189.PMI:Performance_monitoring_interrupts
2848 ± 3% +11.6% 3179 interrupts.CPU19.NMI:Non-maskable_interrupts
2848 ± 3% +11.6% 3179 interrupts.CPU19.PMI:Performance_monitoring_interrupts
82.00 ± 45% +145.7% 201.50 ± 53% interrupts.CPU190.RES:Rescheduling_interrupts
2797 ± 2% +25.5% 3509 ± 8% interrupts.CPU191.NMI:Non-maskable_interrupts
2797 ± 2% +25.5% 3509 ± 8% interrupts.CPU191.PMI:Performance_monitoring_interrupts
156.25 ± 4% +253.4% 552.25 ± 78% interrupts.CPU2.132:PCI-MSI.1574914-edge.eth3-TxRx-2
668.50 ± 16% +290.0% 2607 ± 31% interrupts.CPU2.RES:Rescheduling_interrupts
2852 +14.2% 3256 ± 7% interrupts.CPU23.NMI:Non-maskable_interrupts
2852 +14.2% 3256 ± 7% interrupts.CPU23.PMI:Performance_monitoring_interrupts
190.50 ± 18% +169.8% 514.00 ± 91% interrupts.CPU26.RES:Rescheduling_interrupts
570.25 ± 19% +109.6% 1195 ± 15% interrupts.CPU3.RES:Rescheduling_interrupts
87.25 ± 29% +769.6% 758.75 ±119% interrupts.CPU35.RES:Rescheduling_interrupts
96.00 ± 27% +182.0% 270.75 ± 33% interrupts.CPU38.RES:Rescheduling_interrupts
1619 ± 96% -90.1% 160.00 ±101% interrupts.CPU49.RES:Rescheduling_interrupts
157.75 ± 25% -38.5% 97.00 ± 29% interrupts.CPU59.RES:Rescheduling_interrupts
140.50 ± 33% -46.6% 75.00 ± 67% interrupts.CPU62.RES:Rescheduling_interrupts
152.25 ± 39% +160.8% 397.00 ± 34% interrupts.CPU72.RES:Rescheduling_interrupts
176.25 ± 29% -50.1% 88.00 ± 33% interrupts.CPU86.RES:Rescheduling_interrupts
60.50 ± 47% +114.0% 129.50 ± 35% interrupts.CPU87.RES:Rescheduling_interrupts
2799 ± 2% +23.8% 3466 ± 7% interrupts.CPU89.NMI:Non-maskable_interrupts
2799 ± 2% +23.8% 3466 ± 7% interrupts.CPU89.PMI:Performance_monitoring_interrupts
2813 ± 3% +17.3% 3300 ± 8% interrupts.CPU95.NMI:Non-maskable_interrupts
2813 ± 3% +17.3% 3300 ± 8% interrupts.CPU95.PMI:Performance_monitoring_interrupts
2834 +14.9% 3256 ± 3% interrupts.CPU96.NMI:Non-maskable_interrupts
2834 +14.9% 3256 ± 3% interrupts.CPU96.PMI:Performance_monitoring_interrupts
51108 +36.3% 69684 ± 2% interrupts.RES:Rescheduling_interrupts
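Most of the swing in this section shows up in sched_debug.* (ttwu_count, nr_load_updates, min_vruntime spread) and in RES rescheduling IPIs. With CONFIG_SCHED_DEBUG enabled, the per-CPU scheduler state that these statistics summarize can be inspected directly on a kernel of this vintage:

        # per-rq and per-cfs_rq fields: min_vruntime, cpu_load[], nr_switches, ttwu_count, ...
        cat /proc/sched_debug
        # rescheduling-IPI counts per CPU (the RES: rows above)
        grep RES /proc/interrupts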
***************************************************************************************************
lkp-bdw-ep3: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.2/process/1600%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep3/hackbench/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
       fail:runs   %reproduction   fail:runs
           |             |             |
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
         %stddev      %change          %stddev
             \           |                 \
494809 -7.3% 458896 hackbench.throughput
612.95 +1.8% 624.20 hackbench.time.elapsed_time
612.95 +1.8% 624.20 hackbench.time.elapsed_time.max
4.886e+08 +88.4% 9.206e+08 ± 2% hackbench.time.involuntary_context_switches
48035824 -4.2% 46000152 hackbench.time.minor_page_faults
7176 +1.1% 7255 hackbench.time.percent_of_cpu_this_job_got
36530 +3.8% 37917 hackbench.time.system_time
7459 -1.2% 7372 hackbench.time.user_time
8.862e+08 +62.7% 1.442e+09 hackbench.time.voluntary_context_switches
2.534e+09 -4.2% 2.429e+09 hackbench.workload
2794 ± 8% -9.4% 2530 ± 2% boot-time.idle
62869 ± 8% -27.5% 45585 ± 26% numa-meminfo.node1.SReclaimable
15712 ± 8% -27.4% 11404 ± 26% numa-vmstat.node1.nr_slab_reclaimable
66.75 +2.6% 68.50 vmstat.cpu.sy
2236124 +69.2% 3782640 vmstat.system.cs
151493 +64.8% 249683 vmstat.system.in
25739582 +28.3% 33017943 ± 17% cpuidle.C1.time
2600683 +89.6% 4931044 ± 21% cpuidle.C1.usage
1.177e+08 -23.5% 90006774 ± 3% cpuidle.C3.time
400081 -26.3% 294861 cpuidle.C3.usage
28889782 ± 37% +210.6% 89736479 ± 39% cpuidle.POLL.time
32231 ± 2% +139.5% 77205 ± 21% cpuidle.POLL.usage
2274 +1.7% 2314 turbostat.Avg_MHz
2598175 +89.7% 4928921 ± 21% turbostat.C1
399873 -26.3% 294659 turbostat.C3
0.22 -0.1 0.16 ± 2% turbostat.C3%
5.81 ± 2% -10.8% 5.18 turbostat.CPU%c1
0.12 -33.3% 0.08 ± 15% turbostat.CPU%c3
93461424 +67.4% 1.564e+08 ± 2% turbostat.IRQ
2.65 ± 6% +52.8% 4.05 ± 8% turbostat.Pkg%pc2
1541845 +14.3% 1762636 meminfo.Active
1480497 +14.9% 1701349 meminfo.Active(anon)
1440429 ± 2% +11.9% 1612079 meminfo.AnonPages
37373539 ± 2% +16.5% 43542569 meminfo.Committed_AS
18778 +11.2% 20883 ± 3% meminfo.Inactive(anon)
510694 ± 2% +14.5% 584945 meminfo.KernelStack
6655144 +12.7% 7497435 meminfo.Memused
1212773 +16.3% 1410085 meminfo.PageTables
1202917 +12.2% 1349397 meminfo.SUnreclaim
54584 ± 12% +64.5% 89814 ± 10% meminfo.Shmem
1318488 +11.2% 1465692 meminfo.Slab
55493 ± 2% -28.9% 39468 ± 2% meminfo.max_used_kB
369842 ± 2% +14.7% 424031 proc-vmstat.nr_active_anon
359829 ± 2% +11.7% 401857 proc-vmstat.nr_anon_pages
1491001 -1.4% 1470310 proc-vmstat.nr_dirty_background_threshold
2985649 -1.4% 2944217 proc-vmstat.nr_dirty_threshold
234629 +3.7% 243333 proc-vmstat.nr_file_pages
14800700 -1.4% 14593509 proc-vmstat.nr_free_pages
4698 +10.9% 5209 ± 3% proc-vmstat.nr_inactive_anon
510506 ± 2% +14.3% 583465 proc-vmstat.nr_kernel_stack
6228 +2.9% 6408 proc-vmstat.nr_mapped
303038 ± 2% +15.9% 351178 proc-vmstat.nr_page_table_pages
13683 ± 13% +63.7% 22404 ± 11% proc-vmstat.nr_shmem
301079 ± 2% +12.1% 337558 proc-vmstat.nr_slab_unreclaimable
369842 ± 2% +14.7% 424031 proc-vmstat.nr_zone_active_anon
4698 +10.9% 5209 ± 3% proc-vmstat.nr_zone_inactive_anon
403.75 ±123% +1057.7% 4674 ±122% proc-vmstat.numa_hint_faults
47.25 ± 81% +4297.4% 2077 ±112% proc-vmstat.numa_hint_faults_local
6.147e+08 -7.3% 5.701e+08 proc-vmstat.numa_hit
6.147e+08 -7.3% 5.701e+08 proc-vmstat.numa_local
11583 ± 16% +71.6% 19875 ± 14% proc-vmstat.pgactivate
6.224e+08 -7.2% 5.774e+08 proc-vmstat.pgalloc_normal
48581147 -3.7% 46773973 proc-vmstat.pgfault
6.221e+08 -7.2% 5.773e+08 proc-vmstat.pgfree
400811 ± 7% -48.8% 205202 ± 51% sched_debug.cfs_rq:/.load.max
63923 ± 21% -56.9% 27565 ± 49% sched_debug.cfs_rq:/.load.stddev
473.56 ± 9% -34.1% 311.94 ± 18% sched_debug.cfs_rq:/.load_avg.max
70.01 ± 13% -36.1% 44.72 ± 20% sched_debug.cfs_rq:/.load_avg.stddev
24050559 +66.1% 39942558 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
27186152 +150.9% 68197622 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
22417469 -26.5% 16476834 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
934692 ± 9% +2357.1% 22966409 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.42 ± 4% -28.3% 0.30 ± 9% sched_debug.cfs_rq:/.nr_running.stddev
10.48 ± 24% -37.7% 6.53 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.avg
343.08 ± 13% -66.5% 114.81 ± 78% sched_debug.cfs_rq:/.runnable_load_avg.max
43.75 ± 18% -64.0% 15.76 ± 57% sched_debug.cfs_rq:/.runnable_load_avg.stddev
399576 ± 7% -49.9% 200210 ± 54% sched_debug.cfs_rq:/.runnable_weight.max
63786 ± 21% -57.9% 26851 ± 52% sched_debug.cfs_rq:/.runnable_weight.stddev
3479430 ± 23% +1081.6% 41111920 ± 35% sched_debug.cfs_rq:/.spread0.max
934513 ± 9% +2358.3% 22973201 ± 6% sched_debug.cfs_rq:/.spread0.stddev
329009 ± 9% +75.8% 578522 ± 9% sched_debug.cpu.avg_idle.avg
100.19 ± 17% +339.0% 439.87 ± 71% sched_debug.cpu.clock.stddev
100.19 ± 17% +339.0% 439.87 ± 71% sched_debug.cpu.clock_task.stddev
255.50 ± 31% -43.7% 143.92 ± 61% sched_debug.cpu.cpu_load[0].max
31.10 ± 29% -40.5% 18.51 ± 48% sched_debug.cpu.cpu_load[0].stddev
239.47 ± 25% -49.9% 119.92 ± 31% sched_debug.cpu.cpu_load[2].max
29.44 ± 24% -46.2% 15.84 ± 24% sched_debug.cpu.cpu_load[2].stddev
225.33 ± 23% -50.8% 110.81 ± 30% sched_debug.cpu.cpu_load[3].max
27.43 ± 23% -45.6% 14.91 ± 22% sched_debug.cpu.cpu_load[3].stddev
210.61 ± 21% -45.9% 114.03 ± 30% sched_debug.cpu.cpu_load[4].max
25.26 ± 21% -40.7% 14.98 ± 20% sched_debug.cpu.cpu_load[4].stddev
22185 ± 8% -20.7% 17584 ± 12% sched_debug.cpu.curr->pid.stddev
401289 ± 7% -38.3% 247619 ± 32% sched_debug.cpu.load.max
63993 ± 21% -51.8% 30820 ± 28% sched_debug.cpu.load.stddev
0.00 ± 25% +314.4% 0.00 ± 65% sched_debug.cpu.next_balance.stddev
2161 ± 6% +73.4% 3748 ± 3% sched_debug.cpu.nr_load_updates.stddev
1.80 ± 44% +1219.7% 23.78 ± 53% sched_debug.cpu.nr_running.avg
18.56 ± 32% +447.0% 101.50 ± 63% sched_debug.cpu.nr_running.max
3.54 ± 40% +673.8% 27.43 ± 58% sched_debug.cpu.nr_running.stddev
7777523 +69.6% 13189331 sched_debug.cpu.nr_switches.avg
8892283 +187.0% 25524847 sched_debug.cpu.nr_switches.max
6863956 ± 2% -34.3% 4508011 ± 4% sched_debug.cpu.nr_switches.min
399499 ± 6% +1992.9% 8360950 ± 2% sched_debug.cpu.nr_switches.stddev
0.25 ± 52% +8572.4% 21.46 ± 62% sched_debug.cpu.nr_uninterruptible.avg
889.83 ± 4% +243.9% 3060 ± 4% sched_debug.cpu.nr_uninterruptible.max
-802.22 +323.8% -3399 sched_debug.cpu.nr_uninterruptible.min
348.15 ± 3% +506.4% 2111 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
584593 ± 2% +14.5% 669643 slabinfo.anon_vma.active_objs
13084 ± 2% +14.5% 14982 slabinfo.anon_vma.active_slabs
601913 ± 2% +14.5% 689224 slabinfo.anon_vma.num_objs
13084 ± 2% +14.5% 14982 slabinfo.anon_vma.num_slabs
63008 +10.7% 69740 slabinfo.cred_jar.active_objs
1505 +10.9% 1669 slabinfo.cred_jar.active_slabs
63231 +10.9% 70129 slabinfo.cred_jar.num_objs
1505 +10.9% 1669 slabinfo.cred_jar.num_slabs
2494 ± 7% -20.4% 1985 ± 17% slabinfo.eventpoll_epi.active_objs
2494 ± 7% -20.4% 1985 ± 17% slabinfo.eventpoll_epi.num_objs
4365 ± 7% -20.4% 3474 ± 17% slabinfo.eventpoll_pwq.active_objs
4365 ± 7% -20.4% 3474 ± 17% slabinfo.eventpoll_pwq.num_objs
777.00 ± 8% -22.1% 605.00 ± 11% slabinfo.file_lock_cache.active_objs
777.00 ± 8% -22.1% 605.00 ± 11% slabinfo.file_lock_cache.num_objs
4667 ± 5% -9.5% 4226 ± 2% slabinfo.kmalloc-128.active_objs
4667 ± 5% -9.5% 4226 ± 2% slabinfo.kmalloc-128.num_objs
63873 ± 2% +10.2% 70416 ± 3% slabinfo.kmalloc-96.active_objs
36770 ± 2% +13.3% 41645 slabinfo.mm_struct.active_objs
2448 ± 2% +13.3% 2774 slabinfo.mm_struct.active_slabs
39182 ± 2% +13.3% 44397 slabinfo.mm_struct.num_objs
2448 ± 2% +13.3% 2774 slabinfo.mm_struct.num_slabs
1239866 ± 2% +15.4% 1430445 slabinfo.pid.active_objs
23099 ± 2% +16.9% 27003 slabinfo.pid.active_slabs
1478380 ± 2% +16.9% 1728215 slabinfo.pid.num_objs
23099 ± 2% +16.9% 27003 slabinfo.pid.num_slabs
347835 -12.6% 304030 ± 2% slabinfo.selinux_file_security.active_objs
1364 -12.7% 1191 ± 3% slabinfo.selinux_file_security.active_slabs
349266 -12.6% 305089 ± 3% slabinfo.selinux_file_security.num_objs
1364 -12.7% 1191 ± 3% slabinfo.selinux_file_security.num_slabs
41509 +16.7% 48454 slabinfo.sighand_cache.active_objs
2772 +17.0% 3243 slabinfo.sighand_cache.active_slabs
41590 +17.0% 48658 slabinfo.sighand_cache.num_objs
2772 +17.0% 3243 slabinfo.sighand_cache.num_slabs
44791 ± 2% +15.0% 51511 slabinfo.signal_cache.active_objs
1498 ± 2% +15.3% 1726 slabinfo.signal_cache.active_slabs
44952 ± 2% +15.3% 51811 slabinfo.signal_cache.num_objs
1498 ± 2% +15.3% 1726 slabinfo.signal_cache.num_slabs
39448 ± 2% +17.6% 46374 slabinfo.task_struct.active_objs
13155 ± 2% +17.7% 15484 slabinfo.task_struct.active_slabs
39467 ± 2% +17.7% 46452 slabinfo.task_struct.num_objs
13155 ± 2% +17.7% 15484 slabinfo.task_struct.num_slabs
886246 ± 2% +16.0% 1027892 slabinfo.vm_area_struct.active_objs
22632 ± 2% +15.6% 26163 slabinfo.vm_area_struct.active_slabs
905325 ± 2% +15.6% 1046529 slabinfo.vm_area_struct.num_objs
22632 ± 2% +15.6% 26163 slabinfo.vm_area_struct.num_slabs
36.79 ± 6% -36.8 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write
29.71 ± 8% -29.7 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read
24.00 ± 6% -24.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
23.48 ± 6% -23.5 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
22.75 ± 6% -22.8 0.00 perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
21.26 ± 6% -21.3 0.00 perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
16.73 ± 8% -16.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
16.22 ± 8% -16.2 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
15.47 ± 8% -15.5 0.00 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
14.40 ± 8% -14.4 0.00 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
3.88 ± 3% -1.7 2.15 ± 6% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.__vfs_write.vfs_write.sys_write
0.65 ± 11% +0.3 0.97 ± 28% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.68 ± 11% +0.4 1.05 ± 7% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.22 ± 7% +0.4 2.65 ± 9% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.__vfs_read
1.10 ± 8% +0.5 1.55 ± 11% perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.__vfs_read.vfs_read.sys_read
0.42 ± 59% +0.6 0.97 ± 25% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
0.28 ±101% +0.6 0.86 ± 25% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
1.18 ± 10% +0.6 1.79 ± 36% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
1.37 ± 11% +0.6 2.00 ± 31% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +0.6 0.64 ± 7% perf-profile.calltrace.cycles-pp.__lock_text_start.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
1.47 ± 11% +0.7 2.17 ± 30% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.30 ±101% +0.7 0.99 ± 29% perf-profile.calltrace.cycles-pp.__switch_to
0.00 +0.7 0.73 ± 20% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.7 0.73 ± 20% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.00 +0.8 0.78 ± 10% perf-profile.calltrace.cycles-pp.__indirect_thunk_start
0.00 +1.0 0.96 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_stage2
1.28 ± 15% +1.0 2.28 ± 26% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.29 ± 15% +1.0 2.29 ± 26% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.32 ± 15% +1.0 2.35 ± 26% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.13 ± 8% perf-profile.calltrace.cycles-pp.__fdget_pos.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.15 ± 11% perf-profile.calltrace.cycles-pp.__fdget_pos.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.71 ± 9% +1.5 5.17 ± 35% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
3.80 ± 9% +1.5 5.28 ± 35% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
4.22 ± 9% +1.7 5.91 ± 35% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.sys_read
2.54 ± 18% +1.8 4.29 ± 34% perf-profile.calltrace.cycles-pp.idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
4.11 ± 9% +2.2 6.30 ± 32% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
14.37 +2.4 16.75 ± 5% perf-profile.calltrace.cycles-pp.pipe_read.__vfs_read.vfs_read.sys_read.do_syscall_64
3.42 ± 17% +2.5 5.88 ± 31% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
15.21 +2.5 17.68 ± 4% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.46 ± 8% +2.5 11.97 ± 28% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write.sys_write
7.24 ± 11% +2.8 10.03 ± 33% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
7.63 ± 10% +2.8 10.43 ± 33% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
7.29 ± 10% +2.8 10.12 ± 33% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write
1.45 ± 29% +8.8 10.29 ± 14% perf-profile.calltrace.cycles-pp._entry_trampoline
1.54 ± 27% +9.0 10.56 ± 14% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
6.32 ± 18% +17.4 23.75 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.63 ± 19% +18.7 25.34 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.10 ± 17% +20.3 30.45 ± 3% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.36 ± 17% +21.7 32.02 ± 2% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.46 ± 17% +42.8 62.30 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.69 ± 17% +44.0 63.67 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
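
The perf-profile.calltrace / perf-profile.children percentages above are per-callchain cycle shares from sampled profiles. A minimal sketch of collecting comparable data with perf(1) — the flags are standard, but the robot's exact event modifiers, sampling rate and duration are assumptions here:

        perf record -a -g -- sleep 10      # system-wide sampling with call graphs; duration illustrative
        perf report --stdio                # per-callchain overhead, analogous to the cycles-pp rows
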
17.10 -21.3% 13.46 ± 2% perf-stat.i.MPKI
4.035e+10 -17.5% 3.33e+10 ± 3% perf-stat.i.branch-instructions
2.14 -0.2 1.95 perf-stat.i.branch-miss-rate%
7.194e+08 -18.6% 5.855e+08 ± 3% perf-stat.i.branch-misses
12.70 -1.4 11.33 perf-stat.i.cache-miss-rate%
1.014e+08 -11.3% 89967376 ± 3% perf-stat.i.cache-misses
3519724 +39.3% 4901677 ± 3% perf-stat.i.context-switches
2.06 ± 2% -6.7% 1.92 perf-stat.i.cpi
3.139e+11 -16.5% 2.622e+11 ± 3% perf-stat.i.cpu-cycles
113205 ± 2% +72.7% 195554 ± 5% perf-stat.i.cpu-migrations
0.47 +0.1 0.53 ± 3% perf-stat.i.dTLB-load-miss-rate%
1.716e+08 +16.6% 2.001e+08 ± 5% perf-stat.i.dTLB-load-misses
6.393e+10 -18.1% 5.234e+10 ± 3% perf-stat.i.dTLB-loads
8147741 +33.5% 10877323 ± 3% perf-stat.i.dTLB-store-misses
4.046e+10 -18.6% 3.294e+10 ± 3% perf-stat.i.dTLB-stores
55.06 +1.1 56.14 perf-stat.i.iTLB-load-miss-rate%
2.161e+08 -8.0% 1.989e+08 ± 3% perf-stat.i.iTLB-load-misses
1.927e+08 -14.7% 1.643e+08 ± 4% perf-stat.i.iTLB-loads
2.102e+11 -17.7% 1.731e+11 ± 3% perf-stat.i.instructions
1069 ± 2% -8.4% 979.54 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.56 +3.8% 0.58 perf-stat.i.ipc
124005 -22.0% 96666 ± 3% perf-stat.i.minor-faults
277317 -17.9% 227661 ± 3% perf-stat.i.msec
63.61 -8.3 55.35 ± 2% perf-stat.i.node-load-miss-rate%
14378837 ± 2% -5.7% 13566343 ± 3% perf-stat.i.node-loads
21.74 ± 2% -4.3 17.40 perf-stat.i.node-store-miss-rate%
5331671 -16.6% 4445492 ± 3% perf-stat.i.node-store-misses
124005 -22.0% 96666 ± 3% perf-stat.i.page-faults
3.76 +23.4% 4.64 perf-stat.overall.MPKI
1.78 -0.0 1.76 perf-stat.overall.branch-miss-rate%
12.84 -1.6 11.22 perf-stat.overall.cache-miss-rate%
1.49 +1.5% 1.52 perf-stat.overall.cpi
0.27 +0.1 0.38 ± 5% perf-stat.overall.dTLB-load-miss-rate%
0.02 +0.0 0.03 perf-stat.overall.dTLB-store-miss-rate%
52.87 +1.9 54.77 perf-stat.overall.iTLB-load-miss-rate%
972.65 -10.6% 869.88 perf-stat.overall.instructions-per-iTLB-miss
0.67 -1.4% 0.66 perf-stat.overall.ipc
55.35 +1.8 57.18 perf-stat.overall.node-load-miss-rate%
23.25 -2.9 20.38 perf-stat.overall.node-store-miss-rate%
32464 +6.1% 34453 perf-stat.overall.path-length
1.579e+13 +2.0% 1.61e+13 perf-stat.total.branch-instructions
3.969e+10 +9.6% 4.352e+10 perf-stat.total.cache-misses
3.09e+11 +25.5% 3.879e+11 perf-stat.total.cache-references
1.378e+09 +72.1% 2.371e+09 perf-stat.total.context-switches
1.229e+14 +3.2% 1.268e+14 perf-stat.total.cpu-cycles
44320452 ± 2% +113.8% 94752867 ± 7% perf-stat.total.cpu-migrations
6.719e+10 +44.1% 9.685e+10 ± 5% perf-stat.total.dTLB-load-misses
2.502e+13 +1.1% 2.531e+13 perf-stat.total.dTLB-loads
3.189e+09 +65.0% 5.262e+09 perf-stat.total.dTLB-store-misses
8.459e+10 +13.7% 9.621e+10 perf-stat.total.iTLB-load-misses
7.542e+10 +5.3% 7.945e+10 perf-stat.total.iTLB-loads
8.228e+13 +1.7% 8.368e+13 perf-stat.total.instructions
48538538 -3.7% 46749483 perf-stat.total.minor-faults
1.086e+08 +1.4% 1.101e+08 perf-stat.total.msec
6.976e+09 +25.7% 8.768e+09 ± 3% perf-stat.total.node-load-misses
5.627e+09 +16.6% 6.562e+09 perf-stat.total.node-loads
2.087e+09 +3.0% 2.15e+09 perf-stat.total.node-store-misses
6.891e+09 +21.9% 8.4e+09 perf-stat.total.node-stores
48538482 -3.7% 46749452 perf-stat.total.page-faults
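
All of the perf-stat counters in this block map onto standard perf(1) events. A minimal system-wide collection sketch for the same counter families (the run duration and any per-interval handling are illustrative assumptions, not the robot's settings):

        perf stat -a \
                -e context-switches,cpu-migrations,page-faults \
                -e branch-instructions,branch-misses \
                -e cache-references,cache-misses \
                -e dTLB-loads,dTLB-load-misses,dTLB-stores,dTLB-store-misses \
                -e iTLB-loads,iTLB-load-misses \
                -e node-loads,node-load-misses,node-stores,node-store-misses \
                -- sleep 10
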
44380 ± 9% +36.4% 60536 ± 3% softirqs.CPU0.RCU
42700 +26.8% 54157 softirqs.CPU0.SCHED
41219 ± 3% +51.8% 62583 ± 3% softirqs.CPU1.RCU
40291 ± 2% +53.7% 61915 ± 2% softirqs.CPU10.RCU
39712 ± 2% +60.0% 63530 ± 2% softirqs.CPU11.RCU
39626 +58.3% 62721 softirqs.CPU12.RCU
39746 ± 3% +56.8% 62327 softirqs.CPU13.RCU
40013 ± 2% +62.5% 65013 ± 4% softirqs.CPU14.RCU
44176 ± 8% +49.0% 65821 ± 3% softirqs.CPU15.RCU
43495 ± 8% +50.4% 65412 ± 2% softirqs.CPU16.RCU
41320 +53.6% 63484 softirqs.CPU17.RCU
41735 +60.7% 67060 ± 7% softirqs.CPU18.RCU
44197 ± 9% +51.0% 66754 ± 2% softirqs.CPU19.RCU
40193 ± 3% +52.0% 61104 ± 2% softirqs.CPU2.RCU
44564 ± 10% +52.5% 67952 ± 6% softirqs.CPU20.RCU
42527 +55.1% 65966 ± 3% softirqs.CPU21.RCU
42153 +58.6% 66857 ± 5% softirqs.CPU22.RCU
42327 ± 3% +68.9% 71495 ± 5% softirqs.CPU23.RCU
42306 ± 2% +59.1% 67318 ± 9% softirqs.CPU24.RCU
41475 ± 2% +66.9% 69214 ± 14% softirqs.CPU25.RCU
41655 ± 2% +60.5% 66866 ± 3% softirqs.CPU26.RCU
41766 ± 2% +51.7% 63373 softirqs.CPU27.RCU
41548 ± 2% +60.0% 66490 ± 5% softirqs.CPU28.RCU
44279 ± 6% +50.4% 66601 ± 9% softirqs.CPU29.RCU
42001 ± 8% +51.4% 63605 ± 5% softirqs.CPU3.RCU
40890 ± 6% +50.8% 61645 ± 2% softirqs.CPU30.RCU
40491 ± 2% +53.8% 62280 ± 3% softirqs.CPU31.RCU
41471 ± 2% +56.5% 64894 ± 4% softirqs.CPU32.RCU
40349 ± 2% +62.0% 65360 ± 5% softirqs.CPU33.RCU
40479 ± 2% +62.5% 65781 ± 3% softirqs.CPU34.RCU
42319 ± 10% +48.6% 62876 ± 3% softirqs.CPU35.RCU
41799 +55.1% 64841 ± 3% softirqs.CPU36.RCU
43731 ± 8% +54.5% 67558 ± 3% softirqs.CPU37.RCU
41031 ± 3% +73.3% 71116 ± 8% softirqs.CPU38.RCU
42978 ± 6% +53.1% 65803 ± 7% softirqs.CPU39.RCU
39754 ± 2% +55.0% 61636 ± 3% softirqs.CPU4.RCU
46367 ± 9% +41.4% 65558 ± 9% softirqs.CPU40.RCU
40343 ± 2% +59.9% 64513 ± 3% softirqs.CPU41.RCU
42227 ± 3% +55.1% 65511 ± 4% softirqs.CPU42.RCU
40727 ± 3% +65.6% 67457 ± 3% softirqs.CPU43.RCU
40253 ± 4% +45.8% 58700 ± 2% softirqs.CPU44.RCU
40121 ± 2% +52.7% 61278 ± 2% softirqs.CPU45.RCU
40516 ± 3% +51.1% 61219 ± 2% softirqs.CPU46.RCU
41106 ± 10% +48.5% 61035 ± 6% softirqs.CPU47.RCU
39554 +58.7% 62780 ± 6% softirqs.CPU48.RCU
38899 +53.9% 59879 ± 2% softirqs.CPU49.RCU
39677 ± 2% +59.7% 63369 ± 6% softirqs.CPU5.RCU
42427 ± 9% +47.9% 62754 softirqs.CPU50.RCU
41751 ± 7% +47.8% 61711 ± 3% softirqs.CPU51.RCU
39992 +65.4% 66128 ± 7% softirqs.CPU52.RCU
39026 +54.8% 60393 ± 3% softirqs.CPU53.RCU
42161 ± 10% +47.6% 62218 ± 7% softirqs.CPU54.RCU
40080 ± 3% +60.2% 64206 ± 6% softirqs.CPU55.RCU
39427 ± 3% +57.5% 62094 softirqs.CPU56.RCU
39673 ± 3% +65.7% 65735 ± 7% softirqs.CPU57.RCU
40865 +53.4% 62669 ± 3% softirqs.CPU58.RCU
40346 +55.4% 62713 ± 3% softirqs.CPU59.RCU
39668 ± 2% +58.5% 62879 ± 2% softirqs.CPU6.RCU
39930 ± 2% +68.1% 67122 ± 5% softirqs.CPU60.RCU
40045 ± 2% +55.1% 62093 ± 2% softirqs.CPU61.RCU
40057 +57.6% 63145 ± 5% softirqs.CPU62.RCU
40579 +57.4% 63889 ± 7% softirqs.CPU63.RCU
40947 ± 2% +51.6% 62075 ± 2% softirqs.CPU64.RCU
42989 ± 8% +45.6% 62592 softirqs.CPU65.RCU
44393 ± 8% +53.9% 68301 ± 6% softirqs.CPU66.RCU
41370 ± 3% +57.9% 65307 ± 5% softirqs.CPU67.RCU
41532 +59.4% 66200 ± 2% softirqs.CPU68.RCU
40604 ± 3% +58.8% 64470 ± 6% softirqs.CPU69.RCU
39822 +53.2% 61021 softirqs.CPU7.RCU
40977 ± 2% +58.1% 64805 ± 6% softirqs.CPU70.RCU
42698 ± 7% +52.4% 65068 ± 6% softirqs.CPU71.RCU
41420 +70.9% 70802 ± 3% softirqs.CPU72.RCU
41458 +52.1% 63040 ± 2% softirqs.CPU73.RCU
41010 ± 2% +64.8% 67576 ± 7% softirqs.CPU74.RCU
38997 ± 9% +58.5% 61793 ± 5% softirqs.CPU75.RCU
37004 ± 3% +55.5% 57533 ± 2% softirqs.CPU76.RCU
37324 ± 2% +61.2% 60153 ± 6% softirqs.CPU77.RCU
40007 ± 8% +45.9% 58373 ± 3% softirqs.CPU78.RCU
37264 ± 4% +52.2% 56698 ± 3% softirqs.CPU79.RCU
39957 ± 2% +54.1% 61588 ± 2% softirqs.CPU8.RCU
38811 ± 2% +51.8% 58931 ± 2% softirqs.CPU80.RCU
40186 ± 6% +53.6% 61730 ± 6% softirqs.CPU81.RCU
37439 ± 2% +77.3% 66389 ± 6% softirqs.CPU82.RCU
37454 ± 3% +62.9% 61016 ± 8% softirqs.CPU83.RCU
39574 ± 7% +49.7% 59255 ± 3% softirqs.CPU84.RCU
36603 ± 2% +77.5% 64978 ± 7% softirqs.CPU85.RCU
40354 ± 8% +46.8% 59245 ± 5% softirqs.CPU86.RCU
37744 ± 3% +65.6% 62495 ± 5% softirqs.CPU87.RCU
39090 +58.6% 62000 softirqs.CPU9.RCU
3594895 +56.1% 5612323 softirqs.RCU
500558 +17.5% 588306 ± 2% softirqs.SCHED
555.75 ± 24% -31.4% 381.50 ± 16% interrupts.34:PCI-MSI.3145731-edge.eth0-TxRx-2
352056 +19.6% 421007 ± 2% interrupts.CAL:Function_call_interrupts
3999 ± 3% +18.8% 4752 ± 6% interrupts.CPU0.CAL:Function_call_interrupts
4038 +19.2% 4812 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
3898 ± 7% +23.6% 4816 ± 3% interrupts.CPU10.CAL:Function_call_interrupts
2771 ± 32% +160.5% 7220 ± 17% interrupts.CPU10.NMI:Non-maskable_interrupts
2771 ± 32% +160.5% 7220 ± 17% interrupts.CPU10.PMI:Performance_monitoring_interrupts
3957 ± 4% +17.7% 4657 ± 7% interrupts.CPU11.CAL:Function_call_interrupts
2930 ± 28% +145.8% 7204 ± 18% interrupts.CPU11.NMI:Non-maskable_interrupts
2930 ± 28% +145.8% 7204 ± 18% interrupts.CPU11.PMI:Performance_monitoring_interrupts
3996 +21.3% 4848 ± 2% interrupts.CPU12.CAL:Function_call_interrupts
2777 ± 32% +159.5% 7207 ± 18% interrupts.CPU12.NMI:Non-maskable_interrupts
2777 ± 32% +159.5% 7207 ± 18% interrupts.CPU12.PMI:Performance_monitoring_interrupts
555.75 ± 24% -31.4% 381.50 ± 16% interrupts.CPU13.34:PCI-MSI.3145731-edge.eth0-TxRx-2
4059 +18.7% 4819 ± 3% interrupts.CPU13.CAL:Function_call_interrupts
2781 ± 32% +159.2% 7207 ± 18% interrupts.CPU13.NMI:Non-maskable_interrupts
2781 ± 32% +159.2% 7207 ± 18% interrupts.CPU13.PMI:Performance_monitoring_interrupts
3994 ± 2% +21.0% 4834 ± 3% interrupts.CPU14.CAL:Function_call_interrupts
2773 ± 32% +168.6% 7448 ± 12% interrupts.CPU14.NMI:Non-maskable_interrupts
2773 ± 32% +168.6% 7448 ± 12% interrupts.CPU14.PMI:Performance_monitoring_interrupts
4047 +19.2% 4823 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
2774 ± 32% +159.6% 7201 ± 18% interrupts.CPU15.NMI:Non-maskable_interrupts
2774 ± 32% +159.6% 7201 ± 18% interrupts.CPU15.PMI:Performance_monitoring_interrupts
4056 +19.4% 4844 ± 3% interrupts.CPU16.CAL:Function_call_interrupts
2781 ± 32% +158.8% 7198 ± 18% interrupts.CPU16.NMI:Non-maskable_interrupts
2781 ± 32% +158.8% 7198 ± 18% interrupts.CPU16.PMI:Performance_monitoring_interrupts
3921 ± 5% +23.2% 4829 ± 2% interrupts.CPU17.CAL:Function_call_interrupts
3303 ± 31% +118.2% 7209 ± 18% interrupts.CPU17.NMI:Non-maskable_interrupts
3303 ± 31% +118.2% 7209 ± 18% interrupts.CPU17.PMI:Performance_monitoring_interrupts
4048 +18.8% 4808 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
3315 ± 31% +117.3% 7204 ± 18% interrupts.CPU18.NMI:Non-maskable_interrupts
3315 ± 31% +117.3% 7204 ± 18% interrupts.CPU18.PMI:Performance_monitoring_interrupts
4056 +18.4% 4803 ± 3% interrupts.CPU19.CAL:Function_call_interrupts
3316 ± 31% +117.2% 7204 ± 18% interrupts.CPU19.NMI:Non-maskable_interrupts
3316 ± 31% +117.2% 7204 ± 18% interrupts.CPU19.PMI:Performance_monitoring_interrupts
4021 ± 2% +20.5% 4844 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
3307 ± 31% +117.9% 7204 ± 18% interrupts.CPU20.NMI:Non-maskable_interrupts
3307 ± 31% +117.9% 7204 ± 18% interrupts.CPU20.PMI:Performance_monitoring_interrupts
4039 +19.5% 4824 ± 3% interrupts.CPU21.CAL:Function_call_interrupts
3307 ± 31% +117.9% 7206 ± 18% interrupts.CPU21.NMI:Non-maskable_interrupts
3307 ± 31% +117.9% 7206 ± 18% interrupts.CPU21.PMI:Performance_monitoring_interrupts
4018 +17.6% 4725 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
3306 ± 31% +117.1% 7179 ± 19% interrupts.CPU22.NMI:Non-maskable_interrupts
3306 ± 31% +117.1% 7179 ± 19% interrupts.CPU22.PMI:Performance_monitoring_interrupts
4060 +18.1% 4794 interrupts.CPU23.CAL:Function_call_interrupts
3853 ± 23% +86.0% 7169 ± 19% interrupts.CPU23.NMI:Non-maskable_interrupts
3853 ± 23% +86.0% 7169 ± 19% interrupts.CPU23.PMI:Performance_monitoring_interrupts
4009 ± 2% +20.7% 4838 ± 3% interrupts.CPU24.CAL:Function_call_interrupts
3840 ± 23% +86.7% 7172 ± 19% interrupts.CPU24.NMI:Non-maskable_interrupts
3840 ± 23% +86.7% 7172 ± 19% interrupts.CPU24.PMI:Performance_monitoring_interrupts
3886 ± 5% +22.9% 4775 ± 3% interrupts.CPU25.CAL:Function_call_interrupts
3874 ± 23% +85.2% 7175 ± 19% interrupts.CPU25.NMI:Non-maskable_interrupts
3874 ± 23% +85.2% 7175 ± 19% interrupts.CPU25.PMI:Performance_monitoring_interrupts
4008 +19.2% 4779 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU26.NMI:Non-maskable_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU26.PMI:Performance_monitoring_interrupts
4051 +19.6% 4846 ± 2% interrupts.CPU27.CAL:Function_call_interrupts
3307 ± 31% +116.9% 7175 ± 19% interrupts.CPU27.NMI:Non-maskable_interrupts
3307 ± 31% +116.9% 7175 ± 19% interrupts.CPU27.PMI:Performance_monitoring_interrupts
3885 ± 6% +23.4% 4793 interrupts.CPU28.CAL:Function_call_interrupts
3305 ± 31% +117.4% 7186 ± 18% interrupts.CPU28.NMI:Non-maskable_interrupts
3305 ± 31% +117.4% 7186 ± 18% interrupts.CPU28.PMI:Performance_monitoring_interrupts
3303 ± 31% +117.4% 7182 ± 19% interrupts.CPU29.NMI:Non-maskable_interrupts
3303 ± 31% +117.4% 7182 ± 19% interrupts.CPU29.PMI:Performance_monitoring_interrupts
4041 +19.6% 4831 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
3753 ± 13% +30.1% 4882 ± 2% interrupts.CPU30.CAL:Function_call_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU30.NMI:Non-maskable_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU30.PMI:Performance_monitoring_interrupts
3954 +22.5% 4845 ± 2% interrupts.CPU31.CAL:Function_call_interrupts
3851 ± 24% +86.1% 7165 ± 19% interrupts.CPU31.NMI:Non-maskable_interrupts
3851 ± 24% +86.1% 7165 ± 19% interrupts.CPU31.PMI:Performance_monitoring_interrupts
4032 +19.4% 4814 ± 2% interrupts.CPU32.CAL:Function_call_interrupts
3898 ± 6% +23.8% 4825 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
3919 ± 2% +20.4% 4718 ± 6% interrupts.CPU34.CAL:Function_call_interrupts
3863 ± 24% +85.7% 7173 ± 19% interrupts.CPU34.NMI:Non-maskable_interrupts
3863 ± 24% +85.7% 7173 ± 19% interrupts.CPU34.PMI:Performance_monitoring_interrupts
3993 +16.2% 4641 ± 5% interrupts.CPU35.CAL:Function_call_interrupts
3855 ± 24% +85.9% 7168 ± 19% interrupts.CPU35.NMI:Non-maskable_interrupts
3855 ± 24% +85.9% 7168 ± 19% interrupts.CPU35.PMI:Performance_monitoring_interrupts
3911 ± 2% +23.2% 4817 ± 2% interrupts.CPU36.CAL:Function_call_interrupts
3856 ± 24% +85.9% 7171 ± 19% interrupts.CPU36.NMI:Non-maskable_interrupts
3856 ± 24% +85.9% 7171 ± 19% interrupts.CPU36.PMI:Performance_monitoring_interrupts
3928 ± 4% +22.3% 4802 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
4026 +15.0% 4629 ± 6% interrupts.CPU38.CAL:Function_call_interrupts
4034 +19.7% 4829 interrupts.CPU39.CAL:Function_call_interrupts
4036 +19.1% 4805 ± 3% interrupts.CPU4.CAL:Function_call_interrupts
4031 +20.5% 4856 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
4035 +17.9% 4758 ± 5% interrupts.CPU41.CAL:Function_call_interrupts
3919 ± 4% +20.0% 4704 ± 6% interrupts.CPU42.CAL:Function_call_interrupts
3950 +19.2% 4710 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
3931 ± 7% +22.5% 4816 ± 3% interrupts.CPU44.CAL:Function_call_interrupts
2836 ± 32% +119.7% 6232 ± 28% interrupts.CPU44.NMI:Non-maskable_interrupts
2836 ± 32% +119.7% 6232 ± 28% interrupts.CPU44.PMI:Performance_monitoring_interrupts
4018 +20.5% 4841 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
2770 ± 32% +124.5% 6218 ± 28% interrupts.CPU45.NMI:Non-maskable_interrupts
2770 ± 32% +124.5% 6218 ± 28% interrupts.CPU45.PMI:Performance_monitoring_interrupts
4043 +19.1% 4817 ± 3% interrupts.CPU46.CAL:Function_call_interrupts
2778 ± 32% +102.1% 5616 ± 42% interrupts.CPU46.NMI:Non-maskable_interrupts
2778 ± 32% +102.1% 5616 ± 42% interrupts.CPU46.PMI:Performance_monitoring_interrupts
4064 +19.3% 4846 ± 3% interrupts.CPU47.CAL:Function_call_interrupts
2771 ± 32% +138.2% 6600 ± 35% interrupts.CPU47.NMI:Non-maskable_interrupts
2771 ± 32% +138.2% 6600 ± 35% interrupts.CPU47.PMI:Performance_monitoring_interrupts
4037 +19.5% 4825 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
2774 ± 32% +102.6% 5622 ± 42% interrupts.CPU48.NMI:Non-maskable_interrupts
2774 ± 32% +102.6% 5622 ± 42% interrupts.CPU48.PMI:Performance_monitoring_interrupts
4054 +18.6% 4809 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
2769 ± 32% +102.9% 5618 ± 42% interrupts.CPU49.NMI:Non-maskable_interrupts
2769 ± 32% +102.9% 5618 ± 42% interrupts.CPU49.PMI:Performance_monitoring_interrupts
4036 +15.9% 4680 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
4056 +19.0% 4826 ± 3% interrupts.CPU50.CAL:Function_call_interrupts
2770 ± 32% +138.3% 6600 ± 35% interrupts.CPU50.NMI:Non-maskable_interrupts
2770 ± 32% +138.3% 6600 ± 35% interrupts.CPU50.PMI:Performance_monitoring_interrupts
4039 +19.6% 4832 ± 3% interrupts.CPU51.CAL:Function_call_interrupts
2774 ± 32% +137.9% 6602 ± 35% interrupts.CPU51.NMI:Non-maskable_interrupts
2774 ± 32% +137.9% 6602 ± 35% interrupts.CPU51.PMI:Performance_monitoring_interrupts
4059 +18.5% 4811 ± 3% interrupts.CPU52.CAL:Function_call_interrupts
2768 ± 32% +138.5% 6602 ± 35% interrupts.CPU52.NMI:Non-maskable_interrupts
2768 ± 32% +138.5% 6602 ± 35% interrupts.CPU52.PMI:Performance_monitoring_interrupts
4040 +18.2% 4775 ± 3% interrupts.CPU53.CAL:Function_call_interrupts
2768 ± 32% +138.4% 6599 ± 35% interrupts.CPU53.NMI:Non-maskable_interrupts
2768 ± 32% +138.4% 6599 ± 35% interrupts.CPU53.PMI:Performance_monitoring_interrupts
3982 ± 4% +21.5% 4839 ± 2% interrupts.CPU54.CAL:Function_call_interrupts
4036 ± 2% +16.0% 4681 ± 7% interrupts.CPU55.CAL:Function_call_interrupts
4002 ± 2% +17.5% 4701 ± 7% interrupts.CPU56.CAL:Function_call_interrupts
3319 ± 31% +117.0% 7201 ± 18% interrupts.CPU56.NMI:Non-maskable_interrupts
3319 ± 31% +117.0% 7201 ± 18% interrupts.CPU56.PMI:Performance_monitoring_interrupts
4049 +18.9% 4816 ± 4% interrupts.CPU57.CAL:Function_call_interrupts
3312 ± 31% +117.3% 7197 ± 18% interrupts.CPU57.NMI:Non-maskable_interrupts
3312 ± 31% +117.3% 7197 ± 18% interrupts.CPU57.PMI:Performance_monitoring_interrupts
3973 ± 3% +22.3% 4861 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
3314 ± 31% +139.8% 7948 interrupts.CPU58.NMI:Non-maskable_interrupts
3314 ± 31% +139.8% 7948 interrupts.CPU58.PMI:Performance_monitoring_interrupts
4046 +20.2% 4861 ± 2% interrupts.CPU59.CAL:Function_call_interrupts
2779 ± 32% +158.8% 7191 ± 18% interrupts.CPU59.NMI:Non-maskable_interrupts
2779 ± 32% +158.8% 7191 ± 18% interrupts.CPU59.PMI:Performance_monitoring_interrupts
4030 +20.2% 4845 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
3315 ± 32% +117.2% 7199 ± 18% interrupts.CPU6.NMI:Non-maskable_interrupts
3315 ± 32% +117.2% 7199 ± 18% interrupts.CPU6.PMI:Performance_monitoring_interrupts
4047 +19.0% 4817 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
2782 ± 32% +158.6% 7194 ± 18% interrupts.CPU60.NMI:Non-maskable_interrupts
2782 ± 32% +158.6% 7194 ± 18% interrupts.CPU60.PMI:Performance_monitoring_interrupts
3935 ± 5% +22.3% 4812 ± 3% interrupts.CPU61.CAL:Function_call_interrupts
2775 ± 32% +159.4% 7200 ± 18% interrupts.CPU61.NMI:Non-maskable_interrupts
2775 ± 32% +159.4% 7200 ± 18% interrupts.CPU61.PMI:Performance_monitoring_interrupts
4055 +19.9% 4861 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
2776 ± 32% +159.3% 7198 ± 18% interrupts.CPU62.NMI:Non-maskable_interrupts
2776 ± 32% +159.3% 7198 ± 18% interrupts.CPU62.PMI:Performance_monitoring_interrupts
4055 +19.0% 4827 ± 3% interrupts.CPU63.CAL:Function_call_interrupts
2781 ± 32% +158.8% 7197 ± 18% interrupts.CPU63.NMI:Non-maskable_interrupts
2781 ± 32% +158.8% 7197 ± 18% interrupts.CPU63.PMI:Performance_monitoring_interrupts
4044 +16.9% 4729 ± 5% interrupts.CPU64.CAL:Function_call_interrupts
2770 ± 32% +159.7% 7195 ± 18% interrupts.CPU64.NMI:Non-maskable_interrupts
2770 ± 32% +159.7% 7195 ± 18% interrupts.CPU64.PMI:Performance_monitoring_interrupts
4040 +19.6% 4833 ± 3% interrupts.CPU65.CAL:Function_call_interrupts
2782 ± 32% +158.9% 7204 ± 18% interrupts.CPU65.NMI:Non-maskable_interrupts
2782 ± 32% +158.9% 7204 ± 18% interrupts.CPU65.PMI:Performance_monitoring_interrupts
4047 +15.9% 4691 ± 6% interrupts.CPU66.CAL:Function_call_interrupts
2778 ± 32% +158.0% 7168 ± 19% interrupts.CPU66.NMI:Non-maskable_interrupts
2778 ± 32% +158.0% 7168 ± 19% interrupts.CPU66.PMI:Performance_monitoring_interrupts
4069 +13.7% 4626 ± 4% interrupts.CPU67.CAL:Function_call_interrupts
2783 ± 31% +157.4% 7165 ± 19% interrupts.CPU67.NMI:Non-maskable_interrupts
2783 ± 31% +157.4% 7165 ± 19% interrupts.CPU67.PMI:Performance_monitoring_interrupts
4045 +19.9% 4851 ± 2% interrupts.CPU68.CAL:Function_call_interrupts
2779 ± 32% +157.8% 7166 ± 19% interrupts.CPU68.NMI:Non-maskable_interrupts
2779 ± 32% +157.8% 7166 ± 19% interrupts.CPU68.PMI:Performance_monitoring_interrupts
3897 ± 5% +23.9% 4828 ± 3% interrupts.CPU69.CAL:Function_call_interrupts
2784 ± 31% +157.6% 7171 ± 19% interrupts.CPU69.NMI:Non-maskable_interrupts
2784 ± 31% +157.6% 7171 ± 19% interrupts.CPU69.PMI:Performance_monitoring_interrupts
4045 +19.7% 4841 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
2774 ± 32% +159.5% 7200 ± 18% interrupts.CPU7.NMI:Non-maskable_interrupts
2774 ± 32% +159.5% 7200 ± 18% interrupts.CPU7.PMI:Performance_monitoring_interrupts
4040 +17.1% 4730 ± 5% interrupts.CPU70.CAL:Function_call_interrupts
2784 ± 32% +157.7% 7174 ± 18% interrupts.CPU70.NMI:Non-maskable_interrupts
2784 ± 32% +157.7% 7174 ± 18% interrupts.CPU70.PMI:Performance_monitoring_interrupts
4051 +19.3% 4833 ± 2% interrupts.CPU71.CAL:Function_call_interrupts
2784 ± 32% +157.5% 7169 ± 19% interrupts.CPU71.NMI:Non-maskable_interrupts
2784 ± 32% +157.5% 7169 ± 19% interrupts.CPU71.PMI:Performance_monitoring_interrupts
2785 ± 33% +159.1% 7217 ± 17% interrupts.CPU72.NMI:Non-maskable_interrupts
2785 ± 33% +159.1% 7217 ± 17% interrupts.CPU72.PMI:Performance_monitoring_interrupts
3999 ± 3% +17.1% 4682 ± 5% interrupts.CPU73.CAL:Function_call_interrupts
2780 ± 32% +157.8% 7169 ± 19% interrupts.CPU73.NMI:Non-maskable_interrupts
2780 ± 32% +157.8% 7169 ± 19% interrupts.CPU73.PMI:Performance_monitoring_interrupts
3834 ± 8% +26.8% 4861 ± 2% interrupts.CPU74.CAL:Function_call_interrupts
2778 ± 32% +158.4% 7177 ± 19% interrupts.CPU74.NMI:Non-maskable_interrupts
2778 ± 32% +158.4% 7177 ± 19% interrupts.CPU74.PMI:Performance_monitoring_interrupts
4063 +19.1% 4840 ± 3% interrupts.CPU75.CAL:Function_call_interrupts
3331 ± 32% +115.3% 7171 ± 19% interrupts.CPU75.NMI:Non-maskable_interrupts
3331 ± 32% +115.3% 7171 ± 19% interrupts.CPU75.PMI:Performance_monitoring_interrupts
4021 +19.4% 4801 ± 3% interrupts.CPU76.CAL:Function_call_interrupts
3461 ± 26% +108.5% 7214 ± 18% interrupts.CPU76.NMI:Non-maskable_interrupts
3461 ± 26% +108.5% 7214 ± 18% interrupts.CPU76.PMI:Performance_monitoring_interrupts
3984 ± 3% +20.9% 4819 ± 3% interrupts.CPU77.CAL:Function_call_interrupts
3653 ± 39% +96.4% 7176 ± 19% interrupts.CPU77.NMI:Non-maskable_interrupts
3653 ± 39% +96.4% 7176 ± 19% interrupts.CPU77.PMI:Performance_monitoring_interrupts
4024 +18.9% 4786 ± 4% interrupts.CPU78.CAL:Function_call_interrupts
2782 ± 32% +158.1% 7181 ± 19% interrupts.CPU78.NMI:Non-maskable_interrupts
2782 ± 32% +158.1% 7181 ± 19% interrupts.CPU78.PMI:Performance_monitoring_interrupts
3897 ± 3% +22.8% 4787 ± 2% interrupts.CPU79.CAL:Function_call_interrupts
2784 ± 32% +157.7% 7174 ± 19% interrupts.CPU79.NMI:Non-maskable_interrupts
2784 ± 32% +157.7% 7174 ± 19% interrupts.CPU79.PMI:Performance_monitoring_interrupts
4024 ± 2% +19.5% 4808 ± 3% interrupts.CPU8.CAL:Function_call_interrupts
2770 ± 32% +159.9% 7201 ± 18% interrupts.CPU8.NMI:Non-maskable_interrupts
2770 ± 32% +159.9% 7201 ± 18% interrupts.CPU8.PMI:Performance_monitoring_interrupts
3710 ± 10% +30.5% 4840 ± 2% interrupts.CPU80.CAL:Function_call_interrupts
3323 ± 32% +115.8% 7172 ± 19% interrupts.CPU80.NMI:Non-maskable_interrupts
3323 ± 32% +115.8% 7172 ± 19% interrupts.CPU80.PMI:Performance_monitoring_interrupts
3997 +18.4% 4733 ± 2% interrupts.CPU81.CAL:Function_call_interrupts
3311 ± 32% +117.4% 7198 ± 18% interrupts.CPU81.NMI:Non-maskable_interrupts
3311 ± 32% +117.4% 7198 ± 18% interrupts.CPU81.PMI:Performance_monitoring_interrupts
4049 +17.6% 4763 ± 2% interrupts.CPU82.CAL:Function_call_interrupts
3319 ± 32% +116.1% 7172 ± 19% interrupts.CPU82.NMI:Non-maskable_interrupts
3319 ± 32% +116.1% 7172 ± 19% interrupts.CPU82.PMI:Performance_monitoring_interrupts
4013 +20.3% 4826 ± 2% interrupts.CPU83.CAL:Function_call_interrupts
3318 ± 32% +116.9% 7199 ± 18% interrupts.CPU83.NMI:Non-maskable_interrupts
3318 ± 32% +116.9% 7199 ± 18% interrupts.CPU83.PMI:Performance_monitoring_interrupts
4040 +19.6% 4834 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
3345 ± 32% +114.5% 7175 ± 19% interrupts.CPU84.NMI:Non-maskable_interrupts
3345 ± 32% +114.5% 7175 ± 19% interrupts.CPU84.PMI:Performance_monitoring_interrupts
3991 ± 2% +19.8% 4781 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
3350 ± 33% +113.7% 7162 ± 19% interrupts.CPU86.NMI:Non-maskable_interrupts
3350 ± 33% +113.7% 7162 ± 19% interrupts.CPU86.PMI:Performance_monitoring_interrupts
3326 ± 32% +117.0% 7216 ± 17% interrupts.CPU87.NMI:Non-maskable_interrupts
3326 ± 32% +117.0% 7216 ± 17% interrupts.CPU87.PMI:Performance_monitoring_interrupts
4058 +19.1% 4835 ± 3% interrupts.CPU9.CAL:Function_call_interrupts
2782 ± 32% +159.5% 7220 ± 18% interrupts.CPU9.NMI:Non-maskable_interrupts
2782 ± 32% +159.5% 7220 ± 18% interrupts.CPU9.PMI:Performance_monitoring_interrupts
51.00 ± 59% +150.0% 127.50 ± 23% interrupts.IWI:IRQ_work_interrupts
283310 ± 21% +114.7% 608405 ± 20% interrupts.NMI:Non-maskable_interrupts
283310 ± 21% +114.7% 608405 ± 20% interrupts.PMI:Performance_monitoring_interrupts
38196883 +161.0% 99690427 ± 2% interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/custom/reaim/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 kmsg.do_IRQ:#No_irq_handler_for_vector
:4 25% 1:4 dmesg.WARNING:at#for_ip_error_entry/0x
:4 25% 1:4 dmesg.WARNING:at#for_ip_ret_from_intr/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
2:4 17% 2:4 perf-profile.calltrace.cycles-pp.error_entry
2:4 24% 3:4 perf-profile.children.cycles-pp.error_entry
%stddev %change %stddev
\ | \
4.77 +11.4% 5.32 ± 2% reaim.std_dev_percent
0.03 +8.2% 0.03 reaim.std_dev_time
2025938 -10.6% 1811037 reaim.time.involuntary_context_switches
8797312 +1.0% 8887670 reaim.time.voluntary_context_switches
9078 +12.0% 10169 meminfo.PageTables
2269 +11.6% 2531 proc-vmstat.nr_page_table_pages
41893 +6.7% 44683 ± 4% softirqs.CPU98.RCU
4.65 ± 2% -15.3% 3.94 ± 10% turbostat.Pkg%pc6
178.24 +2.0% 181.79 turbostat.PkgWatt
12123 ± 18% +33.1% 16134 ± 3% numa-meminfo.node0.Mapped
41009 ± 5% +12.4% 46109 ± 4% numa-meminfo.node0.SReclaimable
131166 ± 5% +4.8% 137483 ± 5% numa-meminfo.node0.SUnreclaim
172176 ± 2% +6.6% 183593 ± 3% numa-meminfo.node0.Slab
3014 ± 18% +34.4% 4052 ± 2% numa-vmstat.node0.nr_mapped
10257 ± 6% +12.4% 11530 ± 4% numa-vmstat.node0.nr_slab_reclaimable
32780 ± 5% +4.9% 34383 ± 5% numa-vmstat.node0.nr_slab_unreclaimable
3664 ± 16% -25.6% 2725 ± 2% numa-vmstat.node1.nr_mapped
1969 ± 4% +2.7% 2022 ± 5% slabinfo.kmalloc-4096.active_objs
5584 +9.8% 6130 ± 3% slabinfo.mm_struct.active_objs
5614 +9.7% 6158 ± 3% slabinfo.mm_struct.num_objs
877.00 +11.5% 978.25 ± 3% slabinfo.names_cache.active_objs
877.00 +11.5% 978.25 ± 3% slabinfo.names_cache.num_objs
1495 -9.8% 1348 ± 5% slabinfo.nsproxy.active_objs
1495 -9.8% 1348 ± 5% slabinfo.nsproxy.num_objs
1313 ± 15% +27.1% 1668 ± 2% sched_debug.cpu.nr_load_updates.stddev
2664 +44.4% 3847 ± 14% sched_debug.cpu.nr_switches.stddev
0.02 ± 77% -89.5% 0.00 ±223% sched_debug.cpu.nr_uninterruptible.avg
81.22 ± 12% +30.5% 106.02 ± 8% sched_debug.cpu.nr_uninterruptible.max
30.10 +53.8% 46.29 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
2408 ± 3% +52.2% 3665 ± 18% sched_debug.cpu.sched_count.stddev
1066 ± 6% +68.1% 1792 ± 17% sched_debug.cpu.sched_goidle.stddev
1413 ± 3% +36.5% 1929 ± 19% sched_debug.cpu.ttwu_count.stddev
553.65 ± 2% -8.6% 505.96 ± 2% sched_debug.cpu.ttwu_local.stddev
3.19 -0.2 2.99 ± 2% perf-stat.i.branch-miss-rate%
7.45 ± 2% -0.7 6.79 ± 2% perf-stat.i.cache-miss-rate%
56644639 -11.5% 50150131 perf-stat.i.cache-misses
5.795e+10 +1.5% 5.883e+10 perf-stat.i.cpu-cycles
0.33 -0.0 0.29 ± 3% perf-stat.i.dTLB-load-miss-rate%
0.10 -0.0 0.09 ± 3% perf-stat.i.dTLB-store-miss-rate%
44.46 -1.5 42.99 ± 2% perf-stat.i.iTLB-load-miss-rate%
3761774 +5.5% 3969797 perf-stat.i.iTLB-load-misses
79.67 -4.1 75.54 perf-stat.i.node-load-miss-rate%
16697015 -17.2% 13819973 perf-stat.i.node-load-misses
2830435 -2.3% 2764205 perf-stat.i.node-loads
60.87 ± 2% -6.0 54.85 ± 3% perf-stat.i.node-store-miss-rate%
3631713 -15.0% 3088593 ± 3% perf-stat.i.node-store-misses
2047418 +7.2% 2195246 perf-stat.i.node-stores
6.61 -0.9 5.71 perf-stat.overall.cache-miss-rate%
44.36 +0.7 45.06 perf-stat.overall.iTLB-load-miss-rate%
11359 -3.1% 11011 perf-stat.overall.instructions-per-iTLB-miss
85.51 -2.2 83.33 perf-stat.overall.node-load-miss-rate%
63.95 -5.5 58.44 perf-stat.overall.node-store-miss-rate%
1.782e+10 -13.1% 1.548e+10 perf-stat.total.cache-misses
5160156 -1.5% 5081874 perf-stat.total.cpu-migrations
1.184e+09 +3.5% 1.225e+09 perf-stat.total.iTLB-load-misses
5.254e+09 -18.8% 4.266e+09 perf-stat.total.node-load-misses
8.905e+08 -4.2% 8.532e+08 perf-stat.total.node-loads
1.143e+09 -16.6% 9.537e+08 ± 3% perf-stat.total.node-store-misses
6.442e+08 +5.2% 6.776e+08 perf-stat.total.node-stores
2.51 -0.3 2.23 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.07 ± 2% -0.2 0.84 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu
1.04 ± 2% -0.2 0.81 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu
1.37 ± 3% -0.2 1.21 ± 9% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.88 ± 6% -0.1 0.77 ± 6% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
0.81 ± 6% -0.1 0.72 ± 6% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.flush_old_exec
0.60 ± 8% +0.1 0.72 ± 16% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4
0.17 ±141% +0.4 0.57 ± 3% perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.29 ± 10% +0.4 1.70 ± 12% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.70 +1.0 26.66 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
4.15 ± 23% -0.9 3.20 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
14.46 -0.5 14.00 ± 2% perf-profile.children.cycles-pp.exit_mmap
10.89 -0.5 10.43 perf-profile.children.cycles-pp._do_fork
2.46 -0.4 2.09 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.90 -0.3 3.56 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
3.87 -0.3 3.53 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
10.20 -0.3 9.88 perf-profile.children.cycles-pp.copy_process
2.53 -0.3 2.24 ± 2% perf-profile.children.cycles-pp.copy_page_range
1.64 ± 5% -0.2 1.41 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
1.73 ± 4% -0.2 1.51 perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.29 ± 3% -0.2 1.10 ± 4% perf-profile.children.cycles-pp.__schedule
1.22 ± 8% -0.2 1.04 ± 5% perf-profile.children.cycles-pp.ret_from_fork
0.59 ± 3% -0.1 0.45 ± 5% perf-profile.children.cycles-pp.pick_next_task_fair
0.43 ± 2% -0.1 0.30 ± 6% perf-profile.children.cycles-pp.free_pgd_range
0.64 -0.1 0.51 ± 4% perf-profile.children.cycles-pp.wake_up_new_task
0.30 -0.1 0.18 ± 6% perf-profile.children.cycles-pp.free_unref_page
0.98 ± 2% -0.1 0.86 ± 2% perf-profile.children.cycles-pp.rcu_process_callbacks
0.50 ± 4% -0.1 0.38 ± 4% perf-profile.children.cycles-pp.load_balance
0.51 ± 4% -0.1 0.40 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
1.22 -0.1 1.10 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.39 ± 9% -0.1 0.28 ± 7% perf-profile.children.cycles-pp.schedule_tail
0.29 -0.1 0.18 ± 4% perf-profile.children.cycles-pp.free_pcppages_bulk
0.35 ± 4% -0.1 0.25 ± 3% perf-profile.children.cycles-pp.do_task_dead
0.88 -0.1 0.78 ± 2% perf-profile.children.cycles-pp.select_task_rq_fair
0.35 -0.1 0.26 ± 4% perf-profile.children.cycles-pp.free_unref_page_commit
0.13 ± 7% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.sched_ttwu_pending
1.04 -0.1 0.96 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
0.70 ± 2% -0.1 0.62 ± 3% perf-profile.children.cycles-pp.__pte_alloc
0.21 ± 3% -0.1 0.14 ± 7% perf-profile.children.cycles-pp.idle_cpu
0.32 ± 6% -0.1 0.25 ± 6% perf-profile.children.cycles-pp.find_busiest_group
0.36 ± 4% -0.1 0.29 ± 5% perf-profile.children.cycles-pp.finish_task_switch
0.97 -0.1 0.91 perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.32 -0.1 0.25 ± 6% perf-profile.children.cycles-pp.mm_init
0.28 ± 3% -0.1 0.22 ± 4% perf-profile.children.cycles-pp.pgd_alloc
0.14 ± 10% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.free_one_page
0.56 ± 4% -0.1 0.50 ± 4% perf-profile.children.cycles-pp.schedule
0.23 ± 2% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.__get_free_pages
0.81 ± 2% -0.1 0.75 ± 3% perf-profile.children.cycles-pp.__slab_free
0.31 ± 9% -0.1 0.25 ± 7% perf-profile.children.cycles-pp.__put_user_4
0.19 ± 2% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.dup_userfaultfd
0.15 ± 6% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.__free_pages_ok
2.25 -0.0 2.20 perf-profile.children.cycles-pp.anon_vma_clone
0.08 ± 5% -0.0 0.04 ± 60% perf-profile.children.cycles-pp.unfreeze_partials
1.00 -0.0 0.96 perf-profile.children.cycles-pp.sys_write
0.20 ± 4% -0.0 0.16 ± 13% perf-profile.children.cycles-pp.devkmsg_write
0.20 ± 4% -0.0 0.16 ± 13% perf-profile.children.cycles-pp.printk_emit
0.21 ± 3% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.update_curr
0.89 -0.0 0.86 ± 2% perf-profile.children.cycles-pp.__vfs_write
0.09 -0.0 0.06 ± 11% perf-profile.children.cycles-pp.new_slab
0.16 ± 7% -0.0 0.13 ± 11% perf-profile.children.cycles-pp.__mmdrop
0.09 ± 9% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.put_cpu_partial
0.44 -0.0 0.41 perf-profile.children.cycles-pp.remove_vma
0.52 -0.0 0.49 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
0.07 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.lapic_next_deadline
0.20 ± 4% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.__put_task_struct
0.17 ± 7% -0.0 0.15 ± 5% perf-profile.children.cycles-pp.__lock_text_start
0.14 ± 5% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.do_send_sig_info
0.09 -0.0 0.07 perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.13 ± 6% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.put_task_stack
0.09 ± 9% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.__queue_work
0.14 ± 3% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.unmap_single_vma
0.10 -0.0 0.09 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg_locked
0.28 ± 4% +0.0 0.31 ± 2% perf-profile.children.cycles-pp.vfs_statx
0.26 ± 4% +0.0 0.29 ± 3% perf-profile.children.cycles-pp.SYSC_newstat
0.56 ± 3% +0.0 0.59 ± 3% perf-profile.children.cycles-pp.elf_map
0.45 ± 5% +0.0 0.48 ± 3% perf-profile.children.cycles-pp.__wake_up_common
1.12 +0.2 1.32 ± 3% perf-profile.children.cycles-pp.queued_read_lock_slowpath
1.26 ± 2% +0.2 1.49 ± 3% perf-profile.children.cycles-pp.queued_write_lock_slowpath
2.16 ± 2% +0.2 2.39 ± 2% perf-profile.children.cycles-pp.do_wait
2.19 ± 2% +0.2 2.43 ± 2% perf-profile.children.cycles-pp.SYSC_wait4
2.18 ± 2% +0.2 2.42 ± 2% perf-profile.children.cycles-pp.kernel_wait4
25.95 +1.0 26.92 perf-profile.children.cycles-pp.intel_idle
1.46 -0.1 1.31 ± 3% perf-profile.self.cycles-pp.copy_page_range
0.51 ± 4% -0.1 0.40 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.21 ± 3% -0.1 0.14 ± 7% perf-profile.self.cycles-pp.idle_cpu
0.96 -0.1 0.89 perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.19 -0.1 0.14 ± 3% perf-profile.self.cycles-pp.dup_userfaultfd
0.35 ± 2% -0.0 0.30 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.25 ± 5% -0.0 0.20 ± 6% perf-profile.self.cycles-pp.find_busiest_group
0.28 ± 2% -0.0 0.24 ± 3% perf-profile.self.cycles-pp.unlink_anon_vmas
0.57 -0.0 0.53 ± 3% perf-profile.self.cycles-pp.select_task_rq_fair
0.15 ± 3% -0.0 0.11 ± 7% perf-profile.self.cycles-pp.free_pcppages_bulk
0.55 -0.0 0.52 ± 2% perf-profile.self.cycles-pp.__slab_free
0.09 -0.0 0.07 perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.07 ± 11% -0.0 0.05 perf-profile.self.cycles-pp.update_rq_clock
0.14 ± 6% +0.0 0.16 perf-profile.self.cycles-pp.handle_mm_fault
25.94 +1.0 26.91 perf-profile.self.cycles-pp.intel_idle
2592 ± 11% -20.4% 2062 ± 11% interrupts.CPU1.RES:Rescheduling_interrupts
1584 -20.9% 1254 ± 8% interrupts.CPU10.RES:Rescheduling_interrupts
1405 ± 3% -21.0% 1110 ± 2% interrupts.CPU100.RES:Rescheduling_interrupts
1420 ± 4% -23.8% 1082 ± 10% interrupts.CPU101.RES:Rescheduling_interrupts
1421 ± 2% -19.7% 1141 ± 11% interrupts.CPU102.RES:Rescheduling_interrupts
1501 ± 27% +35.5% 2033 ± 8% interrupts.CPU103.NMI:Non-maskable_interrupts
1501 ± 27% +35.5% 2033 ± 8% interrupts.CPU103.PMI:Performance_monitoring_interrupts
1394 -23.0% 1074 ± 5% interrupts.CPU103.RES:Rescheduling_interrupts
1566 -19.1% 1266 ± 11% interrupts.CPU11.RES:Rescheduling_interrupts
1531 -17.2% 1267 ± 7% interrupts.CPU12.RES:Rescheduling_interrupts
1559 ± 2% -22.6% 1206 ± 8% interrupts.CPU13.RES:Rescheduling_interrupts
1503 -23.4% 1151 ± 6% interrupts.CPU15.RES:Rescheduling_interrupts
1584 ± 3% -24.2% 1201 ± 8% interrupts.CPU16.RES:Rescheduling_interrupts
1528 ± 6% -18.8% 1240 ± 13% interrupts.CPU17.RES:Rescheduling_interrupts
1518 ± 3% -21.1% 1197 ± 6% interrupts.CPU18.RES:Rescheduling_interrupts
2303 ± 11% -19.5% 1854 interrupts.CPU19.NMI:Non-maskable_interrupts
2303 ± 11% -19.5% 1854 interrupts.CPU19.PMI:Performance_monitoring_interrupts
1457 ± 4% -18.0% 1194 ± 4% interrupts.CPU19.RES:Rescheduling_interrupts
1884 ± 5% -15.3% 1596 ± 4% interrupts.CPU2.RES:Rescheduling_interrupts
1543 ± 4% -22.9% 1189 ± 7% interrupts.CPU20.RES:Rescheduling_interrupts
1480 -19.4% 1193 ± 5% interrupts.CPU21.RES:Rescheduling_interrupts
1492 ± 2% -17.5% 1231 ± 6% interrupts.CPU22.RES:Rescheduling_interrupts
1482 ± 2% -17.0% 1230 ± 7% interrupts.CPU24.RES:Rescheduling_interrupts
1434 ± 3% -17.4% 1184 ± 6% interrupts.CPU25.RES:Rescheduling_interrupts
1568 ± 4% -12.7% 1368 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
1544 -16.5% 1289 ± 3% interrupts.CPU27.RES:Rescheduling_interrupts
1486 ± 3% -16.6% 1238 ± 5% interrupts.CPU29.RES:Rescheduling_interrupts
1856 ± 3% -14.3% 1591 ± 8% interrupts.CPU3.RES:Rescheduling_interrupts
1507 -18.8% 1224 ± 9% interrupts.CPU30.RES:Rescheduling_interrupts
1561 ± 2% -19.9% 1250 ± 3% interrupts.CPU31.RES:Rescheduling_interrupts
1551 -23.4% 1187 ± 3% interrupts.CPU32.RES:Rescheduling_interrupts
1449 ± 4% -16.6% 1208 ± 9% interrupts.CPU33.RES:Rescheduling_interrupts
1521 ± 2% -21.6% 1193 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
1644 ± 15% -26.2% 1212 ± 7% interrupts.CPU35.RES:Rescheduling_interrupts
1498 -22.5% 1161 ± 2% interrupts.CPU36.RES:Rescheduling_interrupts
1487 ± 3% -19.8% 1192 ± 4% interrupts.CPU37.RES:Rescheduling_interrupts
1538 ± 4% -25.0% 1153 ± 5% interrupts.CPU38.RES:Rescheduling_interrupts
1486 -20.6% 1181 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
1488 ± 2% -20.2% 1187 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
1503 -22.3% 1168 ± 10% interrupts.CPU41.RES:Rescheduling_interrupts
1560 ± 23% +24.5% 1942 ± 4% interrupts.CPU43.NMI:Non-maskable_interrupts
1560 ± 23% +24.5% 1942 ± 4% interrupts.CPU43.PMI:Performance_monitoring_interrupts
1654 ± 7% -26.5% 1216 ± 4% interrupts.CPU43.RES:Rescheduling_interrupts
1501 ± 5% -23.5% 1148 ± 4% interrupts.CPU44.RES:Rescheduling_interrupts
1473 ± 3% -21.0% 1164 ± 7% interrupts.CPU45.RES:Rescheduling_interrupts
1424 ± 3% -18.0% 1167 ± 6% interrupts.CPU46.RES:Rescheduling_interrupts
1481 -25.1% 1109 ± 4% interrupts.CPU47.RES:Rescheduling_interrupts
1436 -19.8% 1152 ± 4% interrupts.CPU48.RES:Rescheduling_interrupts
1688 ± 2% -20.2% 1347 ± 8% interrupts.CPU5.RES:Rescheduling_interrupts
1440 ± 2% -20.9% 1139 ± 7% interrupts.CPU50.RES:Rescheduling_interrupts
1462 -23.5% 1118 ± 7% interrupts.CPU51.RES:Rescheduling_interrupts
1410 ± 2% -14.7% 1203 ± 5% interrupts.CPU52.RES:Rescheduling_interrupts
1524 ± 2% -24.6% 1149 ± 5% interrupts.CPU53.RES:Rescheduling_interrupts
1438 -16.5% 1201 ± 9% interrupts.CPU54.RES:Rescheduling_interrupts
1454 ± 2% -19.5% 1170 ± 6% interrupts.CPU55.RES:Rescheduling_interrupts
1468 -20.1% 1173 ± 4% interrupts.CPU56.RES:Rescheduling_interrupts
1461 -20.6% 1159 ± 4% interrupts.CPU57.RES:Rescheduling_interrupts
1410 ± 2% -18.1% 1155 ± 4% interrupts.CPU58.RES:Rescheduling_interrupts
1452 ± 3% -19.0% 1176 ± 6% interrupts.CPU59.RES:Rescheduling_interrupts
1621 ± 4% -16.3% 1357 ± 5% interrupts.CPU6.RES:Rescheduling_interrupts
1455 ± 2% -22.7% 1124 ± 8% interrupts.CPU60.RES:Rescheduling_interrupts
1491 ± 3% -25.8% 1106 ± 11% interrupts.CPU61.RES:Rescheduling_interrupts
1401 -18.4% 1143 ± 3% interrupts.CPU62.RES:Rescheduling_interrupts
1429 -19.4% 1152 ± 9% interrupts.CPU63.RES:Rescheduling_interrupts
1437 -22.8% 1109 ± 8% interrupts.CPU64.RES:Rescheduling_interrupts
1499 -26.4% 1104 ± 7% interrupts.CPU65.RES:Rescheduling_interrupts
1485 ± 4% -23.0% 1144 ± 7% interrupts.CPU66.RES:Rescheduling_interrupts
1405 ± 3% -19.0% 1138 ± 8% interrupts.CPU67.RES:Rescheduling_interrupts
1492 ± 2% -22.4% 1159 ± 12% interrupts.CPU68.RES:Rescheduling_interrupts
1435 ± 4% -19.9% 1149 ± 14% interrupts.CPU69.RES:Rescheduling_interrupts
1625 ± 3% -15.6% 1371 ± 6% interrupts.CPU7.RES:Rescheduling_interrupts
1480 ± 3% -21.4% 1164 ± 12% interrupts.CPU70.RES:Rescheduling_interrupts
2355 ± 10% -30.9% 1627 ± 25% interrupts.CPU71.NMI:Non-maskable_interrupts
2355 ± 10% -30.9% 1627 ± 25% interrupts.CPU71.PMI:Performance_monitoring_interrupts
1428 ± 3% -19.4% 1151 ± 8% interrupts.CPU71.RES:Rescheduling_interrupts
1427 ± 2% -19.7% 1145 ± 9% interrupts.CPU72.RES:Rescheduling_interrupts
1452 ± 4% -17.5% 1198 ± 7% interrupts.CPU73.RES:Rescheduling_interrupts
1419 ± 2% -19.0% 1149 ± 6% interrupts.CPU74.RES:Rescheduling_interrupts
1441 ± 2% -18.4% 1176 ± 9% interrupts.CPU75.RES:Rescheduling_interrupts
1435 ± 3% -16.1% 1204 ± 6% interrupts.CPU76.RES:Rescheduling_interrupts
1445 -22.2% 1124 ± 6% interrupts.CPU77.RES:Rescheduling_interrupts
1481 ± 4% -23.8% 1128 ± 8% interrupts.CPU78.RES:Rescheduling_interrupts
1392 -20.7% 1104 ± 9% interrupts.CPU79.RES:Rescheduling_interrupts
1621 ± 4% -22.7% 1252 ± 9% interrupts.CPU8.RES:Rescheduling_interrupts
1478 ± 2% -24.3% 1118 ± 6% interrupts.CPU80.RES:Rescheduling_interrupts
1481 ± 4% -23.2% 1137 ± 8% interrupts.CPU81.RES:Rescheduling_interrupts
1453 ± 2% -20.8% 1151 ± 4% interrupts.CPU82.RES:Rescheduling_interrupts
1431 -22.5% 1110 ± 10% interrupts.CPU83.RES:Rescheduling_interrupts
1477 ± 4% -25.9% 1094 ± 7% interrupts.CPU84.RES:Rescheduling_interrupts
1467 ± 2% -21.4% 1153 ± 6% interrupts.CPU85.RES:Rescheduling_interrupts
1427 ± 3% -20.1% 1140 ± 12% interrupts.CPU86.RES:Rescheduling_interrupts
1512 ± 5% -25.5% 1126 ± 5% interrupts.CPU87.RES:Rescheduling_interrupts
1409 -20.8% 1115 ± 5% interrupts.CPU88.RES:Rescheduling_interrupts
1408 ± 2% -18.8% 1143 ± 5% interrupts.CPU89.RES:Rescheduling_interrupts
1659 ± 2% -22.0% 1294 ± 11% interrupts.CPU9.RES:Rescheduling_interrupts
1475 ± 3% -23.7% 1126 ± 5% interrupts.CPU90.RES:Rescheduling_interrupts
1444 -22.8% 1114 ± 7% interrupts.CPU91.RES:Rescheduling_interrupts
1442 ± 6% -21.3% 1135 ± 7% interrupts.CPU92.RES:Rescheduling_interrupts
1466 ± 2% -21.7% 1148 ± 2% interrupts.CPU93.RES:Rescheduling_interrupts
1413 ± 2% -17.7% 1163 ± 6% interrupts.CPU94.RES:Rescheduling_interrupts
1611 ± 3% -29.8% 1131 ± 5% interrupts.CPU95.RES:Rescheduling_interrupts
1406 -20.1% 1123 ± 5% interrupts.CPU96.RES:Rescheduling_interrupts
1386 ± 3% -20.0% 1109 ± 9% interrupts.CPU97.RES:Rescheduling_interrupts
1406 ± 4% -21.6% 1102 ± 7% interrupts.CPU98.RES:Rescheduling_interrupts
1379 ± 4% -18.9% 1118 ± 8% interrupts.CPU99.RES:Rescheduling_interrupts
163356 -19.1% 132229 interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/alltests/reaim/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_error_entry/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
1:4 9% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 4% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
8.37 -4.0% 8.03 reaim.child_systime
0.42 +2.4% 0.43 reaim.std_dev_percent
294600 -7.7% 271774 reaim.time.involuntary_context_switches
329.71 -4.0% 316.42 reaim.time.system_time
1675279 +1.3% 1696674 reaim.time.voluntary_context_switches
4.516e+08 ± 6% +17.8% 5.322e+08 ± 10% cpuidle.POLL.time
200189 ± 3% +10.4% 220923 meminfo.AnonHugePages
154.84 +0.8% 156.13 turbostat.PkgWatt
64.74 ± 20% +95.3% 126.45 ± 19% boot-time.boot
6459 ± 21% +99.3% 12874 ± 20% boot-time.idle
40.50 ± 43% +96.3% 79.50 ± 32% numa-vmstat.node1.nr_anon_transparent_hugepages
12122 ± 7% -20.2% 9674 ± 14% numa-vmstat.node1.nr_slab_reclaimable
153053 ± 3% +10.6% 169298 ± 7% numa-meminfo.node0.Slab
84083 ± 42% +95.4% 164331 ± 32% numa-meminfo.node1.AnonHugePages
48467 ± 7% -20.2% 38679 ± 14% numa-meminfo.node1.SReclaimable
97.25 ± 3% +10.8% 107.75 proc-vmstat.nr_anon_transparent_hugepages
26303 -1.8% 25827 proc-vmstat.nr_kernel_stack
4731 -5.2% 4485 ± 3% proc-vmstat.nr_page_table_pages
2463 ±134% -91.1% 220.50 ± 98% proc-vmstat.numa_hint_faults
16885 ± 6% -9.6% 15258 ± 4% softirqs.CPU32.RCU
30855 ± 13% -14.3% 26453 softirqs.CPU4.TIMER
17409 ± 7% -12.5% 15231 softirqs.CPU51.RCU
18157 ± 9% -12.9% 15809 ± 2% softirqs.CPU55.RCU
16588 ± 10% -11.6% 14662 ± 2% softirqs.CPU78.RCU
17307 ± 7% -12.6% 15130 ± 2% softirqs.CPU87.RCU
17146 ± 9% -13.2% 14880 ± 2% softirqs.CPU90.RCU
2593 ± 6% -17.3% 2145 ± 10% slabinfo.Acpi-ParseExt.active_objs
2593 ± 6% -17.3% 2145 ± 10% slabinfo.Acpi-ParseExt.num_objs
682.50 ± 6% -17.1% 565.50 ± 5% slabinfo.bdev_cache.active_objs
682.50 ± 6% -17.1% 565.50 ± 5% slabinfo.bdev_cache.num_objs
6102 ± 5% -30.6% 4234 ± 13% slabinfo.eventpoll_epi.active_objs
6102 ± 5% -30.6% 4234 ± 13% slabinfo.eventpoll_epi.num_objs
5340 ± 5% -30.6% 3704 ± 13% slabinfo.eventpoll_pwq.active_objs
5340 ± 5% -30.6% 3704 ± 13% slabinfo.eventpoll_pwq.num_objs
1018 ± 9% -17.6% 839.00 ± 2% slabinfo.file_lock_cache.active_objs
1018 ± 9% -17.6% 839.00 ± 2% slabinfo.file_lock_cache.num_objs
3359 ± 7% -12.7% 2933 ± 8% slabinfo.fsnotify_mark_connector.active_objs
3359 ± 7% -12.7% 2933 ± 8% slabinfo.fsnotify_mark_connector.num_objs
1485 ± 3% -10.9% 1323 ± 6% slabinfo.nsproxy.active_objs
1485 ± 3% -10.9% 1323 ± 6% slabinfo.nsproxy.num_objs
1.67 -0.0 1.64 perf-stat.branch-miss-rate%
1.383e+10 -1.6% 1.361e+10 perf-stat.branch-misses
7.52 -1.0 6.54 perf-stat.cache-miss-rate%
3.062e+09 -13.7% 2.643e+09 perf-stat.cache-misses
1.041e+13 +2.4% 1.066e+13 perf-stat.cpu-cycles
744682 -1.2% 735818 perf-stat.cpu-migrations
4.046e+08 +3.2% 4.177e+08 perf-stat.iTLB-load-misses
4.821e+08 +3.3% 4.98e+08 perf-stat.iTLB-loads
16668 -2.8% 16200 perf-stat.instructions-per-iTLB-miss
87.92 -2.1 85.80 perf-stat.node-load-miss-rate%
8.016e+08 -20.8% 6.351e+08 perf-stat.node-load-misses
1.102e+08 -4.6% 1.051e+08 perf-stat.node-loads
59.53 -6.5 53.07 perf-stat.node-store-miss-rate%
1.435e+08 -24.7% 1.081e+08 ± 2% perf-stat.node-store-misses
97539783 -2.0% 95550500 perf-stat.node-stores
552.25 ± 27% +67.3% 923.75 ± 24% interrupts.CPU10.NMI:Non-maskable_interrupts
552.25 ± 27% +67.3% 923.75 ± 24% interrupts.CPU10.PMI:Performance_monitoring_interrupts
455.50 +32.9% 605.50 ± 19% interrupts.CPU15.RES:Rescheduling_interrupts
361.75 ± 6% +58.9% 574.75 ± 23% interrupts.CPU26.RES:Rescheduling_interrupts
321.25 ± 7% +22.0% 392.00 ± 6% interrupts.CPU30.RES:Rescheduling_interrupts
278.25 ± 9% +18.1% 328.75 ± 13% interrupts.CPU41.RES:Rescheduling_interrupts
746.75 ± 11% +60.5% 1198 ± 37% interrupts.CPU44.NMI:Non-maskable_interrupts
746.75 ± 11% +60.5% 1198 ± 37% interrupts.CPU44.PMI:Performance_monitoring_interrupts
645.25 ± 32% +43.0% 922.50 ± 13% interrupts.CPU47.NMI:Non-maskable_interrupts
645.25 ± 32% +43.0% 922.50 ± 13% interrupts.CPU47.PMI:Performance_monitoring_interrupts
631.25 ± 23% +37.4% 867.25 ± 12% interrupts.CPU58.NMI:Non-maskable_interrupts
631.25 ± 23% +37.4% 867.25 ± 12% interrupts.CPU58.PMI:Performance_monitoring_interrupts
713.50 ± 12% +22.2% 871.75 ± 10% interrupts.CPU65.NMI:Non-maskable_interrupts
713.50 ± 12% +22.2% 871.75 ± 10% interrupts.CPU65.PMI:Performance_monitoring_interrupts
620.00 ± 14% +95.4% 1211 ± 56% interrupts.CPU72.NMI:Non-maskable_interrupts
620.00 ± 14% +95.4% 1211 ± 56% interrupts.CPU72.PMI:Performance_monitoring_interrupts
620.75 ± 30% +72.8% 1072 ± 33% interrupts.CPU83.NMI:Non-maskable_interrupts
620.75 ± 30% +72.8% 1072 ± 33% interrupts.CPU83.PMI:Performance_monitoring_interrupts
779.83 ± 4% +53.9% 1200 ± 16% sched_debug.cfs_rq:/.exec_clock.stddev
43531 ± 4% +69.1% 73628 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
1.00 +9.3% 1.09 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
2.39 ± 75% +256.7% 8.54 ± 53% sched_debug.cfs_rq:/.removed.load_avg.avg
16.68 ± 62% +171.4% 45.26 ± 35% sched_debug.cfs_rq:/.removed.load_avg.stddev
110.67 ± 75% +254.3% 392.14 ± 53% sched_debug.cfs_rq:/.removed.runnable_sum.avg
771.19 ± 62% +170.1% 2082 ± 35% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
43530 ± 4% +69.1% 73628 ± 7% sched_debug.cfs_rq:/.spread0.stddev
216468 ± 5% +36.6% 295649 ± 6% sched_debug.cpu.clock.avg
216487 ± 5% +36.6% 295669 ± 6% sched_debug.cpu.clock.max
216210 ± 5% +36.6% 295436 ± 6% sched_debug.cpu.clock.min
216468 ± 5% +36.6% 295649 ± 6% sched_debug.cpu.clock_task.avg
216487 ± 5% +36.6% 295669 ± 6% sched_debug.cpu.clock_task.max
216210 ± 5% +36.6% 295436 ± 6% sched_debug.cpu.clock_task.min
1593 ± 11% -7.6% 1473 ± 5% sched_debug.cpu.curr->pid.avg
34247 +11.9% 38310 ± 6% sched_debug.cpu.nr_switches.max
2145 ± 2% +19.3% 2559 ± 5% sched_debug.cpu.nr_switches.stddev
31588 +16.2% 36707 ± 6% sched_debug.cpu.sched_count.max
1791 ± 3% +27.7% 2288 ± 7% sched_debug.cpu.sched_count.stddev
13661 +18.6% 16205 ± 8% sched_debug.cpu.sched_goidle.max
838.65 ± 4% +25.7% 1054 ± 9% sched_debug.cpu.sched_goidle.stddev
12457 ± 3% +33.1% 16586 ± 13% sched_debug.cpu.ttwu_count.max
887.04 ± 3% +37.4% 1218 ± 13% sched_debug.cpu.ttwu_count.stddev
264.20 ± 4% +25.9% 332.56 ± 11% sched_debug.cpu.ttwu_local.stddev
216473 ± 5% +36.6% 295655 ± 6% sched_debug.cpu_clk
216473 ± 5% +36.6% 295655 ± 6% sched_debug.ktime
217713 ± 5% +36.3% 296841 ± 6% sched_debug.sched_clk
0.54 ± 4% +0.1 0.67 ± 8% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.60 ± 5% +0.3 1.89 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
1.40 ± 12% +0.9 2.29 ± 17% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
1.60 ± 12% +1.0 2.60 ± 17% perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
1.61 ± 12% +1.0 2.62 ± 17% perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
3.06 ± 6% +1.3 4.36 ± 12% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
3.13 ± 5% +1.3 4.45 ± 12% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
5.58 ± 2% +1.7 7.32 ± 12% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
5.60 ± 2% +1.8 7.36 ± 12% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.07 ± 17% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.10 ± 4% +0.0 0.13 ± 11% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.10 ± 4% +0.0 0.14 ± 12% perf-profile.children.cycles-pp.vfs_statx
0.09 ± 4% +0.0 0.12 ± 13% perf-profile.children.cycles-pp.SYSC_newstat
0.06 ± 14% +0.0 0.09 ± 24% perf-profile.children.cycles-pp.__vmalloc_node_range
0.11 ± 14% +0.0 0.14 ± 25% perf-profile.children.cycles-pp.tlb_flush_mmu_tlbonly
0.06 ± 63% +0.0 0.10 ± 18% perf-profile.children.cycles-pp.terminate_walk
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.find_vmap_area
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.remove_vm_area
0.00 +0.1 0.07 ± 38% perf-profile.children.cycles-pp.__get_vm_area_node
0.18 ± 11% +0.1 0.26 ± 15% perf-profile.children.cycles-pp.creat
0.58 ± 5% +0.1 0.73 ± 8% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
1.57 ± 11% +1.0 2.52 ± 16% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.66 ± 14% +1.0 2.67 ± 16% perf-profile.children.cycles-pp.tick_do_update_jiffies64
2.28 ± 7% +1.1 3.35 ± 14% perf-profile.children.cycles-pp._raw_spin_lock
3.15 ± 7% +1.3 4.44 ± 12% perf-profile.children.cycles-pp.tick_irq_enter
3.21 ± 6% +1.3 4.53 ± 12% perf-profile.children.cycles-pp.irq_enter
6.39 +1.8 8.22 ± 11% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
6.42 +1.9 8.29 ± 11% perf-profile.children.cycles-pp.apic_timer_interrupt
0.07 ± 13% +0.0 0.08 ± 13% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.12 ± 22% +0.0 0.16 ± 21% perf-profile.self.cycles-pp.do_syscall_64
0.27 ± 8% +0.1 0.35 ± 12% perf-profile.self.cycles-pp.tick_irq_enter
0.55 ± 4% +0.1 0.69 ± 9% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
1.57 ± 11% +1.0 2.52 ± 16% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/pft/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_error_entry/0x
1:4 -25% :4 dmesg.WARNING:at#for_ip_ret_from_intr/0x
%stddev %change %stddev
\ | \
230070 -42.7% 131828 pft.faults_per_sec_per_cpu
24154 -4.1% 23160 ± 3% pft.time.involuntary_context_switches
7817260 -32.7% 5262150 pft.time.minor_page_faults
4048 +18.0% 4778 pft.time.percent_of_cpu_this_job_got
11917 +16.9% 13933 pft.time.system_time
250.86 +73.7% 435.77 ± 10% pft.time.user_time
142515 +59.4% 227173 ± 5% pft.time.voluntary_context_switches
7334244 ± 7% -64.3% 2618153 ± 65% numa-numastat.node1.local_node
7344576 ± 7% -64.2% 2628307 ± 65% numa-numastat.node1.numa_hit
59.50 -11.3% 52.75 vmstat.cpu.id
39.75 +20.1% 47.75 vmstat.procs.r
59.92 -6.7 53.19 mpstat.cpu.idle%
0.00 ± 26% +0.0 0.01 ± 7% mpstat.cpu.soft%
39.08 +6.1 45.22 mpstat.cpu.sys%
0.99 +0.6 1.58 ± 10% mpstat.cpu.usr%
4673420 ± 11% +30.2% 6084101 ± 13% cpuidle.C1.time
2.712e+08 ± 5% +30.1% 3.529e+08 ± 2% cpuidle.C1E.time
624111 ± 2% +36.3% 850658 ± 4% cpuidle.C1E.usage
1.791e+10 -12.1% 1.574e+10 cpuidle.C6.time
18607622 -13.1% 16169546 cpuidle.C6.usage
1.361e+08 ± 19% +118.3% 2.971e+08 ± 13% cpuidle.POLL.time
5103004 ± 2% +26.6% 6459744 meminfo.Active
5067740 ± 2% +27.0% 6436925 meminfo.Active(anon)
756326 ± 8% +124.1% 1695158 ± 25% meminfo.AnonHugePages
4887369 ± 3% +26.7% 6193791 meminfo.AnonPages
5.41e+08 +19.0% 6.437e+08 meminfo.Committed_AS
6892 +33.2% 9179 ± 10% meminfo.PageTables
574999 ± 9% +115.7% 1240273 ± 28% numa-vmstat.node0.nr_active_anon
554302 ± 7% +116.5% 1200168 ± 29% numa-vmstat.node0.nr_anon_pages
574609 ± 9% +115.8% 1240281 ± 28% numa-vmstat.node0.nr_zone_active_anon
10839 ± 9% -25.3% 8092 ± 8% numa-vmstat.node1.nr_slab_reclaimable
4140994 ± 7% -49.5% 2091526 ± 62% numa-vmstat.node1.numa_hit
3996598 ± 7% -51.3% 1947593 ± 68% numa-vmstat.node1.numa_local
2314942 ± 12% +115.3% 4985067 ± 27% numa-meminfo.node0.Active
2297543 ± 13% +116.5% 4973510 ± 27% numa-meminfo.node0.Active(anon)
359259 ± 13% +249.8% 1256563 ± 42% numa-meminfo.node0.AnonHugePages
2191741 ± 12% +118.8% 4794936 ± 28% numa-meminfo.node0.AnonPages
3351366 ± 9% +80.5% 6050637 ± 27% numa-meminfo.node0.MemUsed
3133 ± 9% +91.1% 5990 ± 32% numa-meminfo.node0.PageTables
43356 ± 9% -25.3% 32370 ± 8% numa-meminfo.node1.SReclaimable
1177 +15.6% 1360 turbostat.Avg_MHz
42.29 +6.6 48.84 turbostat.Busy%
602767 ± 3% +35.6% 817443 ± 2% turbostat.C1E
0.86 ± 6% +0.3 1.11 turbostat.C1E%
18604760 -13.1% 16166704 turbostat.C6
57.00 -6.9 50.14 turbostat.C6%
56.09 -12.8% 48.90 ± 4% turbostat.CPU%c1
255.40 -8.2% 234.42 turbostat.PkgWatt
65.03 -17.0% 53.98 turbostat.RAMWatt
6202 ± 2% -28.7% 4424 ± 6% slabinfo.eventpoll_epi.active_objs
6202 ± 2% -28.7% 4424 ± 6% slabinfo.eventpoll_epi.num_objs
5427 ± 2% -28.7% 3871 ± 6% slabinfo.eventpoll_pwq.active_objs
5427 ± 2% -28.7% 3871 ± 6% slabinfo.eventpoll_pwq.num_objs
14969 ± 4% -21.6% 11739 ± 3% slabinfo.files_cache.active_objs
15026 ± 4% -21.3% 11821 ± 3% slabinfo.files_cache.num_objs
5869 -25.8% 4356 ± 11% slabinfo.mm_struct.active_objs
5933 -25.8% 4402 ± 12% slabinfo.mm_struct.num_objs
6573 ± 2% -19.3% 5303 ± 12% slabinfo.sighand_cache.active_objs
6602 ± 2% -19.1% 5339 ± 12% slabinfo.sighand_cache.num_objs
1424 ± 6% -17.5% 1175 ± 3% slabinfo.skbuff_fclone_cache.active_objs
1424 ± 6% -17.5% 1175 ± 3% slabinfo.skbuff_fclone_cache.num_objs
1804 ± 4% -12.8% 1573 ± 8% slabinfo.task_group.active_objs
1804 ± 4% -12.8% 1573 ± 8% slabinfo.task_group.num_objs
74.90 -7.1 67.78 perf-stat.cache-miss-rate%
1.917e+10 -30.7% 1.329e+10 ± 2% perf-stat.cache-misses
2.559e+10 -23.4% 1.96e+10 ± 2% perf-stat.cache-references
3.608e+13 +15.1% 4.153e+13 perf-stat.cpu-cycles
10929 -21.3% 8606 ± 15% perf-stat.cpu-migrations
0.08 -0.0 0.05 ± 13% perf-stat.dTLB-store-miss-rate%
1.187e+08 -48.3% 61361329 ± 7% perf-stat.dTLB-store-misses
1.41e+11 -12.2% 1.238e+11 ± 6% perf-stat.dTLB-stores
2.149e+08 -30.2% 1.5e+08 ± 7% perf-stat.iTLB-loads
8598974 -29.6% 6054449 perf-stat.minor-faults
1.421e+08 ± 3% -21.4% 1.117e+08 ± 6% perf-stat.node-load-misses
1.5e+09 -27.5% 1.088e+09 perf-stat.node-loads
0.85 ± 70% -0.5 0.38 ± 13% perf-stat.node-store-miss-rate%
98747708 ± 70% -67.5% 32046709 ± 13% perf-stat.node-store-misses
1.146e+10 -26.0% 8.481e+09 perf-stat.node-stores
8598995 -29.6% 6054463 perf-stat.page-faults
1300724 ± 2% +22.6% 1594074 proc-vmstat.nr_active_anon
1256932 ± 2% +22.5% 1539865 proc-vmstat.nr_anon_pages
406.25 ± 8% +79.9% 731.00 ± 14% proc-vmstat.nr_anon_transparent_hugepages
1461780 -2.2% 1429319 proc-vmstat.nr_dirty_background_threshold
2927135 -2.2% 2862134 proc-vmstat.nr_dirty_threshold
14564117 -1.9% 14293825 proc-vmstat.nr_free_pages
8545 +6.6% 9109 ± 2% proc-vmstat.nr_mapped
1771 ± 2% +23.5% 2188 ± 5% proc-vmstat.nr_page_table_pages
1300723 ± 2% +22.6% 1594070 proc-vmstat.nr_zone_active_anon
13839779 -30.8% 9576432 proc-vmstat.numa_hit
13819218 -30.9% 9555852 proc-vmstat.numa_local
2.725e+09 -32.6% 1.837e+09 proc-vmstat.pgalloc_normal
8634919 -29.6% 6080480 proc-vmstat.pgfault
2.725e+09 -32.6% 1.836e+09 proc-vmstat.pgfree
5304368 -32.6% 3574443 proc-vmstat.thp_deferred_split_page
5305872 -32.6% 3575998 proc-vmstat.thp_fault_alloc
402915 +1.3% 408011 interrupts.CAL:Function_call_interrupts
186176 ± 6% -21.8% 145644 ± 12% interrupts.CPU0.RES:Rescheduling_interrupts
12448 ± 16% -71.5% 3551 ± 58% interrupts.CPU2.RES:Rescheduling_interrupts
334.25 ± 58% +181.7% 941.50 ± 24% interrupts.CPU21.RES:Rescheduling_interrupts
202.75 ± 50% +1023.3% 2277 ± 98% interrupts.CPU22.RES:Rescheduling_interrupts
138.50 ± 59% +1086.3% 1643 ± 36% interrupts.CPU23.RES:Rescheduling_interrupts
179.00 ± 55% +910.3% 1808 ±106% interrupts.CPU25.RES:Rescheduling_interrupts
485.50 ± 29% +8854.8% 43475 ± 37% interrupts.CPU26.RES:Rescheduling_interrupts
248.75 ± 38% +1876.2% 4915 ± 52% interrupts.CPU27.RES:Rescheduling_interrupts
116.75 ± 12% +297.9% 464.50 ± 11% interrupts.CPU29.RES:Rescheduling_interrupts
8061 ± 28% -54.5% 3669 ± 61% interrupts.CPU3.RES:Rescheduling_interrupts
3674 ± 6% -57.9% 1546 ± 95% interrupts.CPU31.NMI:Non-maskable_interrupts
3674 ± 6% -57.9% 1546 ± 95% interrupts.CPU31.PMI:Performance_monitoring_interrupts
79.25 ± 40% -56.8% 34.25 ± 97% interrupts.CPU50.RES:Rescheduling_interrupts
86.75 ± 56% -77.8% 19.25 ± 71% interrupts.CPU51.RES:Rescheduling_interrupts
669.25 ± 78% -73.4% 177.75 ±155% interrupts.CPU53.RES:Rescheduling_interrupts
498.25 ± 80% -95.1% 24.50 ± 37% interrupts.CPU55.RES:Rescheduling_interrupts
238.00 ± 58% -82.7% 41.25 ± 81% interrupts.CPU58.RES:Rescheduling_interrupts
278.50 ± 28% -92.8% 20.00 ± 52% interrupts.CPU59.RES:Rescheduling_interrupts
256.75 ± 47% -90.4% 24.75 ± 50% interrupts.CPU60.RES:Rescheduling_interrupts
225.25 ± 71% -91.2% 19.75 ± 27% interrupts.CPU61.RES:Rescheduling_interrupts
236.00 ± 92% -88.2% 27.75 ± 80% interrupts.CPU63.RES:Rescheduling_interrupts
171.25 ± 73% -91.2% 15.00 ± 22% interrupts.CPU64.RES:Rescheduling_interrupts
239.00 ± 36% -76.4% 56.50 ±130% interrupts.CPU65.RES:Rescheduling_interrupts
196.75 ± 51% -89.8% 20.00 ± 15% interrupts.CPU66.RES:Rescheduling_interrupts
196.50 ± 53% -78.1% 43.00 ±111% interrupts.CPU70.RES:Rescheduling_interrupts
191.00 ± 45% -90.2% 18.75 ± 45% interrupts.CPU71.RES:Rescheduling_interrupts
203.25 ± 81% -93.7% 12.75 ± 23% interrupts.CPU72.RES:Rescheduling_interrupts
103.25 ± 59% -78.9% 21.75 ± 24% interrupts.CPU73.RES:Rescheduling_interrupts
111.25 ± 79% -80.9% 21.25 ± 76% interrupts.CPU74.RES:Rescheduling_interrupts
93.50 ±106% -78.1% 20.50 ± 67% interrupts.CPU78.RES:Rescheduling_interrupts
400.50 ±155% -97.0% 12.00 ± 21% interrupts.CPU81.RES:Rescheduling_interrupts
347.50 ±156% -96.3% 13.00 ± 75% interrupts.CPU82.RES:Rescheduling_interrupts
285.00 ±149% -96.7% 9.50 ± 49% interrupts.CPU83.RES:Rescheduling_interrupts
265.00 ±136% -94.2% 15.25 ± 48% interrupts.CPU84.RES:Rescheduling_interrupts
153.50 ±145% -89.7% 15.75 ± 63% interrupts.CPU85.RES:Rescheduling_interrupts
167.00 ±101% -91.0% 15.00 ± 54% interrupts.CPU87.RES:Rescheduling_interrupts
81.50 ± 79% -87.4% 10.25 ± 78% interrupts.CPU88.RES:Rescheduling_interrupts
114.00 ± 92% -79.2% 23.75 ±103% interrupts.CPU98.RES:Rescheduling_interrupts
54567 ± 11% +54.4% 84277 ± 12% sched_debug.cfs_rq:/.exec_clock.avg
102845 ± 10% +41.8% 145786 ± 2% sched_debug.cfs_rq:/.exec_clock.max
12134 ± 15% +286.2% 46865 ± 36% sched_debug.cfs_rq:/.exec_clock.stddev
190887 ± 88% -83.1% 32327 ± 43% sched_debug.cfs_rq:/.load.max
26347 ± 56% -61.1% 10240 ± 19% sched_debug.cfs_rq:/.load.stddev
2704219 ± 11% +61.5% 4367328 ± 12% sched_debug.cfs_rq:/.min_vruntime.avg
4650229 ± 8% +60.8% 7478802 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
546421 ± 15% +343.4% 2422973 ± 37% sched_debug.cfs_rq:/.min_vruntime.stddev
0.41 ± 4% +38.1% 0.56 ± 8% sched_debug.cfs_rq:/.nr_running.avg
1.65 ± 14% +40.8% 2.32 ± 21% sched_debug.cfs_rq:/.nr_spread_over.avg
1.07 ± 23% +49.1% 1.59 ± 20% sched_debug.cfs_rq:/.nr_spread_over.stddev
8.96 ± 5% +21.6% 10.90 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
12.33 ± 22% -28.3% 8.84 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
188871 ± 89% -83.6% 30944 ± 47% sched_debug.cfs_rq:/.runnable_weight.max
26233 ± 57% -61.0% 10219 ± 20% sched_debug.cfs_rq:/.runnable_weight.stddev
3417321 ± 11% +37.6% 4701239 ± 20% sched_debug.cfs_rq:/.spread0.max
-122012 +1597.6% -2071325 sched_debug.cfs_rq:/.spread0.min
546405 ± 15% +343.4% 2422990 ± 37% sched_debug.cfs_rq:/.spread0.stddev
425.53 ± 5% +34.3% 571.59 ± 6% sched_debug.cfs_rq:/.util_avg.avg
172409 ± 17% -51.9% 83000 ± 34% sched_debug.cpu.avg_idle.min
15.80 ± 35% -55.0% 7.11 ± 31% sched_debug.cpu.clock.stddev
15.80 ± 35% -55.0% 7.11 ± 31% sched_debug.cpu.clock_task.stddev
11.99 ± 20% -22.2% 9.34 ± 4% sched_debug.cpu.cpu_load[0].stddev
25155 ± 9% -18.1% 20603 sched_debug.cpu.curr->pid.max
11971 ± 8% -16.4% 10012 ± 2% sched_debug.cpu.curr->pid.stddev
139384 ± 77% -46.1% 75152 ±117% sched_debug.cpu.load.max
110514 ± 11% +33.6% 147689 ± 4% sched_debug.cpu.nr_load_updates.max
9429 ± 13% +240.3% 32086 ± 33% sched_debug.cpu.nr_load_updates.stddev
6703 ± 11% +31.8% 8832 ± 9% sched_debug.cpu.nr_switches.avg
42299 ± 4% +297.2% 168004 ± 44% sched_debug.cpu.nr_switches.max
6781 ± 7% +161.4% 17726 ± 37% sched_debug.cpu.nr_switches.stddev
16.50 ± 22% +35.1% 22.29 ± 9% sched_debug.cpu.nr_uninterruptible.max
5906 ± 12% +37.3% 8110 ± 8% sched_debug.cpu.sched_count.avg
41231 +301.8% 165649 ± 45% sched_debug.cpu.sched_count.max
6649 ± 4% +162.1% 17430 ± 38% sched_debug.cpu.sched_count.stddev
2794 ± 14% +37.8% 3849 ± 11% sched_debug.cpu.sched_goidle.avg
19770 ± 5% +318.5% 82741 ± 45% sched_debug.cpu.sched_goidle.max
3237 ± 7% +168.2% 8684 ± 39% sched_debug.cpu.sched_goidle.stddev
2707 ± 14% +42.0% 3845 ± 10% sched_debug.cpu.ttwu_count.avg
24144 ± 4% +222.3% 77807 ± 43% sched_debug.cpu.ttwu_count.max
3710 ± 6% +149.1% 9243 ± 35% sched_debug.cpu.ttwu_count.stddev
890.11 ± 12% +63.4% 1454 ± 3% sched_debug.cpu.ttwu_local.avg
3405 ± 20% +65.9% 5649 ± 9% sched_debug.cpu.ttwu_local.max
599.71 ± 10% +47.1% 881.99 ± 12% sched_debug.cpu.ttwu_local.stddev
101950 ± 5% -22.2% 79276 ± 13% softirqs.CPU0.SCHED
76082 ± 8% +65.3% 125732 ± 20% softirqs.CPU10.TIMER
87879 ± 13% -33.2% 58674 ± 30% softirqs.CPU102.TIMER
84916 ± 12% -28.8% 60445 ± 28% softirqs.CPU103.TIMER
79117 ± 7% +60.4% 126925 ± 19% softirqs.CPU11.TIMER
80478 ± 7% +57.9% 127096 ± 19% softirqs.CPU12.TIMER
79602 ± 8% +58.9% 126500 ± 19% softirqs.CPU13.TIMER
76463 ± 9% +63.9% 125308 ± 21% softirqs.CPU14.TIMER
71283 ± 15% +76.9% 126120 ± 21% softirqs.CPU15.TIMER
75177 ± 10% +71.9% 129197 ± 19% softirqs.CPU16.TIMER
75260 ± 13% +69.9% 127848 ± 20% softirqs.CPU17.TIMER
78227 ± 14% +49.7% 117133 ± 28% softirqs.CPU18.TIMER
76725 ± 15% +60.8% 123342 ± 24% softirqs.CPU19.TIMER
14186 ± 12% -60.0% 5675 ± 37% softirqs.CPU2.SCHED
78243 ± 12% +57.5% 123220 ± 23% softirqs.CPU20.TIMER
74069 ± 13% +66.7% 123496 ± 24% softirqs.CPU21.TIMER
72445 ± 8% +71.5% 124276 ± 23% softirqs.CPU24.TIMER
71429 ± 7% +69.8% 121306 ± 23% softirqs.CPU25.TIMER
6838 ± 11% +291.4% 26765 ± 31% softirqs.CPU26.SCHED
84914 ± 11% -44.7% 46972 ± 35% softirqs.CPU26.TIMER
85536 ± 11% -37.9% 53112 ± 36% softirqs.CPU27.TIMER
92319 ± 11% -39.7% 55695 ± 41% softirqs.CPU29.TIMER
11818 ± 11% -50.4% 5865 ± 27% softirqs.CPU3.SCHED
91562 ± 6% -37.7% 57040 ± 41% softirqs.CPU30.TIMER
94268 ± 6% -43.9% 52887 ± 45% softirqs.CPU31.TIMER
93396 ± 4% -40.8% 55292 ± 45% softirqs.CPU32.TIMER
6065 ± 24% +65.3% 10023 ± 19% softirqs.CPU34.SCHED
5558 ± 18% +49.6% 8313 ± 19% softirqs.CPU35.SCHED
5181 ± 19% +91.7% 9930 ± 23% softirqs.CPU36.SCHED
10638 ± 5% -45.1% 5843 ± 17% softirqs.CPU4.SCHED
82385 ± 7% -30.8% 57037 ± 20% softirqs.CPU42.TIMER
85276 ± 10% -42.1% 49344 ± 47% softirqs.CPU45.TIMER
90182 ± 11% -38.8% 55156 ± 54% softirqs.CPU48.TIMER
9407 ± 11% -33.4% 6268 ± 36% softirqs.CPU5.SCHED
86739 ± 5% +50.2% 130246 ± 16% softirqs.CPU5.TIMER
90646 ± 10% -40.1% 54319 ± 46% softirqs.CPU51.TIMER
8726 ± 21% -54.2% 3998 ± 40% softirqs.CPU55.SCHED
77984 ± 6% +59.3% 124232 ± 21% softirqs.CPU55.TIMER
8399 ± 18% -53.5% 3905 ± 25% softirqs.CPU57.SCHED
8031 ± 17% -39.5% 4859 ± 35% softirqs.CPU58.SCHED
77083 ± 15% +56.7% 120805 ± 18% softirqs.CPU58.TIMER
8371 ± 17% -50.9% 4107 ± 37% softirqs.CPU59.SCHED
80820 ± 10% +54.3% 124710 ± 21% softirqs.CPU59.TIMER
84277 ± 10% +51.3% 127533 ± 11% softirqs.CPU6.TIMER
77675 ± 7% +57.8% 122547 ± 19% softirqs.CPU60.TIMER
79746 ± 7% +56.4% 124719 ± 19% softirqs.CPU61.TIMER
75401 ± 16% +64.4% 123975 ± 21% softirqs.CPU62.TIMER
80622 ± 9% +54.4% 124509 ± 20% softirqs.CPU63.TIMER
78498 ± 7% +57.4% 123570 ± 22% softirqs.CPU64.TIMER
78854 ± 11% +58.7% 125157 ± 19% softirqs.CPU65.TIMER
78398 ± 10% +56.5% 122679 ± 22% softirqs.CPU66.TIMER
70518 ± 14% +77.2% 124944 ± 21% softirqs.CPU67.TIMER
76676 ± 18% +65.8% 127094 ± 20% softirqs.CPU68.TIMER
79804 ± 13% +59.8% 127489 ± 19% softirqs.CPU69.TIMER
87607 ± 7% +47.0% 128769 ± 16% softirqs.CPU7.TIMER
72877 ± 11% +64.6% 119922 ± 24% softirqs.CPU74.TIMER
72901 ± 8% +69.5% 123554 ± 22% softirqs.CPU77.TIMER
89804 ± 9% -49.5% 45318 ± 45% softirqs.CPU78.TIMER
83519 ± 4% +44.1% 120383 ± 16% softirqs.CPU8.TIMER
88956 ± 14% -37.8% 55317 ± 39% softirqs.CPU81.TIMER
85839 ± 10% -40.4% 51145 ± 51% softirqs.CPU83.TIMER
5980 ± 25% +65.2% 9881 ± 13% softirqs.CPU86.SCHED
5232 ± 15% +68.0% 8790 ± 17% softirqs.CPU87.SCHED
79808 ± 6% +57.2% 125474 ± 20% softirqs.CPU9.TIMER
19.41 ± 4% -16.0 3.44 ± 9% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.calltrace.cycles-pp.secondary_startup_64
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.77 ± 2% -15.2 5.61 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.81 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.23 ± 2% +0.2 1.46 ± 2% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
1.22 ± 2% +0.2 1.45 ± 2% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
0.59 ± 5% +0.4 0.95 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
1.39 +0.5 1.89 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.39 +0.5 1.89 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.37 +0.5 1.87 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.37 +0.5 1.88 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.33 +0.5 1.84 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.34 +0.5 1.85 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.45 ± 2% +0.5 2.00 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.93 ± 2% +0.6 1.50 ± 3% perf-profile.calltrace.cycles-pp.clear_huge_page
0.28 ±173% +1.0 1.25 ± 44% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.71 ± 2% +1.0 2.73 perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.70 ± 9% +1.1 1.81 ± 2% perf-profile.calltrace.cycles-pp.clear_page_erms
0.83 ± 7% +1.8 2.62 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
0.83 ± 7% +1.8 2.63 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
2.51 ± 2% +2.2 4.69 ± 4% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
2.52 ± 2% +2.2 4.71 ± 4% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
61.29 +8.6 69.94 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
69.75 +10.3 80.06 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
72.75 +12.6 85.32 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.78 +12.6 85.35 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.64 +12.6 85.23 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
72.84 +12.6 85.45 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
72.84 +12.6 85.45 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
72.84 +12.6 85.47 perf-profile.calltrace.cycles-pp.page_fault
19.73 ± 3% -16.2 3.58 ± 10% perf-profile.children.cycles-pp.intel_idle
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.secondary_startup_64
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.cpu_startup_entry
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.do_idle
21.16 ± 2% -15.3 5.82 ± 17% perf-profile.children.cycles-pp.cpuidle_enter_state
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.children.cycles-pp.start_secondary
0.37 ± 18% -0.2 0.18 ± 74% perf-profile.children.cycles-pp.start_kernel
0.10 ± 15% -0.1 0.04 ±107% perf-profile.children.cycles-pp.read
0.18 ± 6% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 7% +0.0 0.08 perf-profile.children.cycles-pp.__list_del_entry_valid
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__lru_cache_add
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.07 ± 13% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.cmd_stat
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.__run_perf_stat
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.process_interval
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.read_counters
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.__read_nocancel
0.06 ± 15% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.perf_event_read
0.06 ± 20% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.perf_read
0.06 ± 15% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.smp_call_function_single
0.03 ±100% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.09 ± 9% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.00 +0.1 0.05 perf-profile.children.cycles-pp.___perf_sw_event
0.06 ± 20% +0.1 0.11 ± 25% perf-profile.children.cycles-pp.ktime_get
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__perf_sw_event
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.__perf_event_read_value
0.13 ± 6% +0.1 0.23 ± 5% perf-profile.children.cycles-pp.__put_compound_page
0.40 +0.1 0.50 perf-profile.children.cycles-pp.rcu_all_qs
0.11 ± 7% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.__page_cache_release
0.03 ±100% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.free_transhuge_page
0.22 ± 6% +0.2 0.42 perf-profile.children.cycles-pp.free_one_page
1.23 ± 2% +0.2 1.46 ± 2% perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.23 ± 2% +0.2 1.47 ± 2% perf-profile.children.cycles-pp.release_pages
0.06 ± 11% +0.3 0.33 ± 11% perf-profile.children.cycles-pp.deferred_split_huge_page
0.09 ± 4% +0.3 0.37 ± 9% perf-profile.children.cycles-pp.zap_huge_pmd
1.38 +0.5 1.88 perf-profile.children.cycles-pp.__wake_up_parent
1.38 +0.5 1.88 perf-profile.children.cycles-pp.do_group_exit
1.38 +0.5 1.88 perf-profile.children.cycles-pp.do_exit
1.37 +0.5 1.88 perf-profile.children.cycles-pp.mmput
1.37 +0.5 1.88 perf-profile.children.cycles-pp.exit_mmap
1.33 ± 2% +0.5 1.84 perf-profile.children.cycles-pp.unmap_page_range
1.34 +0.5 1.85 perf-profile.children.cycles-pp.unmap_vmas
1.75 +0.6 2.37 perf-profile.children.cycles-pp._cond_resched
0.54 ± 63% +0.7 1.25 ± 44% perf-profile.children.cycles-pp.poll_idle
2.31 ± 3% +1.4 3.69 perf-profile.children.cycles-pp.___might_sleep
2.61 ± 2% +2.2 4.83 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
2.62 ± 2% +2.2 4.85 ± 4% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.06 ± 6% +2.3 3.36 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.69 ± 11% +2.5 4.17 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
62.42 +9.8 72.18 perf-profile.children.cycles-pp.clear_page_erms
70.69 +10.9 81.56 perf-profile.children.cycles-pp.clear_huge_page
72.77 +12.6 85.33 perf-profile.children.cycles-pp.__handle_mm_fault
72.79 +12.6 85.36 perf-profile.children.cycles-pp.handle_mm_fault
72.64 +12.6 85.23 perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
72.86 +12.6 85.47 perf-profile.children.cycles-pp.__do_page_fault
72.86 +12.6 85.47 perf-profile.children.cycles-pp.do_page_fault
72.86 +12.6 85.48 perf-profile.children.cycles-pp.page_fault
19.73 ± 3% -16.2 3.58 ± 10% perf-profile.self.cycles-pp.intel_idle
0.78 ± 3% -0.2 0.60 ± 6% perf-profile.self.cycles-pp.__free_pages_ok
0.18 ± 6% -0.0 0.14 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.__list_del_entry_valid
0.06 ± 15% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.smp_call_function_single
0.03 ±102% +0.1 0.10 ± 29% perf-profile.self.cycles-pp.ktime_get
0.39 +0.1 0.49 perf-profile.self.cycles-pp.rcu_all_qs
1.62 +0.3 1.97 ± 2% perf-profile.self.cycles-pp.get_page_from_freelist
1.50 +0.6 2.08 perf-profile.self.cycles-pp._cond_resched
6.08 +0.7 6.78 ± 2% perf-profile.self.cycles-pp.clear_huge_page
0.54 ± 64% +0.7 1.25 ± 44% perf-profile.self.cycles-pp.poll_idle
2.29 ± 3% +1.4 3.66 perf-profile.self.cycles-pp.___might_sleep
1.69 ± 11% +2.5 4.17 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
61.81 +9.9 71.69 perf-profile.self.cycles-pp.clear_page_erms
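
Note on the profile above: the swing is concentrated in the anonymous huge-page fault path (page_fault -> do_huge_pmd_anonymous_page -> clear_huge_page -> clear_page_erms), which is exactly what pft hammers. For readers who do not know the benchmark, the timed loop is essentially of the shape below -- an illustrative sketch only, not the actual gormanm/pft source; the real benchmark runs many such workers in parallel (nr_task is 50% of the CPUs here):

/*
 * Illustrative pft-style fault loop; not the actual benchmark source.
 * Map a large anonymous region and touch one byte per base page; the
 * first touch of each (huge) page takes a minor fault, and with THP
 * enabled the fault lands in do_huge_pmd_anonymous_page() ->
 * clear_huge_page(), the path dominating the profile above.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 1UL << 30;		/* 1G anonymous mapping */
	long page = sysconf(_SC_PAGESIZE);
	struct rusage r0, r1;
	struct timespec t0, t1;

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	getrusage(RUSAGE_SELF, &r0);
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t off = 0; off < len; off += page)
		p[off] = 1;			/* fault the page in */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	getrusage(RUSAGE_SELF, &r1);

	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.0f faults/sec\n",
	       (double)(r1.ru_minflt - r0.ru_minflt) / secs);
	munmap(p, len);
	return 0;
}

pft.faults_per_sec_per_cpu is that fault rate normalised per CPU, so a scheduler change that moves the workers relative to where their huge pages are allocated and cleared shows up in the headline number directly.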
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase:
50000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
70867 ± 16% -28.8% 50456 ± 6% stream.add_bandwidth_MBps
11358 ± 9% +13.4% 12879 stream.time.minor_page_faults
263.61 ± 16% +36.1% 358.87 stream.time.user_time
9477 ± 14% -22.8% 7312 ± 6% stream.time.voluntary_context_switches
34.65 -2.3% 33.85 ± 2% boot-time.boot
14973 ± 5% -7.9% 13783 ± 6% softirqs.CPU6.TIMER
67.50 -3.7% 65.00 vmstat.cpu.id
31.50 ± 2% +7.9% 34.00 ± 3% vmstat.cpu.us
3619 ± 15% -29.4% 2555 ± 7% vmstat.system.cs
2863126 ± 6% -15.5% 2419307 ± 2% cpuidle.C3.time
8360 ± 6% -15.7% 7050 ± 2% cpuidle.C3.usage
4.188e+08 ± 14% +32.5% 5.548e+08 ± 2% cpuidle.C6.time
428117 ± 14% +31.9% 564534 ± 2% cpuidle.C6.usage
3694 ± 23% +130.5% 8515 ± 73% proc-vmstat.numa_hint_faults
98740 ± 2% +8.8% 107468 ± 3% proc-vmstat.numa_hit
81603 ± 2% +10.7% 90367 ± 4% proc-vmstat.numa_local
48999 ± 5% +25.0% 61261 ± 11% proc-vmstat.pgfault
7862 ± 6% -14.0% 6758 ± 2% turbostat.C3
0.38 ± 17% -0.1 0.24 ± 4% turbostat.C3%
428197 ± 14% +32.2% 566126 ± 2% turbostat.C6
19.93 ± 28% +53.7% 30.64 ± 5% turbostat.CPU%c6
960324 ± 12% +23.7% 1187618 turbostat.IRQ
2392 ± 3% -26.4% 1760 ± 8% slabinfo.eventpoll_epi.active_objs
2392 ± 3% -26.4% 1760 ± 8% slabinfo.eventpoll_epi.num_objs
4186 ± 3% -26.4% 3080 ± 8% slabinfo.eventpoll_pwq.active_objs
4186 ± 3% -26.4% 3080 ± 8% slabinfo.eventpoll_pwq.num_objs
650.00 ± 7% -13.2% 564.50 ± 3% slabinfo.file_lock_cache.active_objs
650.00 ± 7% -13.2% 564.50 ± 3% slabinfo.file_lock_cache.num_objs
28794 ± 14% -24.0% 21889 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
4667 ± 33% -60.6% 1837 ± 65% sched_debug.cfs_rq:/.min_vruntime.min
3081 ± 24% -73.8% 808.08 ±142% sched_debug.cfs_rq:/.spread0.avg
-2493 +70.7% -4255 sched_debug.cfs_rq:/.spread0.min
295.31 ± 3% -16.9% 245.52 ± 11% sched_debug.cfs_rq:/.util_avg.stddev
0.00 ± 8% +19.3% 0.00 ± 7% sched_debug.cpu.next_balance.stddev
240.50 ± 4% -14.9% 204.75 ± 11% sched_debug.cpu.nr_switches.min
2.62 ± 3% +32.0% 3.46 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
3.769e+10 ± 5% -7.8% 3.476e+10 ± 4% perf-stat.branch-instructions
9.742e+09 +6.0% 1.032e+10 ± 4% perf-stat.cache-references
28360 ± 6% -14.5% 24239 ± 8% perf-stat.context-switches
4.31 ± 20% +40.5% 6.06 perf-stat.cpi
8.342e+11 ± 17% +34.0% 1.118e+12 ± 4% perf-stat.cpu-cycles
451.50 ± 10% -27.8% 326.00 ± 8% perf-stat.cpu-migrations
0.02 ± 19% +0.0 0.03 ± 8% perf-stat.dTLB-load-miss-rate%
9450196 ± 21% +41.5% 13368418 ± 4% perf-stat.dTLB-load-misses
0.24 ± 17% -31.3% 0.17 perf-stat.ipc
47611 ± 7% +22.9% 58520 ± 10% perf-stat.minor-faults
47365 ± 6% +23.6% 58532 ± 10% perf-stat.page-faults
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +6.6 6.55 ± 54% perf-profile.calltrace.cycles-pp.kstat_irqs_cpu.show_interrupts.seq_read.proc_reg_read.__vfs_read
1.19 ±173% +6.7 7.84 ± 43% perf-profile.calltrace.cycles-pp.read
1.19 ±173% +10.0 11.20 ± 56% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.19 ±173% +10.0 11.20 ± 56% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.78 ±173% +10.2 11.99 ± 52% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.__vfs_read.vfs_read
2.38 ±173% +14.5 16.87 ± 38% perf-profile.calltrace.cycles-pp.proc_reg_read.__vfs_read.vfs_read.sys_read.do_syscall_64
2.38 ±173% +14.5 16.87 ± 38% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.__vfs_read.vfs_read.sys_read
2.38 ±173% +15.6 17.96 ± 38% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +6.5 6.55 ± 54% perf-profile.children.cycles-pp.kstat_irqs_cpu
1.19 ±173% +6.7 7.84 ± 43% perf-profile.children.cycles-pp.read
1.78 ±173% +10.2 11.98 ± 52% perf-profile.children.cycles-pp.show_interrupts
2.38 ±173% +14.5 16.87 ± 38% perf-profile.children.cycles-pp.proc_reg_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.sys_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.vfs_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.__vfs_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.seq_read
17.64 ± 77% +28.9 46.59 ± 26% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
17.64 ± 77% +28.9 46.59 ± 26% perf-profile.children.cycles-pp.do_syscall_64
8103 ± 15% +30.9% 10608 interrupts.CPU0.LOC:Local_timer_interrupts
8095 ± 15% +30.5% 10561 interrupts.CPU1.LOC:Local_timer_interrupts
8078 ± 15% +31.3% 10610 interrupts.CPU10.LOC:Local_timer_interrupts
8026 ± 14% +32.0% 10598 interrupts.CPU11.LOC:Local_timer_interrupts
8103 ± 15% +30.9% 10603 interrupts.CPU12.LOC:Local_timer_interrupts
8070 ± 15% +31.3% 10593 interrupts.CPU13.LOC:Local_timer_interrupts
8068 ± 15% +32.4% 10678 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
8067 ± 15% +31.5% 10609 interrupts.CPU15.LOC:Local_timer_interrupts
8103 ± 15% +30.7% 10588 interrupts.CPU16.LOC:Local_timer_interrupts
8192 ± 16% +29.3% 10596 interrupts.CPU17.LOC:Local_timer_interrupts
8099 ± 15% +31.2% 10628 interrupts.CPU18.LOC:Local_timer_interrupts
8061 ± 15% +31.6% 10604 interrupts.CPU19.LOC:Local_timer_interrupts
8114 ± 15% +30.8% 10610 interrupts.CPU2.LOC:Local_timer_interrupts
8086 ± 15% +31.2% 10610 interrupts.CPU20.LOC:Local_timer_interrupts
8088 ± 15% +31.4% 10629 interrupts.CPU21.LOC:Local_timer_interrupts
8064 ± 16% +31.3% 10591 interrupts.CPU22.LOC:Local_timer_interrupts
8069 ± 15% +30.5% 10530 interrupts.CPU23.LOC:Local_timer_interrupts
8079 ± 15% +31.3% 10611 interrupts.CPU24.LOC:Local_timer_interrupts
8089 ± 15% +31.0% 10598 interrupts.CPU25.LOC:Local_timer_interrupts
8114 ± 15% +30.6% 10597 interrupts.CPU26.LOC:Local_timer_interrupts
8093 ± 15% +31.3% 10630 interrupts.CPU27.LOC:Local_timer_interrupts
8092 ± 15% +30.9% 10589 interrupts.CPU28.LOC:Local_timer_interrupts
8084 ± 15% +31.1% 10599 interrupts.CPU29.LOC:Local_timer_interrupts
8083 ± 15% +30.5% 10547 interrupts.CPU3.LOC:Local_timer_interrupts
8096 ± 15% +31.2% 10624 interrupts.CPU30.LOC:Local_timer_interrupts
8154 ± 15% +31.0% 10680 ± 2% interrupts.CPU31.LOC:Local_timer_interrupts
8129 ± 15% +30.7% 10628 interrupts.CPU32.LOC:Local_timer_interrupts
8096 ± 15% +31.9% 10676 ± 2% interrupts.CPU33.LOC:Local_timer_interrupts
8119 ± 15% +30.8% 10620 interrupts.CPU34.LOC:Local_timer_interrupts
8085 ± 15% +31.2% 10612 interrupts.CPU35.LOC:Local_timer_interrupts
8083 ± 15% +31.2% 10602 interrupts.CPU36.LOC:Local_timer_interrupts
819.25 ± 35% +162.8% 2153 ± 47% interrupts.CPU37.CAL:Function_call_interrupts
8100 ± 15% +31.5% 10655 interrupts.CPU37.LOC:Local_timer_interrupts
32.75 ±173% +4387.0% 1469 ± 71% interrupts.CPU37.TLB:TLB_shootdowns
8085 ± 15% +31.2% 10612 interrupts.CPU38.LOC:Local_timer_interrupts
985.25 ± 24% +124.5% 2211 ± 52% interrupts.CPU39.CAL:Function_call_interrupts
8154 ± 15% +30.3% 10625 interrupts.CPU39.LOC:Local_timer_interrupts
162.75 ±110% +852.4% 1550 ± 74% interrupts.CPU39.TLB:TLB_shootdowns
8171 ± 14% +29.8% 10603 interrupts.CPU4.LOC:Local_timer_interrupts
8088 ± 15% +31.2% 10609 interrupts.CPU40.LOC:Local_timer_interrupts
8091 ± 15% +31.6% 10652 ± 2% interrupts.CPU41.LOC:Local_timer_interrupts
8097 ± 15% +30.9% 10601 interrupts.CPU42.LOC:Local_timer_interrupts
8105 ± 15% +30.7% 10595 ± 2% interrupts.CPU43.LOC:Local_timer_interrupts
8069 ± 15% +31.3% 10593 interrupts.CPU44.LOC:Local_timer_interrupts
8075 ± 15% +31.7% 10631 interrupts.CPU45.LOC:Local_timer_interrupts
8070 ± 15% +31.3% 10596 interrupts.CPU46.LOC:Local_timer_interrupts
8093 ± 15% +31.2% 10617 interrupts.CPU47.LOC:Local_timer_interrupts
8085 ± 15% +30.9% 10586 interrupts.CPU48.LOC:Local_timer_interrupts
8093 ± 15% +31.8% 10668 interrupts.CPU49.LOC:Local_timer_interrupts
2611 ± 26% -41.5% 1526 ± 15% interrupts.CPU5.CAL:Function_call_interrupts
8078 ± 15% +31.1% 10592 interrupts.CPU5.LOC:Local_timer_interrupts
1801 ± 23% -51.2% 878.50 ± 27% interrupts.CPU5.TLB:TLB_shootdowns
8082 ± 15% +31.1% 10596 interrupts.CPU50.LOC:Local_timer_interrupts
8093 ± 15% +31.3% 10628 interrupts.CPU51.LOC:Local_timer_interrupts
8075 ± 15% +31.3% 10600 interrupts.CPU52.LOC:Local_timer_interrupts
8057 ± 15% +31.6% 10602 interrupts.CPU53.LOC:Local_timer_interrupts
8079 ± 15% +30.8% 10566 interrupts.CPU54.LOC:Local_timer_interrupts
8068 ± 15% +31.3% 10596 interrupts.CPU55.LOC:Local_timer_interrupts
8085 ± 15% +31.0% 10593 interrupts.CPU56.LOC:Local_timer_interrupts
8059 ± 15% +31.6% 10605 interrupts.CPU57.LOC:Local_timer_interrupts
8069 ± 15% +31.3% 10599 interrupts.CPU58.LOC:Local_timer_interrupts
8065 ± 15% +31.5% 10608 interrupts.CPU59.LOC:Local_timer_interrupts
8077 ± 15% +31.2% 10600 interrupts.CPU6.LOC:Local_timer_interrupts
8079 ± 15% +31.3% 10607 interrupts.CPU60.LOC:Local_timer_interrupts
8155 ± 15% +30.3% 10628 interrupts.CPU61.LOC:Local_timer_interrupts
8090 ± 15% +31.4% 10633 interrupts.CPU62.LOC:Local_timer_interrupts
8127 ± 16% +30.5% 10603 interrupts.CPU63.LOC:Local_timer_interrupts
8091 ± 15% +31.8% 10664 interrupts.CPU64.LOC:Local_timer_interrupts
8090 ± 16% +31.1% 10604 interrupts.CPU65.LOC:Local_timer_interrupts
8078 ± 15% +32.6% 10714 interrupts.CPU66.LOC:Local_timer_interrupts
8090 ± 15% +31.2% 10611 interrupts.CPU67.LOC:Local_timer_interrupts
8087 ± 15% +31.1% 10606 interrupts.CPU68.LOC:Local_timer_interrupts
8059 ± 15% +31.4% 10588 interrupts.CPU69.LOC:Local_timer_interrupts
7999 ± 16% +32.6% 10610 interrupts.CPU7.LOC:Local_timer_interrupts
8069 ± 15% +31.7% 10625 interrupts.CPU70.LOC:Local_timer_interrupts
8074 ± 15% +31.1% 10586 interrupts.CPU71.LOC:Local_timer_interrupts
8076 ± 15% +31.3% 10602 interrupts.CPU72.LOC:Local_timer_interrupts
8112 ± 16% +30.9% 10618 interrupts.CPU73.LOC:Local_timer_interrupts
8075 ± 15% +31.7% 10635 interrupts.CPU74.LOC:Local_timer_interrupts
8075 ± 15% +31.2% 10595 interrupts.CPU75.LOC:Local_timer_interrupts
8077 ± 15% +31.2% 10598 interrupts.CPU76.LOC:Local_timer_interrupts
8191 ± 14% +30.4% 10682 interrupts.CPU77.LOC:Local_timer_interrupts
8025 ± 16% +32.2% 10609 interrupts.CPU78.LOC:Local_timer_interrupts
8086 ± 15% +31.3% 10615 interrupts.CPU79.LOC:Local_timer_interrupts
8072 ± 15% +31.3% 10600 interrupts.CPU8.LOC:Local_timer_interrupts
3.00 ±137% +1808.3% 57.25 ± 87% interrupts.CPU8.RES:Rescheduling_interrupts
8085 ± 15% +32.1% 10681 interrupts.CPU80.LOC:Local_timer_interrupts
8088 ± 15% +31.2% 10608 interrupts.CPU81.LOC:Local_timer_interrupts
8082 ± 15% +31.5% 10625 interrupts.CPU82.LOC:Local_timer_interrupts
8092 ± 15% +31.2% 10620 interrupts.CPU83.LOC:Local_timer_interrupts
8110 ± 15% +30.7% 10600 interrupts.CPU84.LOC:Local_timer_interrupts
8082 ± 15% +31.4% 10623 interrupts.CPU85.LOC:Local_timer_interrupts
8088 ± 15% +31.0% 10597 interrupts.CPU86.LOC:Local_timer_interrupts
8081 ± 15% +31.2% 10600 interrupts.CPU87.LOC:Local_timer_interrupts
8082 ± 15% +31.1% 10598 interrupts.CPU9.LOC:Local_timer_interrupts
207.25 ±170% +301.3% 831.75 ± 52% interrupts.CPU9.TLB:TLB_shootdowns
711846 ± 15% +31.2% 933907 interrupts.LOC:Local_timer_interrupts
5731 ± 21% +26.1% 7229 ± 16% interrupts.RES:Rescheduling_interrupts
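
For context, stream.add_bandwidth_MBps above is the "Add" kernel of McCalpin's STREAM. Schematically the four kernels are as in the sketch below -- N and the scalar are illustrative, not the job's array_size, and the per-loop timing the real benchmark uses to turn bytes moved into MB/s is omitted. The point is that each loop is purely bandwidth-bound and, with omp=true, the first-touch initialisation decides which NUMA node owns the arrays, so a change in where the scheduler places the OpenMP workers feeds straight into the reported bandwidth:

/*
 * Schematic STREAM kernels (compile with e.g. gcc -O2 -fopenmp).
 * N and the scalar s are illustrative only; the lkp job sets
 * array_size explicitly.
 */
#include <stdio.h>
#include <stdlib.h>

#define N 10000000L

int main(void)
{
	double *a = malloc(N * sizeof(*a));
	double *b = malloc(N * sizeof(*b));
	double *c = malloc(N * sizeof(*c));
	const double s = 3.0;
	long j;

	if (!a || !b || !c)
		return 1;

	#pragma omp parallel for
	for (j = 0; j < N; j++) {	/* init == first touch */
		a[j] = 1.0; b[j] = 2.0; c[j] = 0.0;
	}
	#pragma omp parallel for
	for (j = 0; j < N; j++)		/* Copy */
		c[j] = a[j];
	#pragma omp parallel for
	for (j = 0; j < N; j++)		/* Scale */
		b[j] = s * c[j];
	#pragma omp parallel for
	for (j = 0; j < N; j++)		/* Add -> add_bandwidth_MBps */
		c[j] = a[j] + b[j];
	#pragma omp parallel for
	for (j = 0; j < N; j++)		/* Triad */
		a[j] = b[j] + s * c[j];

	printf("%f\n", a[0] + b[N - 1] + c[N / 2]);	/* keep results live */
	free(a); free(b); free(c);
	return 0;
}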
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase:
10000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
:4 25% 1:4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
68287 -30.6% 47413 ± 3% stream.add_bandwidth_MBps
74377 ± 3% -38.8% 45504 ± 3% stream.copy_bandwidth_MBps
71751 -39.8% 43172 ± 4% stream.scale_bandwidth_MBps
51.82 +51.5% 78.49 ± 3% stream.time.user_time
221.50 ± 45% +431.6% 1177 ± 35% stream.time.voluntary_context_switches
74941 -34.4% 49156 ± 3% stream.triad_bandwidth_MBps
1792 ± 9% +28.3% 2299 ± 14% numa-meminfo.node0.PageTables
447.25 ± 9% +31.8% 589.50 ± 15% numa-vmstat.node0.nr_page_table_pages
983.50 +4.8% 1031 ± 2% proc-vmstat.nr_page_table_pages
18576 ± 24% -29.5% 13093 ± 19% softirqs.CPU44.TIMER
6562 ± 14% -31.8% 4477 ± 44% turbostat.C1
35.21 ± 11% -31.6% 24.09 ± 9% turbostat.CPU%c1
2376 ± 12% -27.3% 1728 ± 10% slabinfo.eventpoll_epi.active_objs
2376 ± 12% -27.3% 1728 ± 10% slabinfo.eventpoll_epi.num_objs
4158 ± 12% -27.3% 3024 ± 10% slabinfo.eventpoll_pwq.active_objs
4158 ± 12% -27.3% 3024 ± 10% slabinfo.eventpoll_pwq.num_objs
3904 ± 5% -9.2% 3544 ± 3% slabinfo.skbuff_head_cache.active_objs
1475 ± 4% -13.1% 1282 ± 10% slabinfo.task_group.active_objs
1475 ± 4% -13.1% 1282 ± 10% slabinfo.task_group.num_objs
40.50 ± 50% +210.5% 125.75 ± 52% interrupts.CPU1.RES:Rescheduling_interrupts
1.00 +7075.0% 71.75 ±141% interrupts.CPU22.RES:Rescheduling_interrupts
150.25 ± 12% -57.4% 64.00 ±100% interrupts.CPU27.TLB:TLB_shootdowns
166.50 ± 19% -61.6% 64.00 ±100% interrupts.CPU28.TLB:TLB_shootdowns
160.50 ± 21% -60.1% 64.00 ±100% interrupts.CPU40.TLB:TLB_shootdowns
174.25 ± 19% -62.7% 65.00 ±100% interrupts.CPU74.TLB:TLB_shootdowns
149.25 ± 5% -57.1% 64.00 ±100% interrupts.CPU82.TLB:TLB_shootdowns
7124 ± 14% -29.4% 5030 ± 3% interrupts.TLB:TLB_shootdowns
4984 ± 45% -78.7% 1060 ± 46% sched_debug.cfs_rq:/.min_vruntime.min
25.89 ± 57% -66.2% 8.74 ±111% sched_debug.cfs_rq:/.removed.load_avg.avg
152.68 ± 29% -57.6% 64.71 ±103% sched_debug.cfs_rq:/.removed.load_avg.stddev
1199 ± 56% -66.4% 402.43 ±111% sched_debug.cfs_rq:/.removed.runnable_sum.avg
7076 ± 29% -57.9% 2981 ±103% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
2404 ± 88% -368.5% -6456 sched_debug.cfs_rq:/.spread0.avg
20903 ± 15% -52.0% 10029 ± 29% sched_debug.cfs_rq:/.spread0.max
-2008 +507.1% -12193 sched_debug.cfs_rq:/.spread0.min
225.00 ± 8% -16.7% 187.50 ± 15% sched_debug.cpu.nr_switches.min
1.356e+10 ± 9% -28.4% 9.705e+09 ± 17% perf-stat.branch-instructions
3.35 ± 4% +69.5% 5.67 ± 9% perf-stat.cpi
1694005 ± 15% -32.0% 1152591 ± 21% perf-stat.iTLB-loads
6.601e+10 ± 9% -24.9% 4.959e+10 ± 17% perf-stat.instructions
0.30 ± 4% -40.5% 0.18 ± 10% perf-stat.ipc
39.73 ± 4% -36.0 3.70 ± 35% perf-stat.node-load-miss-rate%
18391383 ± 6% -90.9% 1670191 ± 34% perf-stat.node-load-misses
27864581 ± 2% +56.6% 43648277 ± 3% perf-stat.node-loads
40.45 ± 4% -38.9 1.51 ± 85% perf-stat.node-store-miss-rate%
44193260 ± 5% -96.7% 1470898 ± 85% perf-stat.node-store-misses
65038165 ± 3% +48.9% 96838822 ± 4% perf-stat.node-stores
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable.cmd_record.run_builtin
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable.cmd_record
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.__ioctl.perf_evlist__disable.cmd_record.run_builtin.main
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.perf_evlist__disable.cmd_record.run_builtin.main.generic_start_main
4.37 ±111% -4.4 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.show_interrupts.seq_read.proc_reg_read.__vfs_read
4.37 ±111% -4.4 0.00 perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.show_interrupts.seq_read.proc_reg_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.seq_printf.show_interrupts.seq_read.proc_reg_read.__vfs_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.seq_vprintf.seq_printf.show_interrupts.seq_read.proc_reg_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.vsnprintf.seq_vprintf.seq_printf.show_interrupts.seq_read
13.76 ± 91% -3.8 10.00 ±173% perf-profile.calltrace.cycles-pp.proc_reg_read.__vfs_read.vfs_read.sys_read.do_syscall_64
13.76 ± 91% -3.8 10.00 ±173% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.__vfs_read.vfs_read.sys_read
12.31 ± 90% -2.3 10.00 ±173% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.__vfs_read.vfs_read
3.51 ±103% -1.2 2.31 ±173% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.51 ±103% -1.2 2.31 ±173% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.27 ±104% +2.7 10.00 ±173% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.27 ±104% +2.7 10.00 ±173% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.seq_printf
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.seq_vprintf
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.vsnprintf
7.68 ±133% -4.9 2.78 ±173% perf-profile.children.cycles-pp.perf_evlist__disable
14.46 ± 83% -4.5 10.00 ±173% perf-profile.children.cycles-pp.seq_read
4.37 ±111% -4.4 0.00 perf-profile.children.cycles-pp.__mutex_lock
4.37 ±111% -4.4 0.00 perf-profile.children.cycles-pp.mutex_spin_on_owner
13.76 ± 91% -3.8 10.00 ±173% perf-profile.children.cycles-pp.proc_reg_read
12.31 ± 90% -2.3 10.00 ±173% perf-profile.children.cycles-pp.show_interrupts
3.51 ±103% -1.2 2.31 ±173% perf-profile.children.cycles-pp.do_filp_open
3.51 ±103% -1.2 2.31 ±173% perf-profile.children.cycles-pp.path_openat
4.37 ±111% -4.4 0.00 perf-profile.self.cycles-pp.mutex_spin_on_owner
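
One reading of the node-* counters in this section (node-load-misses -90.9% and node-store-misses -96.7%, while node-loads and node-stores grow roughly 50%) is that the patched kernel now keeps the OpenMP workers on the node that owns their arrays: accesses become local, yet bandwidth still drops because the workers share one node's memory controllers instead of spreading across both. The local-vs-remote half of that effect can be seen in isolation with a small libnuma toy like the sketch below; node numbers and buffer size are mine, not from the lkp job (link with -lnuma):

/*
 * libnuma placement sketch; illustrative only. Pin the calling
 * thread to one node and stream through a buffer allocated on
 * another -- the access pattern that perf-stat counts as node
 * load/store misses above.
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const size_t len = 256UL << 20;		/* 256M buffer */
	int cpu_node = 0, mem_node = 1;		/* illustrative nodes */

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}
	numa_run_on_node(cpu_node);		/* run here ... */
	char *buf = numa_alloc_onnode(len, mem_node);	/* ... touch there */
	if (!buf)
		return 1;

	memset(buf, 1, len);			/* remote stores */
	volatile long sum = 0;
	for (size_t i = 0; i < len; i += 64)	/* one load per cache line */
		sum += buf[i];			/* remote loads */

	numa_free(buf, len);
	return 0;
}

Setting mem_node equal to cpu_node makes the same loops local, which is the direction the node-miss counters above moved.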
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/pft
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_apic_timer_interrupt/0x
%stddev %change %stddev
\ | \
230747 -42.5% 132607 pft.faults_per_sec_per_cpu
7861494 -33.0% 5265420 pft.time.minor_page_faults
4057 +17.5% 4767 pft.time.percent_of_cpu_this_job_got
11947 +16.2% 13877 pft.time.system_time
249.55 +86.6% 465.61 ± 7% pft.time.user_time
143006 +47.0% 210176 ± 4% pft.time.voluntary_context_switches
6501752 ± 9% -40.5% 3870149 ± 34% numa-numastat.node0.local_node
6515543 ± 9% -40.4% 3881040 ± 34% numa-numastat.node0.numa_hit
59.50 -10.1% 53.50 vmstat.cpu.id
39.75 ± 2% +20.1% 47.75 vmstat.procs.r
5101 ± 2% +31.8% 6723 ± 9% vmstat.system.cs
59.84 -6.1 53.74 mpstat.cpu.idle%
0.00 ± 61% +0.0 0.01 ± 33% mpstat.cpu.soft%
39.14 +5.4 44.54 mpstat.cpu.sys%
0.99 +0.7 1.67 ± 7% mpstat.cpu.usr%
32208 ± 14% +37.3% 44213 ± 4% numa-meminfo.node0.SReclaimable
411790 ± 16% +161.6% 1077435 ± 17% numa-meminfo.node1.AnonHugePages
3769 ± 13% +29.0% 4860 ± 9% numa-meminfo.node1.PageTables
41429 ± 11% -31.3% 28458 ± 7% numa-meminfo.node1.SReclaimable
45429 ± 49% -59.8% 18251 ±108% numa-meminfo.node1.Shmem
14247 -16.1% 11952 ± 4% slabinfo.files_cache.active_objs
14326 -15.5% 12108 ± 4% slabinfo.files_cache.num_objs
5797 -18.6% 4718 ± 8% slabinfo.mm_struct.active_objs
5863 -18.2% 4797 ± 8% slabinfo.mm_struct.num_objs
1193 ± 3% -10.7% 1066 ± 2% slabinfo.nsproxy.active_objs
1193 ± 3% -10.7% 1066 ± 2% slabinfo.nsproxy.num_objs
6180787 ± 3% -15.2% 5242821 ± 14% cpuidle.C1.time
342482 ± 4% +37.0% 469078 ± 7% cpuidle.C1.usage
2.939e+08 ± 3% +31.0% 3.85e+08 ± 3% cpuidle.C1E.time
848558 ± 6% +30.9% 1110683 ± 2% cpuidle.C1E.usage
1.796e+10 -10.1% 1.615e+10 ± 2% cpuidle.C6.time
18678065 -10.9% 16650357 cpuidle.C6.usage
1.116e+08 ± 11% +128.8% 2.554e+08 ± 10% cpuidle.POLL.time
6702 ± 3% +38.5% 9283 cpuidle.POLL.usage
5173027 ± 2% +19.2% 6166000 ± 2% meminfo.Active
5128144 ± 2% +19.4% 6121960 ± 2% meminfo.Active(anon)
733868 +130.3% 1690255 ± 33% meminfo.AnonHugePages
4988775 ± 2% +19.2% 5948484 ± 3% meminfo.AnonPages
5.318e+08 +18.8% 6.319e+08 meminfo.Committed_AS
1082 ± 84% +87.0% 2024 ± 9% meminfo.Mlocked
7070 +26.5% 8941 ± 12% meminfo.PageTables
1083 ± 84% +87.1% 2027 ± 9% meminfo.Unevictable
8051 ± 14% +37.3% 11053 ± 4% numa-vmstat.node0.nr_slab_reclaimable
3703595 ± 7% -43.4% 2095655 ± 15% numa-vmstat.node0.numa_hit
3689371 ± 7% -43.5% 2084541 ± 15% numa-vmstat.node0.numa_local
191.75 ± 16% +164.5% 507.25 ± 34% numa-vmstat.node1.nr_anon_transparent_hugepages
11353 ± 49% -59.8% 4565 ±108% numa-vmstat.node1.nr_shmem
10358 ± 11% -31.3% 7114 ± 7% numa-vmstat.node1.nr_slab_reclaimable
4161642 ± 6% -12.2% 3653839 ± 9% numa-vmstat.node1.numa_hit
4020032 ± 6% -12.7% 3508992 ± 9% numa-vmstat.node1.numa_local
1177 +14.0% 1341 turbostat.Avg_MHz
42.29 +5.9 48.23 turbostat.Busy%
340371 ± 4% +37.3% 467253 ± 7% turbostat.C1
842290 ± 6% +31.3% 1105626 ± 2% turbostat.C1E
0.93 ± 3% +0.3 1.21 ± 4% turbostat.C1E%
18676469 -10.9% 16648172 turbostat.C6
56.92 -6.2 50.68 turbostat.C6%
56.18 -13.0% 48.89 ± 3% turbostat.CPU%c1
255.16 -8.8% 232.68 turbostat.PkgWatt
65.13 -17.8% 53.51 turbostat.RAMWatt
25420 ± 11% -32.7% 17115 ± 18% softirqs.CPU1.SCHED
16168 ± 10% -56.0% 7113 ± 22% softirqs.CPU2.SCHED
91499 ± 10% +18.0% 107985 ± 15% softirqs.CPU29.TIMER
13561 ± 16% -51.4% 6584 ± 16% softirqs.CPU3.SCHED
84774 ± 6% +27.2% 107811 ± 18% softirqs.CPU30.TIMER
82435 ± 8% +24.5% 102670 ± 21% softirqs.CPU31.TIMER
77710 ± 13% +35.6% 105402 ± 20% softirqs.CPU34.TIMER
11043 ± 8% -41.2% 6493 ± 19% softirqs.CPU4.SCHED
87767 ± 4% +22.1% 107187 ± 15% softirqs.CPU48.TIMER
10229 ± 10% -40.4% 6098 ± 21% softirqs.CPU5.SCHED
86042 ± 4% +22.1% 105097 ± 17% softirqs.CPU80.TIMER
86912 ± 5% +23.7% 107494 ± 16% softirqs.CPU81.TIMER
82985 ± 10% +32.6% 110071 ± 13% softirqs.CPU87.TIMER
79387 ± 12% +40.0% 111162 ± 13% softirqs.CPU90.TIMER
82466 ± 13% +39.0% 114662 ± 11% softirqs.CPU93.TIMER
6797 ± 31% -45.9% 3674 ± 34% softirqs.CPU99.SCHED
355628 ± 2% +14.3% 406464 ± 6% softirqs.RCU
8446116 +9.7% 9267049 softirqs.TIMER
1262046 +22.3% 1542932 proc-vmstat.nr_active_anon
1228899 +22.1% 1500715 proc-vmstat.nr_anon_pages
356.25 ± 9% +100.1% 712.75 ± 3% proc-vmstat.nr_anon_transparent_hugepages
1473503 -1.8% 1446646 proc-vmstat.nr_dirty_background_threshold
2950611 -1.8% 2896831 proc-vmstat.nr_dirty_threshold
14627049 -1.8% 14358270 proc-vmstat.nr_free_pages
269.75 ± 84% +87.8% 506.50 ± 8% proc-vmstat.nr_mlock
1758 +21.4% 2136 ± 2% proc-vmstat.nr_page_table_pages
18410 -1.3% 18167 proc-vmstat.nr_slab_reclaimable
52708 -2.6% 51354 proc-vmstat.nr_slab_unreclaimable
270.00 ± 84% +87.8% 507.00 ± 8% proc-vmstat.nr_unevictable
1262043 +22.3% 1542927 proc-vmstat.nr_zone_active_anon
270.00 ± 84% +87.8% 507.00 ± 8% proc-vmstat.nr_zone_unevictable
13918060 -31.1% 9583409 proc-vmstat.numa_hit
13897019 -31.2% 9562209 proc-vmstat.numa_local
2.743e+09 -32.9% 1.841e+09 proc-vmstat.pgalloc_normal
8687624 -30.0% 6084756 proc-vmstat.pgfault
2.742e+09 -32.9% 1.841e+09 proc-vmstat.pgfree
5339231 -32.9% 3584118 proc-vmstat.thp_deferred_split_page
5339481 -32.9% 3584076 proc-vmstat.thp_fault_alloc
4.798e+11 +20.9% 5.802e+11 ± 7% perf-stat.branch-instructions
0.45 ± 4% -0.1 0.37 ± 3% perf-stat.branch-miss-rate%
73.80 -8.9 64.94 perf-stat.cache-miss-rate%
1.921e+10 -30.4% 1.337e+10 ± 2% perf-stat.cache-misses
2.603e+10 -20.9% 2.059e+10 ± 2% perf-stat.cache-references
1527973 ± 2% +34.0% 2047421 ± 9% perf-stat.context-switches
3.625e+13 +15.2% 4.178e+13 perf-stat.cpu-cycles
11029 -14.8% 9402 ± 5% perf-stat.cpu-migrations
5.477e+11 +18.3% 6.481e+11 ± 5% perf-stat.dTLB-loads
0.08 -0.0 0.05 ± 8% perf-stat.dTLB-store-miss-rate%
1.202e+08 -48.5% 61867822 ± 4% perf-stat.dTLB-store-misses
1.521e+11 -11.0% 1.354e+11 ± 4% perf-stat.dTLB-stores
22.62 +10.0 32.67 ± 10% perf-stat.iTLB-load-miss-rate%
63435492 +20.8% 76661143 ± 8% perf-stat.iTLB-load-misses
2.17e+08 -26.9% 1.586e+08 ± 7% perf-stat.iTLB-loads
1.91e+12 +16.2% 2.219e+12 ± 4% perf-stat.instructions
8642807 -30.0% 6049660 perf-stat.minor-faults
8.59 ± 2% -0.7 7.90 ± 6% perf-stat.node-load-miss-rate%
1.413e+08 ± 3% -34.4% 92653965 ± 6% perf-stat.node-load-misses
1.504e+09 -28.1% 1.081e+09 perf-stat.node-loads
1.152e+10 -27.4% 8.37e+09 perf-stat.node-stores
8642829 -30.0% 6049673 perf-stat.page-faults
611362 ± 11% +11.6% 682342 interrupts.CAL:Function_call_interrupts
34040 ± 11% -33.7% 22553 ± 16% interrupts.CPU1.RES:Rescheduling_interrupts
94.00 ±115% -82.7% 16.25 ± 84% interrupts.CPU103.RES:Rescheduling_interrupts
5539 ± 10% +18.1% 6544 interrupts.CPU16.CAL:Function_call_interrupts
5560 ± 9% +18.5% 6589 interrupts.CPU17.CAL:Function_call_interrupts
15417 ± 18% -66.0% 5237 ± 47% interrupts.CPU2.RES:Rescheduling_interrupts
150.00 ± 74% +346.3% 669.50 ± 32% interrupts.CPU23.RES:Rescheduling_interrupts
145.25 ± 65% +430.1% 770.00 ± 61% interrupts.CPU24.RES:Rescheduling_interrupts
522.00 ± 75% +3786.5% 20287 ± 52% interrupts.CPU26.RES:Rescheduling_interrupts
268.00 ± 57% +1782.2% 5044 ± 80% interrupts.CPU27.RES:Rescheduling_interrupts
129.25 ± 48% +713.9% 1052 ± 82% interrupts.CPU28.RES:Rescheduling_interrupts
101.75 ± 31% +397.8% 506.50 ± 41% interrupts.CPU29.RES:Rescheduling_interrupts
9029 ± 10% -72.3% 2498 ± 65% interrupts.CPU3.RES:Rescheduling_interrupts
5206 ± 16% +28.1% 6672 interrupts.CPU36.CAL:Function_call_interrupts
5821 ± 12% -73.2% 1561 ± 51% interrupts.CPU4.RES:Rescheduling_interrupts
83.00 ± 17% -32.5% 56.00 ± 29% interrupts.CPU40.RES:Rescheduling_interrupts
4349 ± 30% -80.8% 836.00 ± 50% interrupts.CPU5.RES:Rescheduling_interrupts
2458 ± 44% -68.3% 780.00 ±114% interrupts.CPU55.NMI:Non-maskable_interrupts
2458 ± 44% -68.3% 780.00 ±114% interrupts.CPU55.PMI:Performance_monitoring_interrupts
3153 ± 48% -78.7% 673.25 ± 74% interrupts.CPU56.NMI:Non-maskable_interrupts
3153 ± 48% -78.7% 673.25 ± 74% interrupts.CPU56.PMI:Performance_monitoring_interrupts
3769 ± 13% -80.7% 728.75 ± 48% interrupts.CPU6.RES:Rescheduling_interrupts
336.75 ±126% -91.2% 29.50 ± 95% interrupts.CPU74.RES:Rescheduling_interrupts
1766 ± 54% -65.7% 606.50 ± 54% interrupts.CPU8.RES:Rescheduling_interrupts
30.00 ± 28% +432.5% 159.75 ± 92% interrupts.CPU85.RES:Rescheduling_interrupts
320.65 ±173% +2126.1% 7138 ±100% sched_debug.cfs_rq:/.MIN_vruntime.avg
60894 +24.1% 75590 ± 14% sched_debug.cfs_rq:/.exec_clock.avg
13190 ± 9% +144.9% 32297 ± 29% sched_debug.cfs_rq:/.exec_clock.stddev
320.65 ±173% +2126.1% 7138 ±100% sched_debug.cfs_rq:/.max_vruntime.avg
3022932 +29.2% 3906103 ± 13% sched_debug.cfs_rq:/.min_vruntime.avg
5123973 ± 16% +40.1% 7181169 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
593877 ± 8% +180.2% 1664130 ± 28% sched_debug.cfs_rq:/.min_vruntime.stddev
0.40 ± 6% +45.8% 0.59 ± 3% sched_debug.cfs_rq:/.nr_running.avg
1.71 ± 2% +67.3% 2.86 ± 19% sched_debug.cfs_rq:/.nr_spread_over.avg
1.29 ± 21% +52.4% 1.97 ± 15% sched_debug.cfs_rq:/.nr_spread_over.stddev
9.02 ± 3% +24.5% 11.23 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
28.38 ± 16% -30.8% 19.62 sched_debug.cfs_rq:/.runnable_load_avg.max
11.55 ± 11% -26.2% 8.53 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.stddev
3730816 ± 19% +54.5% 5765499 sched_debug.cfs_rq:/.spread0.max
593850 ± 8% +180.2% 1664097 ± 28% sched_debug.cfs_rq:/.spread0.stddev
406.87 ± 7% +48.3% 603.22 ± 3% sched_debug.cfs_rq:/.util_avg.avg
28.33 ± 16% -30.6% 19.67 sched_debug.cpu.cpu_load[0].max
11.23 ± 9% -19.2% 9.07 sched_debug.cpu.cpu_load[0].stddev
48.62 ± 23% -59.1% 19.88 sched_debug.cpu.cpu_load[1].max
12.53 ± 7% -28.2% 9.00 sched_debug.cpu.cpu_load[1].stddev
67.50 ± 44% -68.6% 21.21 ± 11% sched_debug.cpu.cpu_load[2].max
14.14 ± 24% -36.6% 8.97 sched_debug.cpu.cpu_load[2].stddev
121.54 ±111% -81.0% 23.08 ± 12% sched_debug.cpu.cpu_load[3].max
19.34 ± 71% -53.4% 9.01 sched_debug.cpu.cpu_load[3].stddev
139.17 ±118% -80.4% 27.33 ± 14% sched_debug.cpu.cpu_load[4].max
20.88 ± 79% -55.4% 9.30 ± 3% sched_debug.cpu.cpu_load[4].stddev
27826 -25.9% 20621 sched_debug.cpu.curr->pid.max
12953 ± 3% -21.4% 10185 sched_debug.cpu.curr->pid.stddev
0.00 ± 17% +24.4% 0.00 ± 12% sched_debug.cpu.next_balance.stddev
122120 ± 3% +15.7% 141347 ± 2% sched_debug.cpu.nr_load_updates.max
10094 ± 7% +127.8% 22993 ± 25% sched_debug.cpu.nr_load_updates.stddev
0.37 ± 6% +15.8% 0.43 sched_debug.cpu.nr_running.avg
7428 ± 2% +23.6% 9182 ± 7% sched_debug.cpu.nr_switches.avg
42101 ± 7% +105.4% 86497 ± 25% sched_debug.cpu.nr_switches.max
6842 ± 3% +82.1% 12458 ± 10% sched_debug.cpu.nr_switches.stddev
6724 +24.8% 8389 ± 7% sched_debug.cpu.sched_count.avg
40147 ± 7% +109.1% 83942 ± 26% sched_debug.cpu.sched_count.max
6608 ± 3% +83.2% 12108 ± 11% sched_debug.cpu.sched_count.stddev
3230 +19.5% 3861 ± 6% sched_debug.cpu.sched_goidle.avg
19924 ± 7% +110.0% 41838 ± 26% sched_debug.cpu.sched_goidle.max
3301 ± 3% +78.9% 5906 ± 12% sched_debug.cpu.sched_goidle.stddev
3123 ± 2% +29.1% 4032 ± 7% sched_debug.cpu.ttwu_count.avg
27111 ± 20% +105.9% 55815 ± 34% sched_debug.cpu.ttwu_count.max
4227 ± 10% +83.8% 7771 ± 11% sched_debug.cpu.ttwu_count.stddev
992.95 +56.0% 1548 ± 10% sched_debug.cpu.ttwu_local.avg
3551 ± 15% +159.8% 9228 ± 48% sched_debug.cpu.ttwu_local.max
194.04 ± 71% +90.8% 370.25 ± 9% sched_debug.cpu.ttwu_local.min
603.41 ± 7% +114.2% 1292 ± 45% sched_debug.cpu.ttwu_local.stddev
19.92 -16.7 3.21 ± 15% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
21.42 -16.0 5.42 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.calltrace.cycles-pp.secondary_startup_64
1.17 ± 3% +0.3 1.43 ± 2% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
1.17 ± 3% +0.3 1.44 ± 2% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.55 +0.4 0.95 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
1.43 ± 2% +0.5 1.97 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.29 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.32 ± 3% +0.6 1.90 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.32 ± 3% +0.6 1.90 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.92 ± 2% +0.6 1.54 perf-profile.calltrace.cycles-pp.clear_huge_page
0.55 ±102% +0.9 1.42 ± 11% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.69 ± 4% +1.0 1.69 ± 2% perf-profile.calltrace.cycles-pp.clear_page_erms
1.66 ± 2% +1.1 2.72 perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.77 ± 4% +1.7 2.44 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
0.76 ± 4% +1.7 2.43 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
2.43 ± 2% +2.1 4.53 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
2.44 ± 2% +2.1 4.55 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
60.91 +9.2 70.07 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
69.30 +10.9 80.25 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
72.23 +13.1 85.35 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.25 +13.1 85.38 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.11 +13.1 85.25 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
72.31 +13.2 85.48 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
72.31 +13.2 85.48 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
72.32 +13.2 85.50 perf-profile.calltrace.cycles-pp.page_fault
20.15 ± 2% -16.6 3.55 ± 4% perf-profile.children.cycles-pp.intel_idle
21.46 -16.0 5.49 ± 8% perf-profile.children.cycles-pp.start_secondary
21.71 ± 2% -15.9 5.82 ± 6% perf-profile.children.cycles-pp.cpuidle_enter_state
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.do_idle
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.secondary_startup_64
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.cpu_startup_entry
0.14 ± 16% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.read
0.17 ± 6% -0.0 0.15 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.cmd_stat
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.__run_perf_stat
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.process_interval
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.read_counters
0.07 ± 17% +0.0 0.10 ± 10% perf-profile.children.cycles-pp.perf_event_read
0.08 ± 19% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.08 ± 19% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.__read_nocancel
0.08 ± 16% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.perf_read
0.07 ± 15% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.smp_call_function_single
0.03 ±100% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.57 ± 5% +0.0 0.61 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.05 perf-profile.children.cycles-pp.___perf_sw_event
0.08 ± 8% +0.1 0.13 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__perf_sw_event
0.05 ± 62% +0.1 0.12 ± 19% perf-profile.children.cycles-pp.ktime_get
0.40 ± 2% +0.1 0.49 ± 2% perf-profile.children.cycles-pp.rcu_all_qs
0.13 ± 3% +0.1 0.25 ± 17% perf-profile.children.cycles-pp.__put_compound_page
0.10 ± 8% +0.1 0.23 ± 18% perf-profile.children.cycles-pp.__page_cache_release
0.00 +0.2 0.16 ± 4% perf-profile.children.cycles-pp.free_transhuge_page
0.20 ± 4% +0.2 0.41 ± 5% perf-profile.children.cycles-pp.free_one_page
1.17 ± 3% +0.3 1.44 ± 2% perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.18 ± 3% +0.3 1.46 ± 2% perf-profile.children.cycles-pp.release_pages
0.07 ± 6% +0.3 0.39 ± 11% perf-profile.children.cycles-pp.zap_huge_pmd
0.00 +0.3 0.34 ± 12% perf-profile.children.cycles-pp.deferred_split_huge_page
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.children.cycles-pp.mmput
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.__wake_up_parent
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.do_group_exit
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.do_exit
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.children.cycles-pp.exit_mmap
1.27 ± 3% +0.6 1.85 ± 3% perf-profile.children.cycles-pp.unmap_vmas
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.children.cycles-pp.unmap_page_range
1.71 +0.6 2.34 perf-profile.children.cycles-pp._cond_resched
0.67 ± 73% +0.8 1.42 ± 11% perf-profile.children.cycles-pp.poll_idle
2.22 ± 2% +1.5 3.67 perf-profile.children.cycles-pp.___might_sleep
2.54 ± 2% +2.2 4.69 ± 2% perf-profile.children.cycles-pp.__alloc_pages_nodemask
2.52 ± 2% +2.2 4.68 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
0.97 ± 4% +2.2 3.19 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.57 ± 14% +2.3 3.85 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
62.03 +10.2 72.23 perf-profile.children.cycles-pp.clear_page_erms
70.21 +11.6 81.78 perf-profile.children.cycles-pp.clear_huge_page
72.25 +13.1 85.37 perf-profile.children.cycles-pp.__handle_mm_fault
72.27 +13.1 85.40 perf-profile.children.cycles-pp.handle_mm_fault
72.11 +13.1 85.26 perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
72.33 +13.2 85.50 perf-profile.children.cycles-pp.do_page_fault
72.33 +13.2 85.50 perf-profile.children.cycles-pp.__do_page_fault
72.34 +13.2 85.51 perf-profile.children.cycles-pp.page_fault
20.15 ± 2% -16.6 3.55 ± 4% perf-profile.self.cycles-pp.intel_idle
0.78 ± 4% -0.2 0.59 ± 4% perf-profile.self.cycles-pp.__free_pages_ok
0.17 ± 6% -0.0 0.15 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.07 ± 15% +0.0 0.11 ± 8% perf-profile.self.cycles-pp.smp_call_function_single
0.03 ±100% +0.1 0.11 ± 23% perf-profile.self.cycles-pp.ktime_get
0.39 ± 2% +0.1 0.48 ± 2% perf-profile.self.cycles-pp.rcu_all_qs
1.61 +0.4 2.00 perf-profile.self.cycles-pp.get_page_from_freelist
1.46 +0.6 2.07 perf-profile.self.cycles-pp._cond_resched
0.66 ± 73% +0.8 1.42 ± 11% perf-profile.self.cycles-pp.poll_idle
6.08 +0.8 6.88 perf-profile.self.cycles-pp.clear_huge_page
2.19 +1.5 3.65 perf-profile.self.cycles-pp.___might_sleep
1.57 ± 14% +2.3 3.85 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
61.40 +10.3 71.66 perf-profile.self.cycles-pp.clear_page_erms
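
Both kernels spend the bulk of their cycles in the anonymous-huge-page fault path shown above (page_fault -> do_page_fault -> __do_page_fault -> handle_mm_fault -> __handle_mm_fault -> do_huge_pmd_anonymous_page -> clear_huge_page -> clear_page_erms); with the patch, time moves out of intel_idle and into page zeroing plus zone-lock contention under get_page_from_freelist. As a rough illustration only (a minimal sketch, not the test's actual source), a loop of the following shape drives that same path; the 1 GiB mapping size, the pass count and the MADV_HUGEPAGE hint are all illustrative choices:

/* Minimal sketch of an anonymous-THP page-fault loop (illustrative only,
 * not the benchmark source). Each pass maps fresh anonymous memory,
 * touches one byte per 2 MiB region so the kernel takes a huge-page fault
 * (do_huge_pmd_anonymous_page() zeroing via clear_huge_page()), then
 * unmaps the region so the next pass faults everything in again. */
#include <stdio.h>
#include <sys/mman.h>

#define MAP_SIZE   (1UL << 30)   /* 1 GiB per pass; illustrative */
#define HPAGE_SIZE (2UL << 20)   /* 2 MiB, the x86-64 huge page size */

int main(void)
{
    for (int pass = 0; pass < 10; pass++) {
        char *buf = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        madvise(buf, MAP_SIZE, MADV_HUGEPAGE);   /* best-effort THP hint */
        for (size_t off = 0; off < MAP_SIZE; off += HPAGE_SIZE)
            buf[off] = 1;                        /* first touch => huge-page fault */
        munmap(buf, MAP_SIZE);
    }
    return 0;
}

Run with several such tasks in parallel, this also reproduces the _raw_spin_lock_irqsave/native_queued_spin_lock_slowpath growth seen above, since concurrent huge-page allocations contend on the zone free-list lock in get_page_from_freelist().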
***************************************************************************************************
lkp-hsx04: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/iterations/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/30/x86_64-rhel-7.2/1600%/debian-x86_64-2016-08-31.cgz/lkp-hsx04/compute/reaim
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_error_entry/0x
1:4 -25% :4 dmesg.WARNING:at#for_ip_retint_user/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
:4 50% 2:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
6:4 0% 6:4 perf-profile.calltrace.cycles-pp.error_entry
7:4 -1% 7:4 perf-profile.children.cycles-pp.error_entry
5:4 1% 5:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
124.00 -1.4% 122.25 reaim.child_systime
87.40 -9.5% 79.08 reaim.jti
12.09 ± 2% +69.5% 20.49 ± 2% reaim.std_dev_percent
1.53 +50.7% 2.30 ± 2% reaim.std_dev_time
17534460 -5.9% 16498575 reaim.time.involuntary_context_switches
3734 -1.4% 3683 reaim.time.system_time
49689016 ± 17% +26.1% 62671778 ± 6% cpuidle.POLL.time
0.39 ± 3% -15.5% 0.33 ± 10% turbostat.Pkg%pc6
70.96 -2.2% 69.38 turbostat.RAMWatt
1695 -10.0% 1525 vmstat.procs.r
62117 -3.2% 60144 vmstat.system.cs
7903 ± 4% -56.9% 3405 ± 24% slabinfo.eventpoll_epi.active_objs
7903 ± 4% -56.9% 3405 ± 24% slabinfo.eventpoll_epi.num_objs
6915 ± 4% -56.9% 2979 ± 24% slabinfo.eventpoll_pwq.active_objs
6915 ± 4% -56.9% 2979 ± 24% slabinfo.eventpoll_pwq.num_objs
1872863 -15.2% 1587464 ± 2% meminfo.Active
1789974 -15.8% 1506890 meminfo.Active(anon)
1680128 -17.4% 1388186 meminfo.AnonPages
3268127 -13.1% 2840098 meminfo.Committed_AS
1855 -58.7% 766.75 meminfo.Mlocked
1861 -58.5% 773.25 meminfo.Unevictable
1.69 -0.2 1.52 perf-stat.cache-miss-rate%
2.587e+10 -10.6% 2.313e+10 perf-stat.cache-misses
30767306 -3.3% 29760895 perf-stat.context-switches
5874734 -14.8% 5007295 perf-stat.cpu-migrations
71.33 -0.8 70.58 perf-stat.node-load-miss-rate%
8.155e+09 -10.6% 7.291e+09 perf-stat.node-load-misses
3.277e+09 -7.3% 3.039e+09 perf-stat.node-loads
2.765e+09 -11.5% 2.448e+09 ± 2% perf-stat.node-store-misses
1.519e+10 -13.1% 1.32e+10 perf-stat.node-stores
396395 ± 4% -18.4% 323621 ± 7% numa-meminfo.node1.AnonPages
281797 ± 8% -9.8% 254296 ± 3% numa-meminfo.node1.Inactive(file)
521557 ± 6% -17.7% 429170 ± 5% numa-meminfo.node2.Active
500694 ± 7% -18.6% 407636 ± 5% numa-meminfo.node2.Active(anon)
478540 ± 8% -17.2% 396013 ± 12% numa-meminfo.node3.Active
459279 ± 7% -17.9% 377244 ± 13% numa-meminfo.node3.Active(anon)
32735 ± 41% -97.8% 716.00 ±100% numa-meminfo.node3.AnonHugePages
434193 ± 4% -23.8% 330780 ± 11% numa-meminfo.node3.AnonPages
285808 ± 5% -10.7% 255254 ± 4% numa-meminfo.node3.Inactive
280289 ± 6% -9.2% 254380 ± 3% numa-meminfo.node3.Inactive(file)
445067 -15.6% 375712 proc-vmstat.nr_active_anon
417525 -17.1% 346011 proc-vmstat.nr_anon_pages
56599 -5.0% 53752 proc-vmstat.nr_kernel_stack
463.50 -58.9% 190.50 proc-vmstat.nr_mlock
100103 -2.7% 97448 proc-vmstat.nr_slab_unreclaimable
465.00 -58.7% 192.25 proc-vmstat.nr_unevictable
445067 -15.6% 375712 proc-vmstat.nr_zone_active_anon
465.00 -58.7% 192.25 proc-vmstat.nr_zone_unevictable
195.25 ± 56% +729.4% 1619 ± 87% proc-vmstat.numa_hint_faults_local
7687 ± 3% +4.5% 8036 ± 2% proc-vmstat.numa_pte_updates
634.12 ± 6% -26.8% 464.15 ± 12% sched_debug.cfs_rq:/.exec_clock.stddev
49884777 ± 5% +21.8% 60745074 sched_debug.cfs_rq:/.min_vruntime.avg
56196226 ± 6% +25.2% 70352699 sched_debug.cfs_rq:/.min_vruntime.max
43890810 ± 4% +15.7% 50767701 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
2523222 ± 6% +98.7% 5013049 ± 19% sched_debug.cfs_rq:/.min_vruntime.stddev
1.56 ± 5% +12.6% 1.76 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
-5929434 +78.4% -10580036 sched_debug.cfs_rq:/.spread0.min
2487301 ± 6% +101.3% 5007639 ± 19% sched_debug.cfs_rq:/.spread0.stddev
243.96 ± 2% -10.0% 219.50 sched_debug.cfs_rq:/.util_avg.stddev
670504 ± 5% -15.4% 566996 ± 6% sched_debug.cpu.avg_idle.min
58664 ± 11% +40.9% 82632 ± 12% sched_debug.cpu.avg_idle.stddev
470.51 ± 17% -47.7% 246.06 ± 20% sched_debug.cpu.clock.stddev
470.51 ± 17% -47.7% 246.06 ± 20% sched_debug.cpu.clock_task.stddev
7233 ± 9% +18.2% 8551 ± 12% sched_debug.cpu.load.avg
137222 ± 62% +119.4% 301049 ± 35% sched_debug.cpu.load.max
15787 ± 43% +85.8% 29335 ± 30% sched_debug.cpu.load.stddev
0.00 ± 17% -35.3% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
169.75 ± 13% -25.0% 127.31 ± 8% sched_debug.cpu.sched_goidle.min
3.38 +39.4% 4.71 ± 15% sched_debug.rt_rq:/.rt_runtime.stddev
131.75 ± 23% -58.3% 55.00 ± 28% numa-vmstat.node0.nr_mlock
132.00 ± 23% -58.1% 55.25 ± 27% numa-vmstat.node0.nr_unevictable
132.00 ± 23% -58.1% 55.25 ± 27% numa-vmstat.node0.nr_zone_unevictable
98455 ± 4% -18.2% 80516 ± 6% numa-vmstat.node1.nr_anon_pages
319.25 ±142% -100.0% 0.00 numa-vmstat.node1.nr_dirtied
70449 ± 8% -9.8% 63574 ± 3% numa-vmstat.node1.nr_inactive_file
99.50 ± 3% -60.1% 39.75 numa-vmstat.node1.nr_mlock
100.00 ± 4% -60.2% 39.75 numa-vmstat.node1.nr_unevictable
319.25 ±142% -100.0% 0.00 numa-vmstat.node1.nr_written
70449 ± 8% -9.8% 63574 ± 3% numa-vmstat.node1.nr_zone_inactive_file
100.00 ± 4% -60.2% 39.75 numa-vmstat.node1.nr_zone_unevictable
124407 ± 7% -18.3% 101653 ± 5% numa-vmstat.node2.nr_active_anon
124405 ± 7% -18.3% 101652 ± 5% numa-vmstat.node2.nr_zone_active_anon
114276 ± 7% -17.8% 93899 ± 12% numa-vmstat.node3.nr_active_anon
108032 ± 4% -23.8% 82312 ± 11% numa-vmstat.node3.nr_anon_pages
70071 ± 6% -9.2% 63594 ± 3% numa-vmstat.node3.nr_inactive_file
134.00 ± 21% -64.6% 47.50 ± 28% numa-vmstat.node3.nr_mlock
135.00 ± 21% -64.6% 47.75 ± 28% numa-vmstat.node3.nr_unevictable
114275 ± 7% -17.8% 93901 ± 12% numa-vmstat.node3.nr_zone_active_anon
70071 ± 6% -9.2% 63594 ± 3% numa-vmstat.node3.nr_zone_inactive_file
135.00 ± 21% -64.6% 47.75 ± 28% numa-vmstat.node3.nr_zone_unevictable
1.411e+09 ± 8% -3.3e+08 1.077e+09 ± 14% syscalls.sys_brk.noise.100%
1.42e+09 ± 7% -3.3e+08 1.086e+09 ± 14% syscalls.sys_brk.noise.2%
1.416e+09 ± 7% -3.3e+08 1.082e+09 ± 14% syscalls.sys_brk.noise.25%
1.42e+09 ± 7% -3.3e+08 1.085e+09 ± 14% syscalls.sys_brk.noise.5%
1.414e+09 ± 8% -3.3e+08 1.08e+09 ± 14% syscalls.sys_brk.noise.50%
1.413e+09 ± 8% -3.3e+08 1.079e+09 ± 14% syscalls.sys_brk.noise.75%
4.046e+09 ± 13% -1.3e+09 2.793e+09 ± 6% syscalls.sys_newstat.noise.100%
4.119e+09 ± 12% -1.3e+09 2.868e+09 ± 6% syscalls.sys_newstat.noise.2%
4.101e+09 ± 12% -1.3e+09 2.849e+09 ± 6% syscalls.sys_newstat.noise.25%
4.117e+09 ± 12% -1.3e+09 2.866e+09 ± 6% syscalls.sys_newstat.noise.5%
4.08e+09 ± 12% -1.3e+09 2.828e+09 ± 6% syscalls.sys_newstat.noise.50%
4.062e+09 ± 12% -1.3e+09 2.811e+09 ± 6% syscalls.sys_newstat.noise.75%
1.541e+11 ± 10% -4.7e+10 1.072e+11 ± 15% syscalls.sys_read.noise.100%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.2%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.25%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.5%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.50%
1.541e+11 ± 10% -4.7e+10 1.072e+11 ± 15% syscalls.sys_read.noise.75%
130453 ± 16% -69.2% 40150 ±103% syscalls.sys_rt_sigaction.max
19777092 ± 4% -1.3e+07 6543904 ±100% syscalls.sys_rt_sigaction.noise.100%
27560343 ± 2% -1.7e+07 10538623 ±100% syscalls.sys_rt_sigaction.noise.2%
26718095 ± 3% -1.7e+07 9971159 ±100% syscalls.sys_rt_sigaction.noise.25%
27550355 ± 2% -1.7e+07 10510393 ±100% syscalls.sys_rt_sigaction.noise.5%
24718035 ± 3% -1.6e+07 8356079 ±100% syscalls.sys_rt_sigaction.noise.50%
22249116 ± 4% -1.5e+07 7149959 ±100% syscalls.sys_rt_sigaction.noise.75%
27266292 ± 11% -1.6e+07 11532735 ±100% syscalls.sys_times.noise.100%
32337364 ± 9% -1.8e+07 14209280 ±100% syscalls.sys_times.noise.2%
31159606 ± 9% -1.8e+07 13621578 ±100% syscalls.sys_times.noise.25%
32279406 ± 9% -1.8e+07 14182805 ±100% syscalls.sys_times.noise.5%
30086951 ± 9% -1.7e+07 13027260 ±100% syscalls.sys_times.noise.50%
28978220 ± 10% -1.7e+07 12426543 ±100% syscalls.sys_times.noise.75%
4922 ± 13% -12.0% 4333 interrupts.CPU102.CAL:Function_call_interrupts
15763 ± 12% -17.4% 13021 ± 3% interrupts.CPU104.RES:Rescheduling_interrupts
4930 ± 14% -12.3% 4325 interrupts.CPU11.CAL:Function_call_interrupts
4910 ± 13% -12.1% 4318 interrupts.CPU118.CAL:Function_call_interrupts
4935 ± 14% -17.6% 4064 ± 10% interrupts.CPU119.CAL:Function_call_interrupts
4926 ± 13% -12.0% 4332 interrupts.CPU120.CAL:Function_call_interrupts
4838 ± 32% +34.0% 6483 ± 21% interrupts.CPU122.NMI:Non-maskable_interrupts
4838 ± 32% +34.0% 6483 ± 21% interrupts.CPU122.PMI:Performance_monitoring_interrupts
4913 ± 14% -17.4% 4058 ± 8% interrupts.CPU123.CAL:Function_call_interrupts
4907 ± 13% -11.7% 4330 interrupts.CPU124.CAL:Function_call_interrupts
14843 ± 3% -10.8% 13246 ± 3% interrupts.CPU126.RES:Rescheduling_interrupts
15606 -17.6% 12854 ± 4% interrupts.CPU130.RES:Rescheduling_interrupts
15198 ± 8% -14.3% 13028 interrupts.CPU131.RES:Rescheduling_interrupts
4902 ± 14% -12.6% 4286 interrupts.CPU134.CAL:Function_call_interrupts
4878 ± 14% -12.2% 4285 interrupts.CPU14.CAL:Function_call_interrupts
4945 ± 14% -13.1% 4297 interrupts.CPU140.CAL:Function_call_interrupts
15827 ± 4% -13.6% 13669 ± 5% interrupts.CPU141.RES:Rescheduling_interrupts
4786 ± 12% -14.4% 4097 ± 7% interrupts.CPU18.CAL:Function_call_interrupts
15216 ± 12% -15.3% 12883 ± 2% interrupts.CPU18.RES:Rescheduling_interrupts
15525 ± 5% -15.0% 13200 ± 2% interrupts.CPU19.RES:Rescheduling_interrupts
14771 ± 3% -7.8% 13620 interrupts.CPU2.RES:Rescheduling_interrupts
15066 ± 7% -10.6% 13468 ± 5% interrupts.CPU20.RES:Rescheduling_interrupts
4822 ± 10% -9.7% 4352 interrupts.CPU21.CAL:Function_call_interrupts
4908 ± 14% -12.3% 4305 interrupts.CPU26.CAL:Function_call_interrupts
4944 ± 31% +54.2% 7623 ± 5% interrupts.CPU27.NMI:Non-maskable_interrupts
4944 ± 31% +54.2% 7623 ± 5% interrupts.CPU27.PMI:Performance_monitoring_interrupts
4947 ± 13% -14.2% 4246 ± 4% interrupts.CPU3.CAL:Function_call_interrupts
4031 ± 5% +73.1% 6977 ± 22% interrupts.CPU3.NMI:Non-maskable_interrupts
4031 ± 5% +73.1% 6977 ± 22% interrupts.CPU3.PMI:Performance_monitoring_interrupts
15219 ± 4% -10.9% 13568 ± 5% interrupts.CPU3.RES:Rescheduling_interrupts
4849 ± 32% +54.9% 7510 ± 8% interrupts.CPU30.NMI:Non-maskable_interrupts
4849 ± 32% +54.9% 7510 ± 8% interrupts.CPU30.PMI:Performance_monitoring_interrupts
15365 ± 5% -10.0% 13833 ± 6% interrupts.CPU33.RES:Rescheduling_interrupts
4758 ± 21% +63.9% 7797 interrupts.CPU5.NMI:Non-maskable_interrupts
4758 ± 21% +63.9% 7797 interrupts.CPU5.PMI:Performance_monitoring_interrupts
4937 ± 14% -12.5% 4321 interrupts.CPU56.CAL:Function_call_interrupts
4932 ± 14% -12.0% 4340 interrupts.CPU58.CAL:Function_call_interrupts
4935 ± 14% -11.9% 4347 interrupts.CPU60.CAL:Function_call_interrupts
4836 ± 32% +35.3% 6542 ± 21% interrupts.CPU60.NMI:Non-maskable_interrupts
4836 ± 32% +35.3% 6542 ± 21% interrupts.CPU60.PMI:Performance_monitoring_interrupts
4867 ± 14% -12.5% 4260 interrupts.CPU62.CAL:Function_call_interrupts
4922 ± 14% -12.0% 4333 interrupts.CPU64.CAL:Function_call_interrupts
15118 ± 7% -14.0% 13008 ± 8% interrupts.CPU64.RES:Rescheduling_interrupts
4922 ± 13% -12.1% 4329 interrupts.CPU65.CAL:Function_call_interrupts
15324 ± 9% -12.5% 13415 ± 3% interrupts.CPU67.RES:Rescheduling_interrupts
4884 ± 14% -13.0% 4248 interrupts.CPU71.CAL:Function_call_interrupts
4890 ± 14% -11.8% 4311 interrupts.CPU77.CAL:Function_call_interrupts
4889 ± 13% -11.4% 4330 interrupts.CPU80.CAL:Function_call_interrupts
14898 ± 3% -9.4% 13504 ± 2% interrupts.CPU80.RES:Rescheduling_interrupts
15793 ± 12% -13.6% 13651 ± 3% interrupts.CPU83.RES:Rescheduling_interrupts
14835 ± 3% -9.2% 13466 ± 3% interrupts.CPU85.RES:Rescheduling_interrupts
4831 ± 32% +41.0% 6809 ± 15% interrupts.CPU86.NMI:Non-maskable_interrupts
4831 ± 32% +41.0% 6809 ± 15% interrupts.CPU86.PMI:Performance_monitoring_interrupts
15141 ± 11% -14.7% 12921 ± 2% interrupts.CPU91.RES:Rescheduling_interrupts
4919 ± 13% -12.0% 4328 interrupts.CPU94.CAL:Function_call_interrupts
15256 ± 11% -10.1% 13721 ± 6% interrupts.CPU94.RES:Rescheduling_interrupts
15869 ± 7% -13.2% 13771 ± 4% interrupts.CPU96.RES:Rescheduling_interrupts
4919 ± 13% -12.3% 4316 interrupts.CPU97.CAL:Function_call_interrupts
4.50 ± 3% -0.3 4.23 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
4.50 ± 3% -0.3 4.23 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
4.48 ± 3% -0.3 4.22 perf-profile.calltrace.cycles-pp.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
4.55 ± 3% -0.2 4.35 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.45 ± 2% -0.1 2.31 perf-profile.calltrace.cycles-pp.__libc_fork
1.67 ± 2% -0.1 1.55 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.22 ± 6% -0.1 1.13 ± 6% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
2.17 -0.1 2.10 perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.60 -0.1 1.53 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.69 ± 4% -0.1 0.62 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.69 ± 4% -0.1 0.63 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sync
0.68 ± 4% -0.1 0.62 ± 2% perf-profile.calltrace.cycles-pp.sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.62 ± 4% -0.1 0.56 ± 3% perf-profile.calltrace.cycles-pp.iterate_supers.sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.92 ± 2% -0.0 0.88 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.71 ± 3% -0.0 0.67 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
0.72 -0.0 0.68 ± 3% perf-profile.calltrace.cycles-pp.rcu_process_callbacks.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.66 ± 5% +0.0 0.71 ± 2% perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.__handle_mm_fault.handle_mm_fault
1.23 ± 3% +0.1 1.30 ± 2% perf-profile.calltrace.cycles-pp.__lru_cache_add.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.06 ± 4% +0.1 1.16 perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.50 ± 3% +0.1 1.61 ± 4% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
1.89 +0.1 2.02 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
12.85 +0.2 13.05 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
13.47 +0.2 13.67 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
16.66 +0.2 16.91 perf-profile.calltrace.cycles-pp.page_fault
16.49 +0.3 16.75 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
16.46 +0.3 16.72 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.26 ±100% +0.3 0.54 ± 3% perf-profile.calltrace.cycles-pp.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.sys_brk.do_syscall_64
0.00 +0.5 0.51 ± 2% perf-profile.calltrace.cycles-pp.__xstat64
1.24 ± 16% -0.3 0.90 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
4.98 ± 2% -0.2 4.80 perf-profile.children.cycles-pp.unmap_vmas
1.90 ± 4% -0.1 1.76 ± 5% perf-profile.children.cycles-pp.__softirqentry_text_start
2.45 ± 2% -0.1 2.31 perf-profile.children.cycles-pp.__libc_fork
2.38 -0.1 2.27 perf-profile.children.cycles-pp.do_mmap
2.73 -0.1 2.63 perf-profile.children.cycles-pp.free_pgtables
2.10 -0.1 2.00 perf-profile.children.cycles-pp.mmap_region
2.54 -0.1 2.45 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.74 ± 3% -0.1 0.66 ± 3% perf-profile.children.cycles-pp.wake_up_new_task
2.11 -0.1 2.03 perf-profile.children.cycles-pp.path_openat
2.12 -0.1 2.04 perf-profile.children.cycles-pp.do_filp_open
0.69 ± 3% -0.1 0.62 ± 2% perf-profile.children.cycles-pp.sys_sync
0.36 ± 4% -0.1 0.30 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.62 ± 4% -0.1 0.57 ± 3% perf-profile.children.cycles-pp.iterate_supers
0.15 ± 6% -0.0 0.11 ± 9% perf-profile.children.cycles-pp.mark_page_accessed
0.09 ± 11% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.alloc_vmap_area
0.78 ± 2% -0.0 0.75 perf-profile.children.cycles-pp.lookup_fast
0.11 ± 6% -0.0 0.08 ± 17% perf-profile.children.cycles-pp.__get_vm_area_node
0.15 ± 7% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.__vunmap
0.15 ± 5% -0.0 0.12 ± 8% perf-profile.children.cycles-pp.__d_alloc
0.15 ± 5% -0.0 0.13 ± 5% perf-profile.children.cycles-pp.free_work
0.31 ± 3% -0.0 0.29 perf-profile.children.cycles-pp.__update_load_avg_se
0.10 ± 7% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.generic_permission
0.10 ± 8% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.10 ± 14% +0.0 0.14 ± 10% perf-profile.children.cycles-pp.__do_fault
0.47 ± 3% +0.0 0.51 ± 2% perf-profile.children.cycles-pp.__xstat64
0.01 ±173% +0.0 0.05 ± 9% perf-profile.children.cycles-pp.__alloc_fd
0.68 +0.0 0.73 ± 3% perf-profile.children.cycles-pp.sys_read
0.18 ± 4% +0.1 0.23 ± 16% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.41 ± 6% +0.1 0.48 ± 8% perf-profile.children.cycles-pp.mem_cgroup_try_charge
2.12 ± 3% +0.1 2.20 perf-profile.children.cycles-pp.vfs_statx
1.60 ± 2% +0.1 1.70 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
1.64 ± 2% +0.1 1.75 ± 3% perf-profile.children.cycles-pp.task_tick_fair
1.21 ± 4% +0.1 1.31 perf-profile.children.cycles-pp.__perf_sw_event
0.35 ± 9% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.wait_consider_task
2.10 ± 2% +0.1 2.22 perf-profile.children.cycles-pp.syscall_return_via_sysret
3.41 ± 2% +0.1 3.54 perf-profile.children.cycles-pp.tlb_flush_mmu_free
0.75 ± 4% +0.1 0.89 ± 4% perf-profile.children.cycles-pp.SYSC_wait4
0.75 ± 4% +0.1 0.89 ± 4% perf-profile.children.cycles-pp.kernel_wait4
0.70 ± 5% +0.1 0.84 ± 5% perf-profile.children.cycles-pp.do_wait
5.25 +0.2 5.42 perf-profile.children.cycles-pp.alloc_pages_vma
19.59 +0.2 19.82 perf-profile.children.cycles-pp.do_page_fault
1.23 ± 16% -0.3 0.90 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.01 -0.1 2.90 perf-profile.self.cycles-pp.unmap_page_range
0.32 ± 6% -0.1 0.26 ± 6% perf-profile.self.cycles-pp.update_rq_clock
0.19 ± 13% -0.0 0.15 ± 7% perf-profile.self.cycles-pp.scheduler_tick
1.20 ± 2% -0.0 1.16 perf-profile.self.cycles-pp._dl_addr
0.84 -0.0 0.81 perf-profile.self.cycles-pp.copy_page_range
0.15 ± 7% -0.0 0.11 ± 9% perf-profile.self.cycles-pp.mark_page_accessed
0.09 -0.0 0.07 ± 16% perf-profile.self.cycles-pp.__d_alloc
0.15 ± 3% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.lru_cache_add_active_or_unevictable
0.22 ± 8% +0.0 0.26 ± 5% perf-profile.self.cycles-pp.__perf_sw_event
0.18 ± 4% +0.0 0.23 ± 16% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.07 +0.1 0.12 ± 4% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.69 ± 5% +0.1 0.76 ± 2% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.69 ± 7% +0.1 0.78 ± 4% perf-profile.self.cycles-pp.__vma_adjust
0.05 ± 62% +0.1 0.14 ± 20% perf-profile.self.cycles-pp.wait_consider_task
2.10 ± 2% +0.1 2.22 perf-profile.self.cycles-pp.syscall_return_via_sysret
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
pipe/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
3463075 ± 10% +76.2% 6101037 ± 32% stress-ng.fifo.ops
3460753 ± 10% +76.2% 6096158 ± 32% stress-ng.fifo.ops_per_sec
22252433 ± 4% -41.9% 12934864 ± 38% cpuidle.C1.time
1240735 ± 5% -41.6% 724182 ± 38% cpuidle.C1.usage
9537 ± 12% +72.0% 16402 ± 19% sched_debug.cpu.nr_switches.max
1567 ± 5% +28.0% 2006 ± 11% sched_debug.cpu.nr_switches.stddev
1239038 ± 5% -41.7% 722719 ± 38% turbostat.C1
3.32 ± 5% -1.4 1.89 ± 41% turbostat.C1%
696934 ± 3% +7.9% 751814 ± 6% turbostat.IRQ
2473 ± 9% -25.8% 1834 ± 10% slabinfo.eventpoll_epi.active_objs
2473 ± 9% -25.8% 1834 ± 10% slabinfo.eventpoll_epi.num_objs
4267 ± 9% -25.8% 3165 ± 10% slabinfo.eventpoll_pwq.active_objs
4267 ± 9% -25.8% 3165 ± 10% slabinfo.eventpoll_pwq.num_objs
316.70 ± 42% +286.3% 1223 ± 88% interrupts.CPU1.RES:Rescheduling_interrupts
214.30 ± 35% +225.2% 697.00 ± 67% interrupts.CPU14.RES:Rescheduling_interrupts
249.60 ± 36% +241.5% 852.30 ± 82% interrupts.CPU15.RES:Rescheduling_interrupts
280.70 ± 45% +166.1% 746.90 ± 68% interrupts.CPU17.RES:Rescheduling_interrupts
290.10 ± 31% +178.8% 808.70 ± 88% interrupts.CPU2.RES:Rescheduling_interrupts
221.70 ± 27% +239.7% 753.10 ± 63% interrupts.CPU20.RES:Rescheduling_interrupts
256.40 ± 28% +174.5% 703.90 ± 97% interrupts.CPU49.RES:Rescheduling_interrupts
22379 ± 11% +107.5% 46443 ± 20% interrupts.RES:Rescheduling_interrupts
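
The fifo ops gain comes with a doubling of rescheduling IPIs (interrupts.RES above), which broadly count wakeups that have to kick another CPU: each FIFO write wakes a reader that typically sits on a different CPU. As a hypothetical illustration of that wakeup pattern (a minimal sketch, not stress-ng's fifo stressor), a two-process pipe ping-pong of the following shape generates the same kind of cross-CPU traffic; the iteration count is arbitrary:

/* Hypothetical two-process pipe ping-pong, illustrating the cross-CPU
 * wakeup traffic behind the Rescheduling_interrupts counts above (this is
 * not the stress-ng fifo stressor). Parent and child alternate blocking
 * reads, so every one-byte write wakes the peer task. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int ping[2], pong[2];
    char byte = 0;

    if (pipe(ping) || pipe(pong)) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {                       /* child: echo every byte back */
        close(ping[1]);
        close(pong[0]);
        while (read(ping[0], &byte, 1) == 1)
            write(pong[1], &byte, 1);
        _exit(0);
    }
    close(ping[0]);
    close(pong[1]);
    for (int i = 0; i < 1000000; i++) {      /* parent: drive the ping-pong */
        write(ping[1], &byte, 1);
        read(pong[0], &byte, 1);
    }
    return 0;                                /* parent exit EOFs the child */
}

Pinning the two processes to different CPUs (for example with sched_setaffinity()) makes essentially every iteration a remote wakeup and hence a candidate RES IPI.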
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
cpu/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
189975 -25.8% 140873 ± 7% stress-ng.context.ops
190032 -25.9% 140895 ± 7% stress-ng.context.ops_per_sec
180580 -15.3% 152874 ± 9% stress-ng.hsearch.ops
180604 -15.4% 152852 ± 9% stress-ng.hsearch.ops_per_sec
47965 +6.3% 50971 stress-ng.time.involuntary_context_switches
4076749 +6.0% 4319630 stress-ng.time.minor_page_faults
6259 ± 3% -8.8% 5706 ± 2% stress-ng.time.percent_of_cpu_this_job_got
1601 -8.3% 1468 stress-ng.time.user_time
836.00 -17.3% 691.50 ± 3% stress-ng.tsearch.ops
806.28 -17.1% 668.40 ± 3% stress-ng.tsearch.ops_per_sec
103796 ± 48% -49.0% 52979 ± 9% meminfo.AnonHugePages
54.87 ± 3% -6.2 48.67 ± 6% mpstat.cpu.usr%
64134 ± 45% -62.1% 24298 ± 56% numa-meminfo.node0.AnonHugePages
3.66 ±105% -3.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.66 ±105% -3.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
338.42 ± 4% -10.1% 304.30 sched_debug.cfs_rq:/.util_avg.avg
10253 ± 21% +66.7% 17088 ± 6% sched_debug.cpu.nr_switches.max
1602 ± 7% +25.6% 2013 ± 5% sched_debug.cpu.nr_switches.stddev
2071 ± 15% -18.7% 1683 ± 8% slabinfo.eventpoll_epi.num_objs
3605 ± 15% -19.2% 2912 ± 8% slabinfo.eventpoll_pwq.num_objs
490.00 ± 10% -28.0% 353.00 ± 3% slabinfo.file_lock_cache.num_objs
11628 ± 3% -8.3% 10665 ± 2% slabinfo.kmalloc-96.active_objs
33103068 ± 5% -14.1% 28438786 ± 7% cpuidle.C3.time
141695 ± 5% -8.0% 130380 ± 6% cpuidle.C3.usage
6.907e+08 ± 10% +29.0% 8.912e+08 ± 11% cpuidle.C6.time
642637 ± 11% +20.4% 773937 ± 8% cpuidle.C6.usage
3162428 ± 53% +81.0% 5724968 ± 26% cpuidle.POLL.time
22657 ± 2% -9.2% 20570 softirqs.CPU46.TIMER
22637 ± 3% -7.4% 20972 ± 4% softirqs.CPU5.TIMER
21862 ± 3% -7.9% 20139 ± 2% softirqs.CPU52.TIMER
22165 ± 4% -9.8% 19993 softirqs.CPU54.TIMER
21851 ± 4% -8.6% 19977 ± 2% softirqs.CPU59.TIMER
9771 +2.0% 9966 proc-vmstat.nr_mapped
2767 ± 2% -5.0% 2629 ± 2% proc-vmstat.nr_page_table_pages
1602 ± 13% +188.0% 4614 ± 27% proc-vmstat.numa_hint_faults
379391 ± 2% -38.9% 231826 ± 16% proc-vmstat.numa_pte_updates
4840033 +4.8% 5074161 proc-vmstat.pgfault
2502 ± 80% -81.7% 457.50 ± 7% proc-vmstat.thp_fault_alloc
2011 ± 2% -8.6% 1839 ± 3% turbostat.Avg_MHz
1.12 ± 7% -0.2 0.95 ± 11% turbostat.C3%
644225 ± 10% +20.2% 774089 ± 8% turbostat.C6
23.43 ± 9% +6.3 29.72 ± 7% turbostat.C6%
12.89 ± 10% +17.2% 15.10 ± 7% turbostat.CPU%c1
14.33 ± 6% +28.9% 18.47 ± 11% turbostat.CPU%c6
220.42 -4.7% 210.07 ± 2% turbostat.PkgWatt
1.206e+11 ± 7% -22.6% 9.329e+10 ± 9% perf-stat.branch-instructions
4.75 ± 5% +0.5 5.25 perf-stat.branch-miss-rate%
5.708e+09 -14.0% 4.908e+09 ± 10% perf-stat.branch-misses
1.676e+09 ± 19% -24.5% 1.266e+09 ± 4% perf-stat.cache-references
1.017e+12 ± 3% -16.3% 8.519e+11 ± 3% perf-stat.cpu-cycles
0.02 ± 3% +0.0 0.03 ± 5% perf-stat.dTLB-load-miss-rate%
1.196e+11 ± 5% -16.7% 9.964e+10 ± 5% perf-stat.dTLB-loads
0.01 ± 4% +0.0 0.01 ± 4% perf-stat.dTLB-store-miss-rate%
5273411 ± 5% +9.0% 5746770 ± 3% perf-stat.dTLB-store-misses
4.876e+10 ± 8% -21.0% 3.853e+10 ± 4% perf-stat.dTLB-stores
42.05 ± 2% -4.0 38.06 perf-stat.iTLB-load-miss-rate%
2.264e+08 ± 3% -31.9% 1.542e+08 ± 7% perf-stat.iTLB-load-misses
3.119e+08 -19.6% 2.508e+08 ± 6% perf-stat.iTLB-loads
6.53e+11 ± 6% -19.6% 5.253e+11 ± 5% perf-stat.instructions
13642943 ± 9% -14.0% 11738663 ± 4% perf-stat.node-load-misses
3805381 ± 34% -32.1% 2584140 ± 8% perf-stat.node-stores
30946 ± 6% -15.8% 26047 ± 7% interrupts.CPU0.LOC:Local_timer_interrupts
1326 ± 8% +22.0% 1618 ± 12% interrupts.CPU1.RES:Rescheduling_interrupts
271.50 ± 4% -9.8% 245.00 ± 5% interrupts.CPU12.RES:Rescheduling_interrupts
31204 ± 4% -13.8% 26901 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
407.50 ± 25% -41.7% 237.75 ± 17% interrupts.CPU13.RES:Rescheduling_interrupts
37.75 ± 61% +341.7% 166.75 ± 68% interrupts.CPU14.34:IR-PCI-MSI.1572865-edge.eth0-TxRx-0
527.00 ± 24% +45.3% 765.75 ± 13% interrupts.CPU2.RES:Rescheduling_interrupts
359.50 ± 14% +67.1% 600.75 ± 18% interrupts.CPU22.RES:Rescheduling_interrupts
350.50 ± 25% -42.4% 201.75 ± 3% interrupts.CPU27.RES:Rescheduling_interrupts
318.25 ± 23% +90.7% 606.75 ± 25% interrupts.CPU3.RES:Rescheduling_interrupts
6321 ± 7% +36.9% 8655 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
3534 ± 13% +74.1% 6154 ± 6% interrupts.CPU30.TLB:TLB_shootdowns
287.75 ± 11% -28.7% 205.25 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
326.75 ± 13% -26.9% 239.00 ± 16% interrupts.CPU32.RES:Rescheduling_interrupts
265.25 ± 7% -22.9% 204.50 ± 14% interrupts.CPU35.RES:Rescheduling_interrupts
6704 ± 13% +21.5% 8146 ± 7% interrupts.CPU36.CAL:Function_call_interrupts
3893 ± 20% +46.3% 5696 ± 8% interrupts.CPU36.TLB:TLB_shootdowns
6798 ± 18% +25.9% 8555 ± 5% interrupts.CPU37.CAL:Function_call_interrupts
273.00 ± 19% -30.1% 190.75 ± 11% interrupts.CPU37.RES:Rescheduling_interrupts
4075 ± 29% +48.7% 6058 ± 7% interrupts.CPU37.TLB:TLB_shootdowns
303.75 ± 12% -38.5% 186.75 ± 22% interrupts.CPU41.RES:Rescheduling_interrupts
6346 ± 9% +46.8% 9315 ± 12% interrupts.CPU42.CAL:Function_call_interrupts
3690 ± 15% +82.6% 6736 ± 20% interrupts.CPU42.TLB:TLB_shootdowns
255.75 ± 10% -37.8% 159.00 ± 12% interrupts.CPU43.RES:Rescheduling_interrupts
7579 ± 9% -25.8% 5620 ± 33% interrupts.CPU50.CAL:Function_call_interrupts
398.75 ± 47% -50.3% 198.00 ± 22% interrupts.CPU51.RES:Rescheduling_interrupts
31904 ± 5% -13.3% 27660 ± 9% interrupts.CPU52.LOC:Local_timer_interrupts
7487 ± 11% -19.2% 6048 ± 9% interrupts.CPU54.CAL:Function_call_interrupts
4747 ± 18% -26.8% 3474 ± 16% interrupts.CPU54.TLB:TLB_shootdowns
7486 ± 17% -30.7% 5187 ± 11% interrupts.CPU57.CAL:Function_call_interrupts
270.75 ± 11% -16.9% 225.00 ± 11% interrupts.CPU57.RES:Rescheduling_interrupts
4782 ± 28% -46.9% 2537 ± 31% interrupts.CPU57.TLB:TLB_shootdowns
238.25 ± 7% +12.7% 268.50 ± 5% interrupts.CPU63.RES:Rescheduling_interrupts
286.75 ± 10% -29.6% 202.00 ± 10% interrupts.CPU64.RES:Rescheduling_interrupts
290.25 ± 14% -41.3% 170.25 ± 29% interrupts.CPU65.RES:Rescheduling_interrupts
6570 ± 31% +30.0% 8538 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
3829 ± 58% +60.1% 6130 ± 7% interrupts.CPU66.TLB:TLB_shootdowns
267.75 ± 18% -34.9% 174.25 ± 22% interrupts.CPU67.RES:Rescheduling_interrupts
257.00 ± 12% -30.4% 178.75 ± 20% interrupts.CPU68.RES:Rescheduling_interrupts
4377 ± 28% +47.4% 6450 ± 17% interrupts.CPU68.TLB:TLB_shootdowns
244.50 ± 17% -22.4% 189.75 interrupts.CPU69.RES:Rescheduling_interrupts
4263 ± 30% +41.9% 6050 ± 7% interrupts.CPU69.TLB:TLB_shootdowns
262.50 ± 17% -28.9% 186.75 ± 9% interrupts.CPU71.RES:Rescheduling_interrupts
6230 ± 10% +43.2% 8922 ± 12% interrupts.CPU72.CAL:Function_call_interrupts
30763 ± 6% -11.1% 27340 ± 7% interrupts.CPU72.LOC:Local_timer_interrupts
3562 ± 18% +84.8% 6584 ± 19% interrupts.CPU72.TLB:TLB_shootdowns
259.25 ± 2% -19.7% 208.25 ± 10% interrupts.CPU74.RES:Rescheduling_interrupts
237.75 ± 21% -24.5% 179.50 ± 11% interrupts.CPU76.RES:Rescheduling_interrupts
297.25 ± 29% -40.2% 177.75 ± 9% interrupts.CPU79.RES:Rescheduling_interrupts
271.75 ± 14% -33.0% 182.00 ± 10% interrupts.CPU80.RES:Rescheduling_interrupts
221.25 ± 18% -23.1% 170.25 ± 13% interrupts.CPU81.RES:Rescheduling_interrupts
323.75 ± 25% -34.7% 211.50 ± 9% interrupts.CPU86.RES:Rescheduling_interrupts
308.00 ± 17% -26.2% 227.25 ± 31% interrupts.CPU87.RES:Rescheduling_interrupts
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
Attachments: "config-4.16.0-rc2-00007-g2c83362" (text/plain, 165857 bytes), "job-script" (text/plain, 7081 bytes), "job.yaml" (text/plain, 4538 bytes), "reproduce" (text/plain, 16157 bytes)