Message-ID: <20160531081515.GC11635@yexl-desktop>
Date: Tue, 31 May 2016 16:15:15 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Al Viro <viro@...iv.linux.org.uk>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>, lkp@...org
Subject: [lkp] [dcache_{readdir,dir_lseek}() users] 4e82901cd6: reaim.jobs_per_min -49.1% regression
FYI, we noticed a -49.1% regression of reaim.jobs_per_min due to commit:
commit 4e82901cd6d1af21ae232ae835c36d8230c809e8 ("dcache_{readdir,dir_lseek}() users: switch to ->iterate_shared")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: reaim
on test machine: lkp-hsx04: 144 threads Brickland Haswell-EX with 512G memory
with the following parameters: cpufreq_governor=performance / iterations=4 / nr_task=1600% / test=fserver
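For context, the commit switches the in-tree users of the libfs helpers dcache_readdir()/dcache_dir_lseek() from the ->iterate directory hook to ->iterate_shared, which the VFS calls with the directory's i_rwsem held shared (read) rather than exclusive, so readdir() calls on the same directory may now run concurrently. A minimal sketch of what such a switch looks like in a filesystem's directory file_operations (the struct name here is made up for illustration; see fs/libfs.c for the real definitions and the commit itself for the actual diff):

	/*
	 * Illustrative sketch only -- not the exact diff from 4e82901cd6.
	 * With ->iterate, readdir ran under an exclusive inode lock; with
	 * ->iterate_shared it runs under a shared i_rwsem, so the method
	 * must tolerate concurrent readers of the same directory.
	 */
	#include <linux/fs.h>

	static const struct file_operations example_dir_operations = {
		.open		= dcache_dir_open,
		.release	= dcache_dir_close,
		.llseek		= dcache_dir_lseek,
		.read		= generic_read_dir,
		.iterate_shared	= dcache_readdir,	/* was: .iterate = dcache_readdir */
		.fsync		= noop_fsync,
	};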
Details are as below (in the tables, each row gives the mean and %stddev on the parent commit, the %change, and the mean and %stddev on the commit under test; e.g. reaim.jobs_per_min drops from 189504 to 96506, i.e. (96506 - 189504) / 189504 ≈ -49.1%):
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/iterations/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/4/x86_64-rhel/1600%/debian-x86_64-2015-02-07.cgz/lkp-hsx04/fserver/reaim
commit:
3125d2650cae97d8f313ab696cd0ed66916e767a
4e82901cd6d1af21ae232ae835c36d8230c809e8
3125d2650cae97d8 4e82901cd6d1af21ae232ae835
---------------- --------------------------
%stddev %change %stddev
\ | \
189504 ± 5% -49.1% 96506 ± 1% reaim.jobs_per_min
82.25 ± 5% -49.1% 41.89 ± 1% reaim.jobs_per_min_child
486.97 ± 6% +2540.6% 12859 ± 1% reaim.child_systime
377.56 ± 0% +14.0% 430.41 ± 0% reaim.child_utime
66.25 ± 0% +32.4% 87.69 ± 1% reaim.jti
191210 ± 5% -48.4% 98588 ± 2% reaim.max_jobs_per_min
73.90 ± 5% +95.9% 144.75 ± 1% reaim.parent_time
33.26 ± 1% -64.8% 11.71 ± 10% reaim.std_dev_percent
18.74 ± 4% -17.6% 15.44 ± 10% reaim.std_dev_time
304.45 ± 5% +93.1% 587.85 ± 1% reaim.time.elapsed_time
304.45 ± 5% +93.1% 587.85 ± 1% reaim.time.elapsed_time.max
1049052 ± 0% +44.3% 1513524 ± 1% reaim.time.involuntary_context_switches
1.766e+08 ± 0% +1.1% 1.784e+08 ± 0% reaim.time.minor_page_faults
1137 ± 1% +695.0% 9043 ± 1% reaim.time.percent_of_cpu_this_job_got
1950 ± 6% +2537.6% 51439 ± 1% reaim.time.system_time
1510 ± 0% +14.0% 1721 ± 0% reaim.time.user_time
7816334 ± 2% -26.5% 5744712 ± 3% reaim.time.voluntary_context_switches
358.57 ± 4% +78.9% 641.41 ± 1% uptime.boot
47094 ± 4% -55.0% 21202 ± 5% uptime.idle
632750 ± 7% +583.2% 4323189 ± 6% softirqs.RCU
573650 ± 0% +79.8% 1031264 ± 0% softirqs.SCHED
3249858 ± 1% +813.2% 29677185 ± 1% softirqs.TIMER
941405 ± 0% +33.9% 1260181 ± 1% vmstat.memory.cache
1.50 ± 33% +25466.7% 383.50 ± 12% vmstat.procs.b
11.75 ± 9% +1551.1% 194.00 ± 3% vmstat.procs.r
66096 ± 6% -59.7% 26611 ± 2% vmstat.system.cs
13790 ± 2% +679.2% 107457 ± 2% vmstat.system.in
304.45 ± 5% +93.1% 587.85 ± 1% time.elapsed_time
304.45 ± 5% +93.1% 587.85 ± 1% time.elapsed_time.max
1049052 ± 0% +44.3% 1513524 ± 1% time.involuntary_context_switches
1137 ± 1% +695.0% 9043 ± 1% time.percent_of_cpu_this_job_got
1950 ± 6% +2537.6% 51439 ± 1% time.system_time
1510 ± 0% +14.0% 1721 ± 0% time.user_time
7816334 ± 2% -26.5% 5744712 ± 3% time.voluntary_context_switches
5.521e+08 ± 2% +128.2% 1.26e+09 ± 9% cpuidle.C1-HSW.time
1119526 ± 12% +39.4% 1560698 ± 8% cpuidle.C1-HSW.usage
1.473e+09 ± 3% -26.0% 1.09e+09 ± 13% cpuidle.C1E-HSW.time
3341952 ± 0% -81.1% 631688 ± 8% cpuidle.C1E-HSW.usage
3.215e+08 ± 9% +626.5% 2.336e+09 ± 9% cpuidle.C3-HSW.time
400710 ± 3% +123.0% 893484 ± 5% cpuidle.C3-HSW.usage
3.797e+10 ± 5% -38.3% 2.342e+10 ± 3% cpuidle.C6-HSW.time
4992473 ± 0% -38.2% 3086799 ± 4% cpuidle.C6-HSW.usage
86345226 ± 17% +246.7% 2.993e+08 ± 9% cpuidle.POLL.time
680.00 ± 6% +441.2% 3680 ± 11% cpuidle.POLL.usage
8.28 ± 1% +707.7% 66.86 ± 1% turbostat.%Busy
241.50 ± 1% +700.9% 1934 ± 1% turbostat.Avg_MHz
19.38 ± 2% +25.8% 24.39 ± 1% turbostat.CPU%c1
0.90 ± 8% +86.7% 1.68 ± 7% turbostat.CPU%c3
71.44 ± 0% -90.1% 7.07 ± 4% turbostat.CPU%c6
40.50 ± 2% +30.2% 52.75 ± 3% turbostat.CoreTmp
6.20 ± 3% -87.9% 0.75 ± 13% turbostat.Pkg%pc2
44.75 ± 1% +25.7% 56.25 ± 1% turbostat.PkgTmp
264.66 ± 0% +83.6% 485.92 ± 0% turbostat.PkgWatt
235.00 ± 0% +2.3% 240.47 ± 0% turbostat.RAMWatt
679219 ± 0% +30.0% 883283 ± 1% meminfo.Active
576849 ± 0% +35.3% 780525 ± 2% meminfo.Active(anon)
562337 ± 0% +17.2% 659238 ± 1% meminfo.AnonPages
510249 ± 0% +19.7% 610864 ± 1% meminfo.Cached
2279043 ± 0% +22.0% 2780860 ± 3% meminfo.Committed_AS
10600 ± 0% +16.4% 12336 ± 3% meminfo.Inactive(anon)
57088 ± 0% +256.1% 203274 ± 5% meminfo.KernelStack
18972 ± 0% +9.3% 20734 ± 4% meminfo.Mapped
109131 ± 5% +30.5% 142400 ± 3% meminfo.PageTables
84481 ± 0% +25.7% 106227 ± 1% meminfo.SReclaimable
346606 ± 0% +57.4% 545507 ± 3% meminfo.SUnreclaim
25419 ± 1% +395.4% 125929 ± 9% meminfo.Shmem
431088 ± 0% +51.2% 651735 ± 2% meminfo.Slab
25.10 ± 4% -88.1% 2.98 ± 13% perf-profile.cycles-pp.call_cpuidle
26.12 ± 4% -88.4% 3.02 ± 13% perf-profile.cycles-pp.cpu_startup_entry
25.10 ± 4% -88.1% 2.98 ± 13% perf-profile.cycles-pp.cpuidle_enter
23.90 ± 4% -87.7% 2.95 ± 13% perf-profile.cycles-pp.cpuidle_enter_state
23.59 ± 3% -87.8% 2.87 ± 13% perf-profile.cycles-pp.intel_idle
1.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.reschedule_interrupt
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.rest_init
1.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.scheduler_ipi
1.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_reschedule_interrupt
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.start_kernel
24.80 ± 4% -88.5% 2.84 ± 11% perf-profile.cycles-pp.start_secondary
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.x86_64_start_kernel
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.x86_64_start_reservations
144401 ± 0% +35.4% 195475 ± 1% proc-vmstat.nr_active_anon
140772 ± 0% +17.3% 165063 ± 1% proc-vmstat.nr_anon_pages
127559 ± 0% +19.8% 152794 ± 1% proc-vmstat.nr_file_pages
2649 ± 0% +16.2% 3079 ± 3% proc-vmstat.nr_inactive_anon
0.00 ± 0% +Inf% 427.25 ± 5% proc-vmstat.nr_isolated_anon
3574 ± 0% +254.6% 12675 ± 4% proc-vmstat.nr_kernel_stack
27385 ± 6% +30.0% 35613 ± 3% proc-vmstat.nr_page_table_pages
6352 ± 1% +396.9% 31560 ± 9% proc-vmstat.nr_shmem
21119 ± 0% +25.8% 26563 ± 1% proc-vmstat.nr_slab_reclaimable
86645 ± 0% +57.2% 136181 ± 3% proc-vmstat.nr_slab_unreclaimable
18704 ± 91% +6342.3% 1204981 ± 1% proc-vmstat.numa_hint_faults
7213 ±107% +7548.2% 551723 ± 2% proc-vmstat.numa_hint_faults_local
41554 ± 1% +13.0% 46949 ± 3% proc-vmstat.numa_other
34.75 ± 25% +9.5e+05% 330841 ± 4% proc-vmstat.numa_pages_migrated
75.00 ± 0% +2.4e+06% 1806450 ± 1% proc-vmstat.numa_pte_updates
6980 ± 3% +286.5% 26983 ± 14% proc-vmstat.pgactivate
9.75 ± 46% +29151.3% 2852 ± 6% proc-vmstat.pgmigrate_fail
34.75 ± 25% +9.5e+05% 330841 ± 4% proc-vmstat.pgmigrate_success
30962 ± 2% +49.7% 46340 ± 22% numa-vmstat.node0.nr_file_pages
1336 ± 26% +176.5% 3696 ± 17% numa-vmstat.node0.nr_kernel_stack
655.25 ±138% +2346.8% 16032 ± 64% numa-vmstat.node0.nr_shmem
4444 ± 4% +60.6% 7138 ± 4% numa-vmstat.node0.nr_slab_reclaimable
24105 ± 13% +52.7% 36818 ± 14% numa-vmstat.node0.nr_slab_unreclaimable
37828 ± 8% +20.4% 45545 ± 10% numa-vmstat.node1.nr_active_anon
0.00 ± 0% +Inf% 104.75 ± 4% numa-vmstat.node1.nr_isolated_anon
777.00 ± 1% +263.0% 2820 ± 7% numa-vmstat.node1.nr_kernel_stack
21035 ± 3% +49.4% 31432 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
19261584 ± 3% -8.3% 17664368 ± 6% numa-vmstat.node1.numa_local
6445 ± 5% -6.9% 6000 ± 2% numa-vmstat.node2.nr_active_file
731.75 ± 54% +274.9% 2743 ± 11% numa-vmstat.node2.nr_kernel_stack
5668 ± 9% +21.7% 6898 ± 6% numa-vmstat.node2.nr_slab_reclaimable
20284 ± 17% +52.0% 30833 ± 11% numa-vmstat.node2.nr_slab_unreclaimable
32582 ± 11% +30.8% 42629 ± 7% numa-vmstat.node3.nr_active_anon
32573 ± 11% +28.2% 41746 ± 7% numa-vmstat.node3.nr_anon_pages
0.00 ± 0% +Inf% 107.25 ± 10% numa-vmstat.node3.nr_isolated_anon
722.75 ± 31% +373.0% 3418 ± 18% numa-vmstat.node3.nr_kernel_stack
802.50 ± 3% +56.5% 1255 ± 51% numa-vmstat.node3.nr_mapped
269.25 ± 31% +265.0% 982.75 ± 84% numa-vmstat.node3.nr_shmem
4816 ± 17% +34.6% 6484 ± 8% numa-vmstat.node3.nr_slab_reclaimable
21220 ± 11% +75.8% 37314 ± 12% numa-vmstat.node3.nr_slab_unreclaimable
123852 ± 2% +49.7% 185420 ± 22% numa-meminfo.node0.FilePages
21489 ± 26% +174.9% 59065 ± 17% numa-meminfo.node0.KernelStack
861898 ± 3% +30.0% 1120062 ± 9% numa-meminfo.node0.MemUsed
17776 ± 4% +60.6% 28540 ± 4% numa-meminfo.node0.SReclaimable
96387 ± 13% +52.6% 147124 ± 14% numa-meminfo.node0.SUnreclaim
2622 ±138% +2347.4% 64189 ± 64% numa-meminfo.node0.Shmem
114164 ± 11% +53.9% 175665 ± 12% numa-meminfo.node0.Slab
175655 ± 6% +19.3% 209563 ± 9% numa-meminfo.node1.Active
151199 ± 8% +20.9% 182816 ± 10% numa-meminfo.node1.Active(anon)
12414 ± 2% +263.7% 45153 ± 6% numa-meminfo.node1.KernelStack
84163 ± 3% +49.4% 125761 ± 3% numa-meminfo.node1.SUnreclaim
108941 ± 6% +37.7% 149995 ± 3% numa-meminfo.node1.Slab
25784 ± 5% -6.9% 24004 ± 2% numa-meminfo.node2.Active(file)
11770 ± 54% +272.3% 43819 ± 10% numa-meminfo.node2.KernelStack
22674 ± 9% +21.7% 27592 ± 6% numa-meminfo.node2.SReclaimable
81132 ± 17% +51.8% 123123 ± 11% numa-meminfo.node2.SUnreclaim
103806 ± 14% +45.2% 150716 ± 9% numa-meminfo.node2.Slab
155823 ± 9% +26.8% 197624 ± 6% numa-meminfo.node3.Active
130231 ± 11% +31.2% 170866 ± 7% numa-meminfo.node3.Active(anon)
130132 ± 11% +28.6% 167346 ± 7% numa-meminfo.node3.AnonPages
11486 ± 32% +376.1% 54692 ± 18% numa-meminfo.node3.KernelStack
3220 ± 3% +56.7% 5046 ± 51% numa-meminfo.node3.Mapped
731454 ± 4% +23.8% 905502 ± 8% numa-meminfo.node3.MemUsed
19265 ± 17% +34.6% 25932 ± 8% numa-meminfo.node3.SReclaimable
84857 ± 11% +75.8% 149168 ± 11% numa-meminfo.node3.SUnreclaim
1072 ± 31% +267.3% 3939 ± 84% numa-meminfo.node3.Shmem
104123 ± 11% +68.2% 175101 ± 10% numa-meminfo.node3.Slab
2655 ± 0% +12.3% 2980 ± 1% slabinfo.anon_vma_chain.active_slabs
169938 ± 0% +12.3% 190786 ± 1% slabinfo.anon_vma_chain.num_objs
2655 ± 0% +12.3% 2980 ± 1% slabinfo.anon_vma_chain.num_slabs
57371 ± 2% +51.4% 86867 ± 2% slabinfo.cred_jar.active_objs
1375 ± 2% +59.0% 2187 ± 3% slabinfo.cred_jar.active_slabs
57785 ± 2% +59.0% 91883 ± 3% slabinfo.cred_jar.num_objs
1375 ± 2% +59.0% 2187 ± 3% slabinfo.cred_jar.num_slabs
105760 ± 0% +70.2% 180007 ± 2% slabinfo.dentry.active_objs
2611 ± 0% +65.9% 4334 ± 2% slabinfo.dentry.active_slabs
109710 ± 0% +65.9% 182048 ± 2% slabinfo.dentry.num_objs
2611 ± 0% +65.9% 4334 ± 2% slabinfo.dentry.num_slabs
12542 ± 0% +19.8% 15025 ± 2% slabinfo.files_cache.active_objs
12542 ± 0% +19.8% 15025 ± 2% slabinfo.files_cache.num_objs
12325 ± 0% +13.1% 13944 ± 3% slabinfo.kmalloc-128.active_objs
12368 ± 0% +12.8% 13949 ± 3% slabinfo.kmalloc-128.num_objs
48607 ± 1% +599.8% 340166 ± 7% slabinfo.kmalloc-256.active_objs
1406 ± 0% +296.1% 5569 ± 7% slabinfo.kmalloc-256.active_slabs
90025 ± 0% +295.9% 356448 ± 7% slabinfo.kmalloc-256.num_objs
1406 ± 0% +296.1% 5569 ± 7% slabinfo.kmalloc-256.num_slabs
19374 ± 4% +32.1% 25598 ± 11% slabinfo.kmalloc-512.active_objs
303.50 ± 4% +31.9% 400.25 ± 11% slabinfo.kmalloc-512.active_slabs
19466 ± 4% +31.7% 25645 ± 11% slabinfo.kmalloc-512.num_objs
303.50 ± 4% +31.9% 400.25 ± 11% slabinfo.kmalloc-512.num_slabs
54555 ± 0% +11.0% 60529 ± 1% slabinfo.kmalloc-64.active_objs
852.00 ± 0% +10.9% 945.25 ± 1% slabinfo.kmalloc-64.active_slabs
54569 ± 0% +10.9% 60543 ± 1% slabinfo.kmalloc-64.num_objs
852.00 ± 0% +10.9% 945.25 ± 1% slabinfo.kmalloc-64.num_slabs
8353 ± 2% +16.1% 9697 ± 1% slabinfo.mm_struct.active_objs
8767 ± 2% +12.5% 9867 ± 0% slabinfo.mm_struct.num_objs
18102 ± 1% +321.6% 76312 ± 2% slabinfo.pid.active_objs
282.25 ± 1% +324.1% 1197 ± 2% slabinfo.pid.active_slabs
18102 ± 1% +323.3% 76624 ± 2% slabinfo.pid.num_objs
282.25 ± 1% +324.1% 1197 ± 2% slabinfo.pid.num_slabs
15888 ± 0% +88.2% 29907 ± 5% slabinfo.radix_tree_node.active_objs
283.00 ± 0% +88.5% 533.50 ± 5% slabinfo.radix_tree_node.active_slabs
15888 ± 0% +88.3% 29912 ± 5% slabinfo.radix_tree_node.num_objs
283.00 ± 0% +88.5% 533.50 ± 5% slabinfo.radix_tree_node.num_slabs
37875 ± 1% +18.7% 44946 ± 2% slabinfo.shmem_inode_cache.active_objs
782.75 ± 1% +19.0% 931.25 ± 2% slabinfo.shmem_inode_cache.active_slabs
38374 ± 1% +19.0% 45658 ± 2% slabinfo.shmem_inode_cache.num_objs
782.75 ± 1% +19.0% 931.25 ± 2% slabinfo.shmem_inode_cache.num_slabs
9093 ± 2% +19.5% 10863 ± 0% slabinfo.sighand_cache.active_objs
9517 ± 1% +15.7% 11011 ± 0% slabinfo.sighand_cache.num_objs
15657 ± 4% +22.7% 19216 ± 2% slabinfo.signal_cache.active_objs
532.75 ± 3% +24.5% 663.25 ± 2% slabinfo.signal_cache.active_slabs
15994 ± 3% +24.4% 19903 ± 2% slabinfo.signal_cache.num_objs
532.75 ± 3% +24.5% 663.25 ± 2% slabinfo.signal_cache.num_slabs
4059 ± 0% +220.4% 13004 ± 4% slabinfo.task_struct.active_objs
1416 ± 0% +210.5% 4397 ± 4% slabinfo.task_struct.active_slabs
4249 ± 0% +210.5% 13193 ± 4% slabinfo.task_struct.num_objs
1416 ± 0% +210.5% 4397 ± 4% slabinfo.task_struct.num_slabs
9689 ± 1% -14.4% 8298 ± 0% slabinfo.tw_sock_TCP.active_objs
9689 ± 1% -14.4% 8298 ± 0% slabinfo.tw_sock_TCP.num_objs
118717 ± 0% +8.7% 129087 ± 2% slabinfo.vm_area_struct.active_objs
2834 ± 0% +11.0% 3147 ± 1% slabinfo.vm_area_struct.active_slabs
124715 ± 0% +11.0% 138490 ± 1% slabinfo.vm_area_struct.num_objs
2834 ± 0% +11.0% 3147 ± 1% slabinfo.vm_area_struct.num_slabs
826.91 ±172% +6.4e+05% 5282284 ± 23% sched_debug.cfs_rq:/.MIN_vruntime.avg
119075 ±172% +29123.4% 34797824 ± 5% sched_debug.cfs_rq:/.MIN_vruntime.max
9888 ±172% +1.1e+05% 10423763 ± 12% sched_debug.cfs_rq:/.MIN_vruntime.stddev
3.46 ± 9% +6452.2% 226.68 ± 4% sched_debug.cfs_rq:/.load.avg
76.63 ± 15% +13122.9% 10133 ± 58% sched_debug.cfs_rq:/.load.max
13.44 ± 12% +7589.8% 1033 ± 43% sched_debug.cfs_rq:/.load.stddev
10.02 ± 19% +2131.5% 223.68 ± 4% sched_debug.cfs_rq:/.load_avg.avg
192.42 ± 15% +4769.4% 9369 ± 59% sched_debug.cfs_rq:/.load_avg.max
31.87 ± 17% +2895.1% 954.48 ± 42% sched_debug.cfs_rq:/.load_avg.stddev
826.91 ±172% +6.4e+05% 5282285 ± 23% sched_debug.cfs_rq:/.max_vruntime.avg
119075 ±172% +29123.4% 34797824 ± 5% sched_debug.cfs_rq:/.max_vruntime.max
9888 ±172% +1.1e+05% 10423763 ± 12% sched_debug.cfs_rq:/.max_vruntime.stddev
867397 ± 4% +3259.7% 29141996 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg
1522256 ± 3% +2296.5% 36481296 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
317110 ± 9% +6979.6% 22450187 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
466641 ± 2% +598.1% 3257731 ± 10% sched_debug.cfs_rq:/.min_vruntime.stddev
0.07 ± 12% +926.9% 0.72 ± 4% sched_debug.cfs_rq:/.nr_running.avg
1.00 ± 0% +120.5% 2.20 ± 16% sched_debug.cfs_rq:/.nr_running.max
0.24 ± 5% +148.7% 0.60 ± 8% sched_debug.cfs_rq:/.nr_running.stddev
1.57 ± 4% +12191.7% 193.17 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.avg
44.80 ± 18% +17097.4% 7704 ± 50% sched_debug.cfs_rq:/.runnable_load_avg.max
6.83 ± 10% +11845.7% 816.26 ± 34% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-561803 ±-17% +909.6% -5671743 ±-17% sched_debug.cfs_rq:/.spread0.avg
93073 ± 48% +1693.9% 1669627 ± 33% sched_debug.cfs_rq:/.spread0.max
-1112123 ± -9% +1011.8% -12365037 ±-14% sched_debug.cfs_rq:/.spread0.min
466665 ± 2% +598.4% 3259347 ± 10% sched_debug.cfs_rq:/.spread0.stddev
72.03 ± 8% +681.0% 562.57 ± 4% sched_debug.cfs_rq:/.util_avg.avg
845.49 ± 7% +19.8% 1012 ± 1% sched_debug.cfs_rq:/.util_avg.max
162.56 ± 7% +141.2% 392.17 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
1041175 ± 4% +74.3% 1814551 ± 41% sched_debug.cpu.avg_idle.max
190602 ± 7% +74.9% 333448 ± 4% sched_debug.cpu.clock.avg
190622 ± 7% +74.9% 333489 ± 4% sched_debug.cpu.clock.max
190578 ± 7% +74.9% 333385 ± 4% sched_debug.cpu.clock.min
12.27 ± 11% +123.7% 27.45 ± 17% sched_debug.cpu.clock.stddev
190602 ± 7% +74.9% 333448 ± 4% sched_debug.cpu.clock_task.avg
190622 ± 7% +74.9% 333489 ± 4% sched_debug.cpu.clock_task.max
190578 ± 7% +74.9% 333385 ± 4% sched_debug.cpu.clock_task.min
12.27 ± 11% +123.7% 27.45 ± 17% sched_debug.cpu.clock_task.stddev
2.48 ± 30% +7940.8% 199.54 ± 8% sched_debug.cpu.cpu_load[0].avg
151.10 ± 73% +5500.9% 8462 ± 53% sched_debug.cpu.cpu_load[0].max
16.01 ± 55% +5396.2% 879.71 ± 37% sched_debug.cpu.cpu_load[0].stddev
2.30 ± 25% +8559.3% 198.83 ± 8% sched_debug.cpu.cpu_load[1].avg
103.62 ± 61% +7965.2% 8357 ± 53% sched_debug.cpu.cpu_load[1].max
12.13 ± 42% +7072.1% 869.95 ± 37% sched_debug.cpu.cpu_load[1].stddev
2.17 ± 20% +8999.8% 197.88 ± 9% sched_debug.cpu.cpu_load[2].avg
79.29 ± 49% +10256.5% 8211 ± 52% sched_debug.cpu.cpu_load[2].max
10.29 ± 32% +8233.9% 857.48 ± 36% sched_debug.cpu.cpu_load[2].stddev
2.09 ± 16% +9348.3% 197.64 ± 9% sched_debug.cpu.cpu_load[3].avg
69.15 ± 34% +11736.4% 8184 ± 52% sched_debug.cpu.cpu_load[3].max
9.19 ± 22% +9151.1% 850.06 ± 36% sched_debug.cpu.cpu_load[3].stddev
2.05 ± 14% +9635.5% 199.62 ± 8% sched_debug.cpu.cpu_load[4].avg
57.27 ± 25% +14613.6% 8427 ± 54% sched_debug.cpu.cpu_load[4].max
8.27 ± 17% +10402.4% 868.48 ± 38% sched_debug.cpu.cpu_load[4].stddev
3419 ± 20% +1048.9% 39291 ± 7% sched_debug.cpu.curr->pid.avg
13248 ± 10% +120.8% 29253 ± 3% sched_debug.cpu.curr->pid.stddev
4.14 ± 23% +5375.2% 226.50 ± 4% sched_debug.cpu.load.avg
166.82 ± 60% +5974.4% 10133 ± 58% sched_debug.cpu.load.max
20.66 ± 42% +4903.2% 1033 ± 43% sched_debug.cpu.load.stddev
519931 ± 4% +88.8% 981576 ± 44% sched_debug.cpu.max_idle_balance_cost.max
1655 ±112% +3127.4% 53420 ± 62% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 6% +257.7% 0.00 ± 14% sched_debug.cpu.next_balance.stddev
57551 ± 8% +292.1% 225655 ± 4% sched_debug.cpu.nr_load_updates.avg
74049 ± 6% +245.3% 255674 ± 4% sched_debug.cpu.nr_load_updates.max
42700 ± 6% +360.8% 196777 ± 2% sched_debug.cpu.nr_load_updates.min
10498 ± 5% +45.5% 15274 ± 7% sched_debug.cpu.nr_load_updates.stddev
0.07 ± 16% +1759.2% 1.37 ± 27% sched_debug.cpu.nr_running.avg
1.28 ± 25% +927.9% 13.19 ± 25% sched_debug.cpu.nr_running.max
0.26 ± 10% +629.1% 1.88 ± 24% sched_debug.cpu.nr_running.stddev
61209 ± 4% -22.7% 47333 ± 4% sched_debug.cpu.nr_switches.avg
33110 ± 3% -55.3% 14805 ± 6% sched_debug.cpu.nr_switches.stddev
9.09 ± 6% +36.1% 12.36 ± 4% sched_debug.cpu.nr_uninterruptible.avg
534.05 ± 7% +189.9% 1548 ± 8% sched_debug.cpu.nr_uninterruptible.max
-503.79 ± -4% +246.5% -1745 ± -5% sched_debug.cpu.nr_uninterruptible.min
392.51 ± 9% +124.5% 881.29 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
190581 ± 7% +74.9% 333386 ± 4% sched_debug.cpu_clk
188297 ± 7% +75.8% 331102 ± 4% sched_debug.ktime
0.12 ± 7% -34.3% 0.08 ± 7% sched_debug.rt_rq:/.rt_time.avg
9.29 ± 7% -45.7% 5.04 ± 21% sched_debug.rt_rq:/.rt_time.max
0.93 ± 7% -41.8% 0.54 ± 16% sched_debug.rt_rq:/.rt_time.stddev
190581 ± 7% +74.9% 333386 ± 4% sched_debug.sched_clk
reaim.child_systime
14000 ++------------------------------------------------------------------+
| O O O OO O O
12000 ++ |
| O |
10000 O+O O O O O |
| OO O O OO O O OO O O O OO O O O OO O O O |
8000 ++ |
| |
6000 ++ |
| |
4000 ++ |
| |
2000 ++ |
*.*.**.*.*.*.**.*.*.**.*.*.*.**.*.*.*.**.*.*.*.**.*.*.**. .*. .* |
0 ++-----------------------O-------------------------------*-O-*--*---+
reaim.child_utime
1000 O+--OO---O-O---------------------------------------------------------+
900 ++O O |
| |
800 ++ |
700 ++ |
| |
600 ++ |
500 ++ |
400 ++ O OO O O O O O O O O OO O O O OO O O O OO O O O OO O O
*.*.**.*.*.*.*.**.*.*.*.**.*.*.*.*.**.*.*.*.**.*.*.*.**.*.*.*.*.** |
300 ++ |
200 ++ |
| |
100 ++ |
0 ++-----------------------O----------------------------------O--------+
reaim.std_dev_percent
40 ++*---*--*---*-*---*--*-*-*---*---*----*---*---------------------------+
* * * * * * * * ** * *. .*.*. |
35 ++ *.**.* *.*.*.**.* |
30 ++ |
| |
25 ++ |
| |
20 ++ |
| |
15 ++ O |
10 ++ O O O O O O O OO O O
O O OO O O O O O O O O O O O O O O OO O O O O |
5 ++O |
| |
0 ++------------------------O-----------------------------------O--------+
reaim.jti
100 ++--------------------------------------------------------------------+
90 O+O O OO O O OO O O OO O O O O O O O O OO O O |
| O O O O O O O O OO O O
80 ++ |
70 ++ |
*.*.*. *.*.*. .* .*. .*.**.*. .*.*.*.* .*.*.*.*.**.*.*.*.*.**.* |
60 ++ * *.* * *.* * * |
50 ++ |
40 ++ |
| |
30 ++ |
20 ++ |
| |
10 ++ |
0 ++------------------------O----------------------------------O--------+
reaim.time.user_time
4000 O+--OO---O-O---------------------------------------------------------+
| O O |
3500 ++ |
3000 ++ |
| |
2500 ++ |
| |
2000 ++ |
| O OO O O O O O O O O OO O O O OO O O O OO O O O OO O O
1500 *+*.**.*.*.*.*.**.*.*.*.**.*.*.*.*.**.*.*.*.**.*.*.*.**.*.*.*.*.** |
1000 ++ |
| |
500 ++ |
| |
0 ++-----------------------O----------------------------------O--------+
reaim.time.system_time
60000 ++------------------------------------------------------------------+
| |
50000 ++ O O O OO O O
| |
O O O |
40000 ++O O O O O O O O O O |
| O O O O O OO O O O O O O O O O O |
30000 ++ |
| |
20000 ++ |
| |
| |
10000 ++ |
*.*.**.*.*.*.**.*.*.**.*.*.*.**.*.*.*.**.*.*.*.**.*.*.* |
0 ++-----------------------O-----------------------------*-*-O-*-**---+
reaim.time.percent_of_cpu_this_job_got
10000 ++--O----O-O--------------------------------------------------------+
9000 O+O O O O O O O O O
| O |
8000 ++ O O O |
7000 ++ O O O OO O O OO O O OO O O OO O O O |
| |
6000 ++ |
5000 ++ |
4000 ++ |
| |
3000 ++ |
2000 ++ |
*.*.**.*.*.*.**.*.*.**.*.*.*.**.*.*.*.**.*.*. .*.**.*.*.*.** |
1000 ++ *.**.* |
0 ++-----------------------O---------------------------------O--------+
reaim.time.involuntary_context_switches
1.6e+06 ++----------------------------------------------------------------+
| O O O O OO O
1.4e+06 O+OO O O O OO O O O OO O O O O OO O O OO O |
| OO O O O O |
1.2e+06 ++ .*. .**.*.*. *. .*.**. *. .*. *.*.*.* |
1e+06 *+** * * * *.*.* *.**.* * *.*.*.**.*.*.* |
| |
800000 ++ |
| |
600000 ++ |
400000 ++ |
| |
200000 ++ |
| |
0 ++----------------------O--------------------------------O--------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
View attachment "config-4.6.0-rc3-00035-g4e82901" of type "text/plain" (146965 bytes)
View attachment "job.yaml" of type "text/plain" (3660 bytes)
View attachment "reproduce" of type "text/plain" (13701 bytes)