Message-ID: <87h9dchzor.fsf@yhuang-dev.intel.com>
Date: Thu, 02 Jun 2016 14:28:36 +0800
From: "Huang\, Ying" <ying.huang@...el.com>
To: "Huang\, Ying" <ying.huang@...el.com>
Cc: Al Viro <viro@...IV.linux.org.uk>,
Linus Torvalds <torvalds@...ux-foundation.org>, <lkp@...org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [LKP] [lkp] [dcache_{readdir, dir_lseek}() users] 4e82901cd6: reaim.jobs_per_min -49.1% regression
"Huang, Ying" <ying.huang@...el.com> writes:
> Al Viro <viro@...IV.linux.org.uk> writes:
>
>> On Tue, May 31, 2016 at 04:15:15PM +0800, kernel test robot wrote:
>>>
>>>
>>> FYI, we noticed reaim.jobs_per_min -49.1% regression due to commit:
>>>
>>> commit 4e82901cd6d1af21ae232ae835c36d8230c809e8 ("dcache_{readdir,dir_lseek}() users: switch to ->iterate_shared")
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>>>
>>> in testcase: reaim
>>> on test machine: lkp-hsx04: 144 threads Brickland Haswell-EX with 512G memory
>>> with following parameters: cpufreq_governor=performance/iterations=4/nr_task=1600%/test=fserver
>>
>> [snip]
>>
>> Is there any way to get the profiles?
>
> Sorry, our perf-profile support is broken after upgrading perf-profile
> recently. We will restore it ASAP and send back to you the perf profile
> results.

Here is the comparison result with perf-profile information. You can
find it by searching for 'perf-profile'.
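
If it helps with reading the profile below: in the parent commit the
contended path is rwsem_down_write_failed via down_write in
iterate_dir/dcache_dir_lseek (callers eventually go to sleep), while
with ->iterate_shared the hot spot moves to _raw_spin_lock in
dcache_readdir and native_queued_spin_lock_slowpath, i.e. contending
CPUs now busy-wait.  A rough userspace sketch of that
sleeping-vs-spinning difference is below -- pthread locks only, not the
kernel's, and the thread/loop counts and the file name contention.c are
arbitrary values made up for the illustration:

/*
 * Rough userspace illustration only (NOT kernel code): contrast a
 * sleeping lock with a busy-waiting spinlock under heavy contention.
 * Thread/iteration counts and the file name are arbitrary.
 *
 *   gcc -O2 -pthread contention.c -o contention
 *   time ./contention 0    # sleeping pthread mutex
 *   time ./contention 1    # busy-waiting pthread spinlock
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 64
#define NITERS   20000
#define LISTLEN  256

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_spinlock_t spin;
static int use_spinlock;                 /* 0: mutex, 1: spinlock */
static volatile long shared_list[LISTLEN];

/* Simulated "readdir": walk a small shared array under the chosen lock,
 * so only one thread makes progress at a time. */
static void *walker(void *arg)
{
	long sum = 0;
	(void)arg;

	for (int i = 0; i < NITERS; i++) {
		if (use_spinlock)
			pthread_spin_lock(&spin);
		else
			pthread_mutex_lock(&mutex);

		for (int j = 0; j < LISTLEN; j++)
			sum += shared_list[j];

		if (use_spinlock)
			pthread_spin_unlock(&spin);
		else
			pthread_mutex_unlock(&mutex);
	}
	return (void *)sum;
}

int main(int argc, char **argv)
{
	pthread_t tid[NTHREADS];

	use_spinlock = (argc > 1 && atoi(argv[1]) != 0);
	pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, walker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	/* With more threads than CPUs, the spinlock run burns much more
	 * CPU time for similar (or worse) wall time, the same shape as
	 * the child_systime / %Busy jump in the comparison below. */
	printf("done (%s)\n", use_spinlock ? "spinlock" : "mutex");
	return 0;
}

This is only an analogy for the lock behaviour; the functions actually
involved are the ones named in the perf-profile entries below.
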
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_task/iterations/test:
lkp-hsx04/reaim/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1600%/4/fserver
commit:
3125d2650cae97d8f313ab696cd0ed66916e767a
4e82901cd6d1af21ae232ae835c36d8230c809e8
3125d2650cae97d8 4e82901cd6d1af21ae232ae835
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
479.79 ± 3% +2515.6% 12549 ± 2% reaim.child_systime
379.45 ± 0% +13.1% 429.26 ± 0% reaim.child_utime
176186 ± 10% -44.8% 97342 ± 1% reaim.jobs_per_min
76.47 ± 10% -44.8% 42.25 ± 1% reaim.jobs_per_min_child
71.15 ± 5% +22.7% 87.33 ± 0% reaim.jti
178407 ± 10% -44.1% 99667 ± 2% reaim.max_jobs_per_min
80.07 ± 9% +79.3% 143.55 ± 1% reaim.parent_time
28.40 ± 14% -57.1% 12.19 ± 6% reaim.std_dev_percent
17.87 ± 0% -10.7% 15.95 ± 4% reaim.std_dev_time
352.92 ± 0% +23.9% 437.31 ± 1% reaim.time.elapsed_time
352.92 ± 0% +23.9% 437.31 ± 1% reaim.time.elapsed_time.max
1.895e+08 ± 10% -30.0% 1.327e+08 ± 0% reaim.time.minor_page_faults
1052 ± 7% +746.1% 8903 ± 1% reaim.time.percent_of_cpu_this_job_got
2073 ± 7% +1715.8% 37650 ± 2% reaim.time.system_time
1644 ± 10% -21.7% 1287 ± 0% reaim.time.user_time
8298349 ± 13% -48.2% 4296696 ± 5% reaim.time.voluntary_context_switches
404.21 ± 1% +20.7% 487.87 ± 2% uptime.boot
53381 ± 1% -67.3% 17448 ± 7% uptime.idle
4916499 ± 10% -30.7% 3408447 ± 0% softirqs.NET_RX
668035 ± 15% +367.2% 3121347 ± 2% softirqs.RCU
629095 ± 3% +29.8% 816830 ± 1% softirqs.SCHED
3444643 ± 3% +542.4% 22127503 ± 2% softirqs.TIMER
941847 ± 0% +30.3% 1227512 ± 0% vmstat.memory.cache
1.00 ± 0% +36366.7% 364.67 ± 19% vmstat.procs.b
11.67 ± 17% +1437.1% 179.33 ± 2% vmstat.procs.r
60855 ± 11% -55.9% 26830 ± 4% vmstat.system.cs
12673 ± 7% +739.8% 106430 ± 1% vmstat.system.in
352.92 ± 0% +23.9% 437.31 ± 1% time.elapsed_time
352.92 ± 0% +23.9% 437.31 ± 1% time.elapsed_time.max
1.895e+08 ± 10% -30.0% 1.327e+08 ± 0% time.minor_page_faults
1052 ± 7% +746.1% 8903 ± 1% time.percent_of_cpu_this_job_got
2073 ± 7% +1715.8% 37650 ± 2% time.system_time
1644 ± 10% -21.7% 1287 ± 0% time.user_time
8298349 ± 13% -48.2% 4296696 ± 5% time.voluntary_context_switches
5.486e+08 ± 17% +80.6% 9.91e+08 ± 12% cpuidle.C1-HSW.time
2.765e+09 ± 22% -67.7% 8.921e+08 ± 11% cpuidle.C1E-HSW.time
3551940 ± 13% -86.8% 470084 ± 11% cpuidle.C1E-HSW.usage
3.63e+08 ± 3% +384.0% 1.757e+09 ± 12% cpuidle.C3-HSW.time
439297 ± 1% +54.1% 677092 ± 8% cpuidle.C3-HSW.usage
4.368e+10 ± 1% -58.1% 1.83e+10 ± 1% cpuidle.C6-HSW.time
5469354 ± 9% -56.8% 2361450 ± 3% cpuidle.C6-HSW.usage
1.179e+08 ± 11% +119.6% 2.589e+08 ± 14% cpuidle.POLL.time
988.00 ± 12% +199.9% 2963 ± 8% cpuidle.POLL.usage
48184543 ± 17% -41.6% 28158682 ± 9% numa-numastat.node0.local_node
48196904 ± 17% -41.5% 28174412 ± 9% numa-numastat.node0.numa_hit
12361 ± 17% +27.3% 15730 ± 4% numa-numastat.node0.other_node
43586002 ± 10% -20.6% 34591781 ± 8% numa-numastat.node1.local_node
43592187 ± 10% -20.6% 34599930 ± 8% numa-numastat.node1.numa_hit
46908094 ± 5% -27.4% 34074424 ± 8% numa-numastat.node2.local_node
46917366 ± 5% -27.3% 34087462 ± 8% numa-numastat.node2.numa_hit
44223596 ± 11% -30.1% 30903064 ± 8% numa-numastat.node3.local_node
44237505 ± 11% -30.1% 30916293 ± 8% numa-numastat.node3.numa_hit
7.67 ± 7% +753.7% 65.45 ± 1% turbostat.%Busy
223.00 ± 7% +749.2% 1893 ± 1% turbostat.Avg_MHz
21.89 ± 5% +14.3% 25.02 ± 2% turbostat.CPU%c1
0.90 ± 6% +94.8% 1.75 ± 12% turbostat.CPU%c3
69.55 ± 0% -88.8% 7.79 ± 0% turbostat.CPU%c6
42.00 ± 1% +30.2% 54.67 ± 3% turbostat.CoreTmp
5.33 ± 10% -84.4% 0.83 ± 9% turbostat.Pkg%pc2
0.60 ± 31% -75.7% 0.15 ± 58% turbostat.Pkg%pc6
46.67 ± 4% +25.7% 58.67 ± 0% turbostat.PkgTmp
262.91 ± 0% +83.7% 483.07 ± 0% turbostat.PkgWatt
233.44 ± 0% +2.2% 238.47 ± 0% turbostat.RAMWatt
699933 ± 2% +22.7% 859108 ± 1% meminfo.Active
596870 ± 2% +26.9% 757230 ± 1% meminfo.Active(anon)
584065 ± 3% +11.8% 652888 ± 1% meminfo.AnonPages
508898 ± 0% +16.6% 593287 ± 0% meminfo.Cached
2321912 ± 2% +18.4% 2749325 ± 4% meminfo.Committed_AS
5969346 ± 0% +17.7% 7024098 ± 0% meminfo.DirectMap2M
10525 ± 0% +13.7% 11967 ± 1% meminfo.Inactive(anon)
57595 ± 1% +248.1% 200467 ± 2% meminfo.KernelStack
18442 ± 2% +12.2% 20693 ± 0% meminfo.Mapped
112198 ± 1% +23.7% 138840 ± 0% meminfo.PageTables
84558 ± 0% +21.2% 102517 ± 1% meminfo.SReclaimable
348470 ± 0% +53.1% 533390 ± 1% meminfo.SUnreclaim
24174 ± 9% +348.7% 108462 ± 2% meminfo.Shmem
433028 ± 0% +46.9% 635908 ± 1% meminfo.Slab
149068 ± 2% +27.1% 189400 ± 1% proc-vmstat.nr_active_anon
145815 ± 3% +11.9% 163138 ± 1% proc-vmstat.nr_anon_pages
127230 ± 0% +16.7% 148491 ± 0% proc-vmstat.nr_file_pages
2634 ± 0% +14.1% 3006 ± 1% proc-vmstat.nr_inactive_anon
0.00 ± 0% +Inf% 462.33 ± 15% proc-vmstat.nr_isolated_anon
3623 ± 1% +244.2% 12473 ± 2% proc-vmstat.nr_kernel_stack
4620 ± 2% +12.5% 5197 ± 1% proc-vmstat.nr_mapped
27939 ± 1% +24.3% 34731 ± 0% proc-vmstat.nr_page_table_pages
6049 ± 9% +351.0% 27285 ± 2% proc-vmstat.nr_shmem
21138 ± 0% +21.4% 25663 ± 1% proc-vmstat.nr_slab_reclaimable
87101 ± 0% +53.1% 133353 ± 1% proc-vmstat.nr_slab_unreclaimable
0.00 ± 0% +Inf% 870680 ± 2% proc-vmstat.numa_hint_faults
0.00 ± 0% +Inf% 406936 ± 3% proc-vmstat.numa_hint_faults_local
1.829e+08 ± 11% -30.2% 1.278e+08 ± 0% proc-vmstat.numa_hit
1.829e+08 ± 11% -30.2% 1.277e+08 ± 0% proc-vmstat.numa_local
41727 ± 0% +20.2% 50144 ± 0% proc-vmstat.numa_other
0.00 ± 0% +Inf% 258503 ± 3% proc-vmstat.numa_pages_migrated
0.00 ± 0% +Inf% 1389875 ± 2% proc-vmstat.numa_pte_updates
6374 ± 13% +254.1% 22575 ± 2% proc-vmstat.pgactivate
766567 ± 17% -39.9% 460866 ± 9% proc-vmstat.pgalloc_dma32
1.862e+08 ± 11% -29.1% 1.32e+08 ± 0% proc-vmstat.pgalloc_normal
1.906e+08 ± 10% -29.7% 1.339e+08 ± 0% proc-vmstat.pgfault
1.869e+08 ± 11% -29.2% 1.323e+08 ± 0% proc-vmstat.pgfree
0.00 ± 0% +Inf% 2076 ± 4% proc-vmstat.pgmigrate_fail
0.00 ± 0% +Inf% 258503 ± 3% proc-vmstat.pgmigrate_success
26527 ± 23% +104.1% 54152 ± 16% numa-meminfo.node0.KernelStack
3090 ± 3% +80.5% 5578 ± 50% numa-meminfo.node0.Mapped
921890 ± 1% +9.2% 1007082 ± 4% numa-meminfo.node0.MemUsed
108135 ± 14% +22.1% 132019 ± 12% numa-meminfo.node0.SUnreclaim
692.67 ± 32% +562.7% 4590 ± 86% numa-meminfo.node0.Shmem
185924 ± 7% +41.3% 262685 ± 19% numa-meminfo.node1.Active
159207 ± 8% +49.7% 238288 ± 21% numa-meminfo.node1.Active(anon)
151747 ± 6% +15.6% 175383 ± 12% numa-meminfo.node1.AnonPages
10565 ± 74% +348.1% 47347 ± 9% numa-meminfo.node1.KernelStack
801350 ± 7% +17.1% 938412 ± 6% numa-meminfo.node1.MemUsed
22990 ± 15% +34.3% 30874 ± 7% numa-meminfo.node1.SReclaimable
82401 ± 25% +58.8% 130874 ± 5% numa-meminfo.node1.SUnreclaim
105391 ± 16% +53.5% 161749 ± 4% numa-meminfo.node1.Slab
12599 ± 48% +352.5% 57016 ± 2% numa-meminfo.node2.KernelStack
773326 ± 2% +19.0% 920096 ± 1% numa-meminfo.node2.MemUsed
28753 ± 26% +60.8% 46232 ± 17% numa-meminfo.node2.PageTables
85146 ± 14% +75.2% 149181 ± 1% numa-meminfo.node2.SUnreclaim
108938 ± 14% +62.1% 176621 ± 3% numa-meminfo.node2.Slab
159543 ± 16% +44.1% 229926 ± 26% numa-meminfo.node3.Active
134722 ± 20% +51.8% 204566 ± 29% numa-meminfo.node3.Active(anon)
7948 ± 42% +427.7% 41941 ± 12% numa-meminfo.node3.KernelStack
5114 ± 56% +50.8% 7714 ± 38% numa-meminfo.node3.Mapped
707514 ± 8% +24.1% 878069 ± 7% numa-meminfo.node3.MemUsed
72837 ± 13% +66.8% 121492 ± 9% numa-meminfo.node3.SUnreclaim
3674 ± 98% +970.8% 39342 ±121% numa-meminfo.node3.Shmem
90421 ± 9% +60.3% 144902 ± 6% numa-meminfo.node3.Slab
1658 ± 23% +104.0% 3384 ± 16% numa-vmstat.node0.nr_kernel_stack
772.00 ± 3% +81.2% 1399 ± 50% numa-vmstat.node0.nr_mapped
172.67 ± 32% +565.3% 1148 ± 86% numa-vmstat.node0.nr_shmem
27026 ± 14% +22.1% 32987 ± 12% numa-vmstat.node0.nr_slab_unreclaimable
22869693 ± 20% -47.9% 11904640 ± 10% numa-vmstat.node0.numa_hit
22823515 ± 20% -48.1% 11856155 ± 10% numa-vmstat.node0.numa_local
39771 ± 8% +50.4% 59823 ± 20% numa-vmstat.node1.nr_active_anon
37907 ± 6% +16.6% 44186 ± 12% numa-vmstat.node1.nr_anon_pages
0.00 ± 0% +Inf% 122.33 ± 8% numa-vmstat.node1.nr_isolated_anon
664.00 ± 75% +340.8% 2927 ± 8% numa-vmstat.node1.nr_kernel_stack
5747 ± 15% +33.9% 7698 ± 7% numa-vmstat.node1.nr_slab_reclaimable
20592 ± 25% +58.6% 32652 ± 5% numa-vmstat.node1.nr_slab_unreclaimable
20571250 ± 12% -26.6% 15099745 ± 9% numa-vmstat.node1.numa_hit
20512586 ± 12% -26.7% 15039547 ± 9% numa-vmstat.node1.numa_local
787.00 ± 48% +351.6% 3554 ± 2% numa-vmstat.node2.nr_kernel_stack
7141 ± 26% +62.1% 11577 ± 15% numa-vmstat.node2.nr_page_table_pages
21277 ± 14% +75.3% 37298 ± 1% numa-vmstat.node2.nr_slab_unreclaimable
21913451 ± 6% -31.2% 15076742 ± 12% numa-vmstat.node2.numa_hit
21818774 ± 6% -31.3% 14979744 ± 12% numa-vmstat.node2.numa_local
33642 ± 20% +52.6% 51332 ± 29% numa-vmstat.node3.nr_active_anon
0.00 ± 0% +Inf% 117.00 ± 18% numa-vmstat.node3.nr_isolated_anon
489.33 ± 43% +431.1% 2599 ± 13% numa-vmstat.node3.nr_kernel_stack
1277 ± 56% +50.7% 1925 ± 38% numa-vmstat.node3.nr_mapped
917.00 ± 98% +969.0% 9803 ±121% numa-vmstat.node3.nr_shmem
18198 ± 13% +66.8% 30362 ± 9% numa-vmstat.node3.nr_slab_unreclaimable
20763416 ± 12% -36.4% 13211498 ± 9% numa-vmstat.node3.numa_hit
20664485 ± 12% -36.5% 13114307 ± 9% numa-vmstat.node3.numa_local
112783 ± 5% -9.2% 102353 ± 0% slabinfo.anon_vma.active_objs
57839 ± 1% +46.3% 84628 ± 3% slabinfo.cred_jar.active_objs
1386 ± 1% +53.5% 2127 ± 3% slabinfo.cred_jar.active_slabs
58232 ± 1% +53.5% 89381 ± 3% slabinfo.cred_jar.num_objs
1386 ± 1% +53.5% 2127 ± 3% slabinfo.cred_jar.num_slabs
106001 ± 0% +66.1% 176110 ± 2% slabinfo.dentry.active_objs
2615 ± 0% +62.0% 4235 ± 2% slabinfo.dentry.active_slabs
109861 ± 0% +62.0% 177924 ± 2% slabinfo.dentry.num_objs
2615 ± 0% +62.0% 4235 ± 2% slabinfo.dentry.num_slabs
47974 ± 2% +580.1% 326281 ± 5% slabinfo.kmalloc-256.active_objs
1400 ± 0% +282.7% 5358 ± 4% slabinfo.kmalloc-256.active_slabs
89624 ± 0% +282.6% 342936 ± 4% slabinfo.kmalloc-256.num_objs
1400 ± 0% +282.7% 5358 ± 4% slabinfo.kmalloc-256.num_slabs
19286 ± 4% +28.4% 24755 ± 4% slabinfo.kmalloc-512.active_objs
19445 ± 4% +27.9% 24875 ± 4% slabinfo.kmalloc-512.num_objs
8541 ± 1% +11.8% 9546 ± 1% slabinfo.mm_struct.active_objs
18599 ± 2% +279.3% 70553 ± 2% slabinfo.pid.active_objs
290.00 ± 2% +281.3% 1105 ± 2% slabinfo.pid.active_slabs
18599 ± 2% +280.7% 70800 ± 2% slabinfo.pid.num_objs
290.00 ± 2% +281.3% 1105 ± 2% slabinfo.pid.num_slabs
27990 ± 1% -13.1% 24330 ± 1% slabinfo.proc_inode_cache.active_objs
28081 ± 0% -13.4% 24330 ± 1% slabinfo.proc_inode_cache.num_objs
15869 ± 0% +72.3% 27341 ± 3% slabinfo.radix_tree_node.active_objs
283.00 ± 0% +72.6% 488.33 ± 3% slabinfo.radix_tree_node.active_slabs
15869 ± 0% +72.6% 27391 ± 3% slabinfo.radix_tree_node.num_objs
283.00 ± 0% +72.6% 488.33 ± 3% slabinfo.radix_tree_node.num_slabs
38727 ± 0% +14.9% 44516 ± 3% slabinfo.shmem_inode_cache.active_objs
799.67 ± 0% +15.0% 919.67 ± 3% slabinfo.shmem_inode_cache.active_slabs
39214 ± 0% +15.0% 45090 ± 3% slabinfo.shmem_inode_cache.num_objs
799.67 ± 0% +15.0% 919.67 ± 3% slabinfo.shmem_inode_cache.num_slabs
8815 ± 2% +20.7% 10642 ± 1% slabinfo.sighand_cache.active_objs
610.67 ± 2% +17.6% 718.33 ± 1% slabinfo.sighand_cache.active_slabs
9167 ± 2% +17.6% 10778 ± 1% slabinfo.sighand_cache.num_objs
610.67 ± 2% +17.6% 718.33 ± 1% slabinfo.sighand_cache.num_slabs
16257 ± 1% +16.1% 18868 ± 2% slabinfo.signal_cache.active_objs
16613 ± 1% +17.4% 19512 ± 2% slabinfo.signal_cache.num_objs
4120 ± 1% +209.5% 12750 ± 2% slabinfo.task_struct.active_objs
1428 ± 0% +201.3% 4304 ± 2% slabinfo.task_struct.active_slabs
4287 ± 0% +201.2% 12913 ± 2% slabinfo.task_struct.num_objs
1428 ± 0% +201.3% 4304 ± 2% slabinfo.task_struct.num_slabs
9460 ± 5% -13.0% 8228 ± 0% slabinfo.tw_sock_TCP.active_objs
9460 ± 5% -13.0% 8228 ± 0% slabinfo.tw_sock_TCP.num_objs
0.00 ± -1% +Inf% 40.39 ± 9% perf-profile.____fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
3.16 ± 18% -100.0% 0.00 ± -1% perf-profile.__do_page_fault.do_page_fault.page_fault
1.15 ± 10% -100.0% 0.00 ± -1% perf-profile.__do_softirq.irq_exit.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt
0.00 ± -1% +Inf% 40.39 ± 9% perf-profile.__fput.____fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath
0.95 ± 12% -100.0% 0.00 ± -1% perf-profile.__libc_fork
1.57 ± 14% -100.0% 0.00 ± -1% perf-profile._do_fork.sys_clone.do_syscall_64.return_from_SYSCALL_64
0.00 ± -1% +Inf% 3.69 ± 8% perf-profile._raw_spin_lock.dcache_dir_close.__fput.____fput.task_work_run
0.00 ± -1% +Inf% 47.51 ± 16% perf-profile._raw_spin_lock.dcache_readdir.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
18.57 ± 13% -95.4% 0.86 ± 8% perf-profile._raw_spin_lock.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 26.77 ± 10% perf-profile._raw_spin_trylock.dput.dcache_dir_close.__fput.____fput
1.50 ± 9% -100.0% 0.00 ± -1% perf-profile.call_cpuidle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
23.62 ± 0% -88.3% 2.77 ± 5% perf-profile.call_cpuidle.cpu_startup_entry.start_secondary
9.82 ± 3% -100.0% 0.00 ± -1% perf-profile.call_rwsem_down_write_failed.down_write.dcache_dir_lseek.sys_lseek.entry_SYSCALL_64_fastpath
20.75 ± 2% -100.0% 0.00 ± -1% perf-profile.call_rwsem_down_write_failed.down_write.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
0.77 ± 36% -100.0% 0.00 ± -1% perf-profile.copy_process.part.30._do_fork.sys_clone.do_syscall_64.return_from_SYSCALL_64
1.51 ± 9% -100.0% 0.00 ± -1% perf-profile.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
24.62 ± 1% -88.6% 2.81 ± 4% perf-profile.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 5.70 ± 70% perf-profile.cpu_stopper_thread.smpboot_thread_fn.kthread.ret_from_fork
1.49 ± 10% -100.0% 0.00 ± -1% perf-profile.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init.start_kernel
23.61 ± 0% -88.3% 2.77 ± 5% perf-profile.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
23.40 ± 0% -88.2% 2.76 ± 4% perf-profile.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 40.38 ± 9% perf-profile.dcache_dir_close.__fput.____fput.task_work_run.exit_to_usermode_loop
9.91 ± 3% -100.0% 0.00 ± -1% perf-profile.dcache_dir_lseek.sys_lseek.entry_SYSCALL_64_fastpath
1.27 ± 6% +3652.1% 47.78 ± 15% perf-profile.dcache_readdir.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
2.84 ± 4% -100.0% 0.00 ± -1% perf-profile.do_execveat_common.isra.38.sys_execve.do_syscall_64.return_from_SYSCALL_64
1.67 ± 12% -100.0% 0.00 ± -1% perf-profile.do_execveat_common.isra.38.sys_execve.do_syscall_64.return_from_SYSCALL_64.execve
1.50 ± 14% -100.0% 0.00 ± -1% perf-profile.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
1.22 ± 4% -100.0% 0.00 ± -1% perf-profile.do_filp_open.do_sys_open.sys_creat.entry_SYSCALL_64_fastpath
1.50 ± 14% -100.0% 0.00 ± -1% perf-profile.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
1.20 ± 29% -100.0% 0.00 ± -1% perf-profile.do_munmap.sys_brk.entry_SYSCALL_64_fastpath
3.19 ± 18% -100.0% 0.00 ± -1% perf-profile.do_page_fault.page_fault
1.33 ± 4% -100.0% 0.00 ± -1% perf-profile.do_sys_open.sys_creat.entry_SYSCALL_64_fastpath
4.59 ± 7% -96.1% 0.18 ±141% perf-profile.do_syscall_64.return_from_SYSCALL_64
1.68 ± 13% -100.0% 0.00 ± -1% perf-profile.do_syscall_64.return_from_SYSCALL_64.execve
9.83 ± 3% -100.0% 0.00 ± -1% perf-profile.down_write.dcache_dir_lseek.sys_lseek.entry_SYSCALL_64_fastpath
20.77 ± 2% -100.0% 0.00 ± -1% perf-profile.down_write.iterate_dir.sys_getdents.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 36.20 ± 9% perf-profile.dput.dcache_dir_close.__fput.____fput.task_work_run
58.93 ± 2% +52.4% 89.80 ± 4% perf-profile.entry_SYSCALL_64_fastpath
1.69 ± 12% -100.0% 0.00 ± -1% perf-profile.execve
1.12 ± 13% -100.0% 0.00 ± -1% perf-profile.exit_mmap.mmput.do_exit.do_group_exit.sys_exit_group
0.00 ± -1% +Inf% 40.41 ± 9% perf-profile.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.90 ± 14% -100.0% 0.00 ± -1% perf-profile.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
2.82 ± 19% -100.0% 0.00 ± -1% perf-profile.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2.45 ± 18% -100.0% 0.00 ± -1% perf-profile.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
21.93 ± 5% -88.0% 2.64 ± 3% perf-profile.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
1.18 ± 11% -100.0% 0.00 ± -1% perf-profile.irq_exit.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt.cpuidle_enter
1.17 ± 12% -100.0% 0.00 ± -1% perf-profile.load_elf_binary.search_binary_handler.do_execveat_common.isra.38.sys_execve.do_syscall_64
0.00 ± -1% +Inf% 3.88 ± 8% perf-profile.lockref_put_return.dput.dcache_dir_close.__fput.____fput
1.12 ± 13% -100.0% 0.00 ± -1% perf-profile.mmput.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 5.69 ± 70% perf-profile.multi_cpu_stop.cpu_stopper_thread.smpboot_thread_fn.kthread.ret_from_fork
0.00 ± -1% +Inf% 47.42 ± 16% perf-profile.native_queued_spin_lock_slowpath._raw_spin_lock.dcache_readdir.iterate_dir.sys_getdents
3.94 ± 20% -100.0% 0.00 ± -1% perf-profile.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.dcache_dir_lseek
6.91 ± 20% -100.0% 0.00 ± -1% perf-profile.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.iterate_dir
3.21 ± 18% -100.0% 0.00 ± -1% perf-profile.page_fault
1.20 ± 4% -100.0% 0.00 ± -1% perf-profile.path_openat.do_filp_open.do_sys_open.sys_creat.entry_SYSCALL_64_fastpath
1.18 ± 11% -100.0% 0.00 ± -1% perf-profile.reschedule_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
1.51 ± 9% -100.0% 0.00 ± -1% perf-profile.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
4.60 ± 7% -96.1% 0.18 ±141% perf-profile.return_from_SYSCALL_64
1.68 ± 13% -100.0% 0.00 ± -1% perf-profile.return_from_SYSCALL_64.execve
1.14 ± 10% -100.0% 0.00 ± -1% perf-profile.run_rebalance_domains.__do_softirq.irq_exit.scheduler_ipi.smp_reschedule_interrupt
9.82 ± 3% -100.0% 0.00 ± -1% perf-profile.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.dcache_dir_lseek.sys_lseek
20.75 ± 2% -100.0% 0.00 ± -1% perf-profile.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.iterate_dir.sys_getdents
5.79 ± 8% -100.0% 0.00 ± -1% perf-profile.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.dcache_dir_lseek
13.64 ± 13% -100.0% 0.00 ± -1% perf-profile.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.iterate_dir
1.18 ± 11% -100.0% 0.00 ± -1% perf-profile.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt.cpuidle_enter.call_cpuidle
1.17 ± 12% -100.0% 0.00 ± -1% perf-profile.search_binary_handler.do_execveat_common.isra.38.sys_execve.do_syscall_64.return_from_SYSCALL_64
1.18 ± 11% -100.0% 0.00 ± -1% perf-profile.smp_reschedule_interrupt.reschedule_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.17 ±141% +3328.0% 5.71 ± 70% perf-profile.smpboot_thread_fn.kthread.ret_from_fork
1.51 ± 9% -100.0% 0.00 ± -1% perf-profile.start_kernel.x86_64_start_reservations.x86_64_start_kernel
24.70 ± 1% -88.6% 2.81 ± 4% perf-profile.start_secondary
1.43 ± 29% -100.0% 0.00 ± -1% perf-profile.sys_brk.entry_SYSCALL_64_fastpath
1.60 ± 14% -100.0% 0.00 ± -1% perf-profile.sys_clone.do_syscall_64.return_from_SYSCALL_64
1.35 ± 4% -100.0% 0.00 ± -1% perf-profile.sys_creat.entry_SYSCALL_64_fastpath
2.88 ± 4% -100.0% 0.00 ± -1% perf-profile.sys_execve.do_syscall_64.return_from_SYSCALL_64
1.68 ± 13% -100.0% 0.00 ± -1% perf-profile.sys_execve.do_syscall_64.return_from_SYSCALL_64.execve
1.50 ± 14% -100.0% 0.00 ± -1% perf-profile.sys_exit_group.entry_SYSCALL_64_fastpath
9.92 ± 3% -100.0% 0.00 ± -1% perf-profile.sys_lseek.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 40.41 ± 9% perf-profile.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 40.39 ± 9% perf-profile.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.83 ± 28% -100.0% 0.00 ± -1% perf-profile.unmap_region.do_munmap.sys_brk.entry_SYSCALL_64_fastpath
1.51 ± 9% -100.0% 0.00 ± -1% perf-profile.x86_64_start_kernel
1.51 ± 9% -100.0% 0.00 ± -1% perf-profile.x86_64_start_reservations.x86_64_start_kernel
829.37 ± 93% +4.5e+05% 3712300 ± 15% sched_debug.cfs_rq:/.MIN_vruntime.avg
119429 ± 93% +19512.3% 23422909 ± 9% sched_debug.cfs_rq:/.MIN_vruntime.max
9917 ± 93% +71917.5% 7142609 ± 8% sched_debug.cfs_rq:/.MIN_vruntime.stddev
3.21 ± 4% +6575.4% 214.12 ± 40% sched_debug.cfs_rq:/.load.avg
109.00 ± 15% +7665.8% 8464 ±122% sched_debug.cfs_rq:/.load.max
16.09 ± 11% +5426.3% 889.45 ± 93% sched_debug.cfs_rq:/.load.stddev
15.28 ± 40% +1359.4% 223.01 ± 38% sched_debug.cfs_rq:/.load_avg.avg
219.94 ± 11% +3713.8% 8388 ±124% sched_debug.cfs_rq:/.load_avg.max
39.68 ± 28% +2129.6% 884.70 ± 93% sched_debug.cfs_rq:/.load_avg.stddev
829.37 ± 93% +4.5e+05% 3712300 ± 15% sched_debug.cfs_rq:/.max_vruntime.avg
119429 ± 93% +19512.3% 23422909 ± 9% sched_debug.cfs_rq:/.max_vruntime.max
9917 ± 93% +71917.5% 7142609 ± 8% sched_debug.cfs_rq:/.max_vruntime.stddev
814966 ± 17% +2450.9% 20789003 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
1390384 ± 22% +1767.6% 25967271 ± 1% sched_debug.cfs_rq:/.min_vruntime.max
347827 ± 5% +4440.8% 15793966 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
388051 ± 31% +499.6% 2326901 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.06 ± 38% +936.7% 0.65 ± 15% sched_debug.cfs_rq:/.nr_running.avg
1.00 ± 0% +100.0% 2.00 ± 10% sched_debug.cfs_rq:/.nr_running.max
0.21 ± 4% +174.3% 0.58 ± 9% sched_debug.cfs_rq:/.nr_running.stddev
1.56 ± 9% +12856.6% 202.00 ± 42% sched_debug.cfs_rq:/.runnable_load_avg.avg
66.06 ± 16% +12219.4% 8137 ±125% sched_debug.cfs_rq:/.runnable_load_avg.max
8.02 ± 11% +10565.5% 855.17 ± 95% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-505397 ±-39% +598.7% -3531421 ± -4% sched_debug.cfs_rq:/.spread0.avg
70987 ± 50% +2221.6% 1648031 ± 24% sched_debug.cfs_rq:/.spread0.max
-973130 ±-32% +776.3% -8527672 ± -5% sched_debug.cfs_rq:/.spread0.min
388593 ± 31% +499.0% 2327750 ± 6% sched_debug.cfs_rq:/.spread0.stddev
82.02 ± 47% +514.1% 503.68 ± 13% sched_debug.cfs_rq:/.util_avg.avg
825.72 ± 2% +23.0% 1015 ± 0% sched_debug.cfs_rq:/.util_avg.max
148.13 ± 6% +157.7% 381.70 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
1000000 ± 0% +42.9% 1428578 ± 2% sched_debug.cpu.avg_idle.max
45692 ± 5% +171.0% 123825 ± 29% sched_debug.cpu.avg_idle.min
200593 ± 1% +29.5% 259832 ± 1% sched_debug.cpu.clock.avg
200617 ± 1% +29.5% 259876 ± 1% sched_debug.cpu.clock.max
200562 ± 1% +29.5% 259755 ± 1% sched_debug.cpu.clock.min
14.70 ± 14% +120.1% 32.35 ± 37% sched_debug.cpu.clock.stddev
200593 ± 1% +29.5% 259832 ± 1% sched_debug.cpu.clock_task.avg
200617 ± 1% +29.5% 259876 ± 1% sched_debug.cpu.clock_task.max
200562 ± 1% +29.5% 259755 ± 1% sched_debug.cpu.clock_task.min
14.70 ± 14% +120.1% 32.35 ± 37% sched_debug.cpu.clock_task.stddev
4.43 ± 4% +4542.0% 205.65 ± 41% sched_debug.cpu.cpu_load[0].avg
232.83 ± 3% +3447.2% 8259 ±123% sched_debug.cpu.cpu_load[0].max
27.55 ± 0% +3070.6% 873.34 ± 93% sched_debug.cpu.cpu_load[0].stddev
3.30 ± 10% +6122.5% 205.26 ± 41% sched_debug.cpu.cpu_load[1].avg
174.61 ± 16% +4605.4% 8216 ±124% sched_debug.cpu.cpu_load[1].max
19.61 ± 12% +4317.9% 866.29 ± 94% sched_debug.cpu.cpu_load[1].stddev
2.61 ± 15% +7760.5% 204.97 ± 41% sched_debug.cpu.cpu_load[2].avg
132.22 ± 22% +6105.3% 8204 ±124% sched_debug.cpu.cpu_load[2].max
14.56 ± 17% +5826.3% 863.13 ± 94% sched_debug.cpu.cpu_load[2].stddev
2.20 ± 16% +9215.8% 204.90 ± 41% sched_debug.cpu.cpu_load[3].avg
99.11 ± 21% +8168.8% 8195 ±124% sched_debug.cpu.cpu_load[3].max
11.30 ± 17% +7522.6% 861.37 ± 94% sched_debug.cpu.cpu_load[3].stddev
2.05 ± 16% +9928.6% 205.14 ± 41% sched_debug.cpu.cpu_load[4].avg
77.39 ± 15% +10472.4% 8181 ±124% sched_debug.cpu.cpu_load[4].max
9.48 ± 14% +8964.4% 859.30 ± 95% sched_debug.cpu.cpu_load[4].stddev
4243 ± 62% +725.9% 35049 ± 11% sched_debug.cpu.curr->pid.avg
13719 ± 17% +114.2% 29382 ± 10% sched_debug.cpu.curr->pid.stddev
3.31 ± 6% +6370.5% 214.11 ± 40% sched_debug.cpu.load.avg
106.78 ± 13% +7827.4% 8464 ±122% sched_debug.cpu.load.max
16.22 ± 11% +5384.4% 889.45 ± 93% sched_debug.cpu.load.stddev
500000 ± 0% +53.0% 764948 ± 1% sched_debug.cpu.max_idle_balance_cost.max
0.00 ± 0% +Inf% 36894 ± 17% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 8% +339.0% 0.00 ± 37% sched_debug.cpu.next_balance.stddev
60875 ± 2% +180.1% 170518 ± 0% sched_debug.cpu.nr_load_updates.avg
78714 ± 5% +145.4% 193190 ± 0% sched_debug.cpu.nr_load_updates.max
44765 ± 3% +231.7% 148501 ± 0% sched_debug.cpu.nr_load_updates.min
10794 ± 7% +15.2% 12434 ± 2% sched_debug.cpu.nr_load_updates.stddev
0.08 ± 62% +1154.1% 1.06 ± 18% sched_debug.cpu.nr_running.avg
1.83 ± 64% +254.5% 6.50 ± 5% sched_debug.cpu.nr_running.max
0.29 ± 42% +294.9% 1.16 ± 9% sched_debug.cpu.nr_running.stddev
62835 ± 12% -44.3% 35017 ± 4% sched_debug.cpu.nr_switches.avg
127178 ± 11% -39.9% 76437 ± 8% sched_debug.cpu.nr_switches.max
22006 ± 13% -24.2% 16683 ± 12% sched_debug.cpu.nr_switches.min
33982 ± 12% -63.7% 12339 ± 4% sched_debug.cpu.nr_switches.stddev
10.66 ± 10% +12.4% 11.98 ± 4% sched_debug.cpu.nr_uninterruptible.avg
399.50 ± 52% +237.2% 1346 ± 1% sched_debug.cpu.nr_uninterruptible.max
-386.89 ±-56% +266.6% -1418 ±-14% sched_debug.cpu.nr_uninterruptible.min
270.10 ± 64% +153.9% 685.84 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
200566 ± 1% +29.5% 259792 ± 1% sched_debug.cpu_clk
198070 ± 1% +30.0% 257505 ± 1% sched_debug.ktime
0.00 ± 27% +2092.7% 0.02 ± 26% sched_debug.rt_rq:/.rt_time.avg
0.02 ± 19% +3170.6% 0.65 ± 15% sched_debug.rt_rq:/.rt_time.max
0.00 ± 18% +3180.2% 0.09 ± 17% sched_debug.rt_rq:/.rt_time.stddev
200566 ± 1% +29.5% 259792 ± 1% sched_debug.sched_clk

Best Regards,
Huang, Ying