Message-ID: <1436878948.1264.6.camel@intel.com>
Date: Tue, 14 Jul 2015 21:02:28 +0800
From: Huang Ying <ying.huang@...el.com>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc: LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [lkp] [sched/tip] 30874f21ed9: No primary result change, +195.6% fileio.time.minor_page_faults
FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/core
commit 30874f21ed9bf392ca910600df8bb1dbfd9beabb ("sched/tip:Prefer numa hotness over cache hotness")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/period/nr_threads/disk/fs/size/filenum/rwmode/iomode:
bay/fileio/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/600s/100%/1HDD/btrfs/64G/1024f/seqwr/sync
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
3746 ± 2% +3.5% 3876 ± 0% fileio.time.maximum_resident_set_size
666.50 ± 0% +195.6% 1970 ± 2% fileio.time.minor_page_faults
666.50 ± 0% +195.6% 1970 ± 2% time.minor_page_faults
124968 ±100% +279.4% 474125 ±100% latency_stats.avg.btrfs_async_run_delayed_refs.[btrfs].__btrfs_end_transaction.[btrfs].btrfs_end_transaction.[btrfs].btrfs_cont_expand.[btrfs].btrfs_file_write_iter.[btrfs].__vfs_write.vfs_write.SyS_pwrite64.entry_SYSCALL_64_fastpath
16379 ± 5% +53.2% 25088 ± 18% latency_stats.sum.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
576.00 ± 10% +20.7% 695.25 ± 11% slabinfo.btrfs_delayed_data_ref.active_objs
576.00 ± 10% +20.7% 695.25 ± 11% slabinfo.btrfs_delayed_data_ref.num_objs
29.75 ± 20% +135.3% 70.00 ± 36% sched_debug.cfs_rq[2]:/.runnable_load_avg
119.50 ± 65% +186.4% 342.25 ± 32% sched_debug.cfs_rq[2]:/.utilization_load_avg
4219 ± 7% +33.4% 5629 ± 26% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
91.00 ± 6% +34.3% 122.25 ± 26% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
39.50 ± 61% +91.8% 75.75 ± 18% sched_debug.cpu#1.cpu_load[0]
27.00 ± 52% +108.3% 56.25 ± 22% sched_debug.cpu#1.cpu_load[1]
19.00 ± 34% +117.1% 41.25 ± 24% sched_debug.cpu#1.cpu_load[2]
14.00 ± 22% +116.1% 30.25 ± 27% sched_debug.cpu#1.cpu_load[3]
12.00 ± 26% +104.2% 24.50 ± 27% sched_debug.cpu#1.cpu_load[4]
24.25 ± 39% +166.0% 64.50 ± 28% sched_debug.cpu#2.cpu_load[0]
21.25 ± 34% +132.9% 49.50 ± 33% sched_debug.cpu#2.cpu_load[1]
18.75 ± 34% +118.7% 41.00 ± 47% sched_debug.cpu#2.cpu_load[2]
16.75 ± 39% +119.4% 36.75 ± 62% sched_debug.cpu#2.cpu_load[3]
984.25 ± 44% +98.8% 1956 ± 26% sched_debug.cpu#2.curr->pid
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
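For context on the headline metric: fileio.time.minor_page_faults is the minor-fault count from the benchmark process's resource accounting. A minimal sketch of reading the same counter for the current process (plain getrusage, nothing lkp-specific; the 16 MB figure is just an illustrative allocation size):

```python
import resource

def minor_page_faults():
    # ru_minflt: page faults serviced without disk I/O, e.g. the first
    # touch of freshly allocated anonymous memory
    return resource.getrusage(resource.RUSAGE_SELF).ru_minflt

before = minor_page_faults()
buf = bytearray(16 * 1024 * 1024)  # 16 MB, faulted in page by page on first touch
after = minor_page_faults()
print(after - before)
```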
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/period/nr_threads/disk/fs/size/filenum/rwmode/iomode:
bay/fileio/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/600s/100%/1HDD/f2fs/64G/1024f/seqwr/sync
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
668.00 ± 0% +185.1% 1904 ± 3% fileio.time.minor_page_faults
3504570 ± 0% -1.0% 3471202 ± 0% fileio.time.voluntary_context_switches
77542 ± 18% -46.6% 41420 ± 34% proc-vmstat.pgscan_direct_dma32
668.00 ± 0% +185.1% 1904 ± 3% time.minor_page_faults
26397 ± 0% -8.7% 24113 ± 13% vmstat.system.cs
0.00 ± -1% +Inf% 1432132 ±166% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2499827 ±169% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1.28e+08 ± 0% +7.8% 1.38e+08 ± 3% latency_stats.sum.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write.vfs_write.SyS_pwrite64.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 5611872 ±171% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
8.50 ±105% +523.5% 53.00 ± 50% sched_debug.cfs_rq[0]:/.runnable_load_avg
4840 ± 24% +47.1% 7118 ± 20% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
105.00 ± 24% +47.4% 154.75 ± 20% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
4176 ± 9% +53.7% 6417 ± 20% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
90.25 ± 9% +54.3% 139.25 ± 20% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
17.75 ± 30% +260.6% 64.00 ± 65% sched_debug.cpu#0.cpu_load[0]
1213614 ± 94% -59.3% 494542 ± 5% sched_debug.cpu#0.ttwu_count
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/period/nr_threads/disk/fs/size/filenum/rwmode/iomode:
bay/fileio/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/600s/100%/1HDD/xfs/64G/1024f/seqwr/sync
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
665.75 ± 0% +193.9% 1956 ± 2% fileio.time.minor_page_faults
1661 ± 4% +10.8% 1841 ± 4% proc-vmstat.pgalloc_dma
665.75 ± 0% +193.9% 1956 ± 2% time.minor_page_faults
24.00 ± 5% +24.0% 29.75 ± 6% sched_debug.cfs_rq[1]:/.nr_spread_over
25.00 ± 4% -21.0% 19.75 ± 19% sched_debug.cfs_rq[3]:/.nr_spread_over
1784 ± 37% -49.3% 904.00 ± 28% sched_debug.cpu#1.curr->pid
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/btrfs/8K/400M/fsyncBeforeClose/16d/256fpd
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
8052 ± 0% +54.0% 12401 ± 5% fsmark.time.minor_page_faults
73486184 ± 4% +7.9% 79292570 ± 4% cpuidle.C3-NHM.time
8052 ± 0% +54.0% 12401 ± 5% time.minor_page_faults
298.75 ± 18% +26.6% 378.25 ± 10% slabinfo.ext4_free_data.active_objs
298.75 ± 18% +26.6% 378.25 ± 10% slabinfo.ext4_free_data.num_objs
27801 ± 21% +35.0% 37530 ± 10% sched_debug.cfs_rq[0]:/.tg_load_avg
10055 ± 16% +34.7% 13540 ± 12% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
27802 ± 21% +34.5% 37391 ± 9% sched_debug.cfs_rq[1]:/.tg_load_avg
27671 ± 21% +34.4% 37194 ± 10% sched_debug.cfs_rq[2]:/.tg_load_avg
27672 ± 21% +29.6% 35868 ± 15% sched_debug.cfs_rq[3]:/.tg_load_avg
27622 ± 21% +29.9% 35868 ± 15% sched_debug.cfs_rq[4]:/.tg_load_avg
27635 ± 21% +29.8% 35868 ± 15% sched_debug.cfs_rq[5]:/.tg_load_avg
27632 ± 21% +29.8% 35868 ± 15% sched_debug.cfs_rq[6]:/.tg_load_avg
27521 ± 21% +30.3% 35864 ± 15% sched_debug.cfs_rq[7]:/.tg_load_avg
5438 ± 31% +12265.8% 672512 ±104% sched_debug.cpu#1.ttwu_local
12.25 ± 63% +167.3% 32.75 ± 19% sched_debug.cpu#4.cpu_load[1]
65.50 ± 21% -67.2% 21.50 ± 93% sched_debug.cpu#6.cpu_load[2]
46.50 ± 15% -62.4% 17.50 ± 75% sched_debug.cpu#6.cpu_load[3]
33.25 ± 16% -45.1% 18.25 ± 54% sched_debug.cpu#6.cpu_load[4]
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/btrfs/9B/400M/fsyncBeforeClose/16d/256fpd
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
10640 ± 0% +87.8% 19978 ± 2% fsmark.time.minor_page_faults
17490 ± 6% -75.5% 4277 ± 8% latency_stats.sum.wait_woken.inotify_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
10640 ± 0% +87.8% 19978 ± 2% time.minor_page_faults
1533083 ± 72% -73.1% 413085 ± 1% sched_debug.cpu#1.nr_switches
1533200 ± 72% -73.0% 413220 ± 1% sched_debug.cpu#1.sched_count
650722 ± 80% -81.4% 120841 ± 3% sched_debug.cpu#1.sched_goidle
289935 ± 60% +82.6% 529534 ± 21% sched_debug.cpu#5.avg_idle
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/mode/ipc:
mtp-ivy1/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/50%/threads/socket
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
663.53 ± 0% -1.1% 656.02 ± 0% hackbench.time.elapsed_time
663.53 ± 0% -1.1% 656.02 ± 0% hackbench.time.elapsed_time.max
2.236e+08 ± 0% -2.4% 2.182e+08 ± 1% hackbench.time.involuntary_context_switches
5959 ± 0% +516.4% 36734 ± 4% hackbench.time.minor_page_faults
4904 ± 0% -1.1% 4850 ± 0% hackbench.time.system_time
4.841e+08 ± 0% -1.9% 4.749e+08 ± 1% hackbench.time.voluntary_context_switches
670026 ± 31% +92.5% 1289662 ± 18% cpuidle.C3-IVB.time
5959 ± 0% +516.4% 36734 ± 4% time.minor_page_faults
0.05 ± 43% +115.8% 0.10 ± 21% turbostat.Pkg%pc3
0.45 ± 9% +163.3% 1.19 ± 2% perf-profile.cpu-cycles.__kernel_text_address.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
15.81 ± 1% -22.1% 12.31 ± 0% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
1.69 ± 3% +13.9% 1.93 ± 3% perf-profile.cpu-cycles.__module_text_address.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace
1.32 ± 1% -13.4% 1.15 ± 8% perf-profile.cpu-cycles.__switch_to
6.81 ± 2% -59.7% 2.74 ± 1% perf-profile.cpu-cycles.is_ftrace_trampoline.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
1.78 ± 2% +73.0% 3.07 ± 1% perf-profile.cpu-cycles.is_module_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
1.04 ± 6% +13.2% 1.18 ± 6% perf-profile.cpu-cycles.kfree.skb_free_head.skb_release_data.skb_release_all.consume_skb
1.41 ± 2% -10.1% 1.27 ± 2% perf-profile.cpu-cycles.pick_next_task_fair.__schedule.schedule.schedule_timeout.unix_stream_recvmsg
1.29 ± 2% +15.1% 1.49 ± 3% perf-profile.cpu-cycles.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.94 ± 3% +22.4% 1.15 ± 5% perf-profile.cpu-cycles.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
1.05 ± 1% +18.1% 1.24 ± 6% perf-profile.cpu-cycles.security_file_permission.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.07 ± 7% +13.3% 1.21 ± 4% perf-profile.cpu-cycles.skb_free_head.skb_release_data.skb_release_all.consume_skb.unix_stream_recvmsg
1.27 ± 4% +14.6% 1.46 ± 2% perf-profile.cpu-cycles.skb_release_data.skb_release_all.consume_skb.unix_stream_recvmsg.sock_recvmsg
138.75 ± 10% -21.1% 109.50 ± 7% sched_debug.cfs_rq[0]:/.load
132.00 ± 7% -15.0% 112.25 ± 2% sched_debug.cfs_rq[0]:/.runnable_load_avg
746.75 ± 5% +22.1% 912.00 ± 17% sched_debug.cfs_rq[1]:/.utilization_load_avg
311731 ± 0% -9.3% 282874 ± 4% sched_debug.cfs_rq[2]:/.exec_clock
7.50 ± 29% +63.3% 12.25 ± 20% sched_debug.cfs_rq[2]:/.nr_spread_over
2722211 ± 1% -7.3% 2524484 ± 4% sched_debug.cfs_rq[3]:/.min_vruntime
16.75 ± 14% -40.3% 10.00 ± 39% sched_debug.cfs_rq[4]:/.nr_spread_over
2746419 ± 0% -7.2% 2550006 ± 4% sched_debug.cfs_rq[5]:/.min_vruntime
1014 ± 8% -16.3% 849.00 ± 16% sched_debug.cfs_rq[5]:/.utilization_load_avg
312033 ± 0% -7.3% 289288 ± 3% sched_debug.cfs_rq[6]:/.exec_clock
268.00 ± 49% +153.7% 680.00 ± 41% sched_debug.cfs_rq[7]:/.blocked_load_avg
2744206 ± 0% -7.5% 2537905 ± 4% sched_debug.cfs_rq[7]:/.min_vruntime
397.50 ± 32% +100.6% 797.25 ± 35% sched_debug.cfs_rq[7]:/.tg_load_contrib
136.25 ± 10% -21.5% 107.00 ± 11% sched_debug.cpu#0.load
9222208 ± 1% -9.1% 8382999 ± 2% sched_debug.cpu#0.ttwu_local
30430538 ± 2% -7.1% 28271798 ± 3% sched_debug.cpu#1.ttwu_count
35870 ± 19% +353.0% 162504 ± 61% sched_debug.cpu#2.sched_goidle
30831893 ± 3% -9.9% 27785159 ± 4% sched_debug.cpu#3.ttwu_count
10432629 ± 7% -18.8% 8470000 ± 8% sched_debug.cpu#3.ttwu_local
6.25 ± 77% -116.0% -1.00 ±-538% sched_debug.cpu#4.nr_uninterruptible
30690275 ± 0% -8.6% 28038013 ± 3% sched_debug.cpu#4.ttwu_count
30663532 ± 2% -9.5% 27765762 ± 4% sched_debug.cpu#5.ttwu_count
9498405 ± 2% -10.7% 8483506 ± 4% sched_debug.cpu#5.ttwu_local
44973638 ± 2% -10.3% 40347779 ± 6% sched_debug.cpu#6.nr_switches
44973914 ± 2% -10.3% 40348157 ± 6% sched_debug.cpu#6.sched_count
314540 ± 0% -7.2% 291752 ± 4% sched_debug.cpu#7.nr_load_updates
45078605 ± 1% -11.7% 39798883 ± 3% sched_debug.cpu#7.nr_switches
45079120 ± 1% -11.7% 39799188 ± 3% sched_debug.cpu#7.sched_count
9584278 ± 5% -11.5% 8482116 ± 5% sched_debug.cpu#7.ttwu_local
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/blocksize:
xps2/pigz/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/100%/128K
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
12488 ± 1% +647.0% 93290 ± 4% pigz.time.minor_page_faults
7154 ± 24% +147.1% 17676 ± 45% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
337520 ± 0% +23.8% 417772 ± 1% proc-vmstat.pgfault
12488 ± 1% +647.0% 93290 ± 4% time.minor_page_faults
1.20 ± 7% +23.7% 1.49 ± 12% perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.pipe_write.__vfs_write.vfs_write
1.38 ± 1% +26.9% 1.76 ± 15% perf-profile.cpu-cycles.alloc_pages_current.pipe_write.__vfs_write.vfs_write.sys_write
1.38 ± 26% +46.8% 2.03 ± 12% perf-profile.cpu-cycles.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
2.07 ± 3% -29.3% 1.47 ± 16% perf-profile.cpu-cycles.tick_do_update_jiffies64.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.76 ± 7% -37.8% 1.10 ± 26% perf-profile.cpu-cycles.update_wall_time.tick_do_update_jiffies64.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues
13.50 ± 6% -46.3% 7.25 ± 15% sched_debug.cfs_rq[0]:/.nr_spread_over
4.00 ± 17% +68.8% 6.75 ± 28% sched_debug.cfs_rq[6]:/.nr_spread_over
120.00 ± 7% -13.5% 103.75 ± 9% sched_debug.cpu#1.cpu_load[4]
5.25 ± 34% -128.6% -1.50 ±-218% sched_debug.cpu#1.nr_uninterruptible
970.50 ± 15% +67.7% 1627 ± 23% sched_debug.cpu#4.sched_goidle
188.00 ± 42% -50.8% 92.50 ± 6% sched_debug.cpu#5.load
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/blocksize:
xps2/pigz/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/100%/512K
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
3752893 ± 0% +4.2% 3908831 ± 3% pigz.time.involuntary_context_switches
28250 ± 0% +605.5% 199301 ± 2% pigz.time.minor_page_faults
6677 ± 66% +437.6% 35901 ± 52% latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
353061 ± 0% +48.0% 522561 ± 0% proc-vmstat.pgfault
3118 ± 7% +12.8% 3516 ± 3% slabinfo.kmalloc-192.active_objs
28250 ± 0% +605.5% 199301 ± 2% time.minor_page_faults
154.19 ± 2% +9.6% 168.94 ± 4% uptime.idle
0.34 ± 42% +104.4% 0.70 ± 27% perf-profile.cpu-cycles.__update_cpu_load.update_cpu_load_active.scheduler_tick.update_process_times.tick_sched_handle
1.71 ± 9% +20.1% 2.06 ± 6% perf-profile.cpu-cycles.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.59 ± 23% +73.1% 1.03 ± 12% perf-profile.cpu-cycles.sched_clock_cpu.update_rq_clock.scheduler_tick.update_process_times.tick_sched_handle
10.03 ± 4% +11.6% 11.19 ± 3% perf-profile.cpu-cycles.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
0.66 ± 23% +51.5% 1.00 ± 29% perf-profile.cpu-cycles.update_cpu_load_active.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
8.50 ± 30% -38.2% 5.25 ± 20% sched_debug.cfs_rq[1]:/.nr_spread_over
1029 ± 8% -11.4% 911.75 ± 10% sched_debug.cfs_rq[3]:/.utilization_load_avg
88.00 ± 0% +10.8% 97.50 ± 8% sched_debug.cfs_rq[5]:/.load
88.50 ± 1% +13.8% 100.75 ± 10% sched_debug.cfs_rq[5]:/.runnable_load_avg
-2.50 ±-34% -130.0% 0.75 ±110% sched_debug.cpu#0.nr_uninterruptible
127.25 ± 5% -15.1% 108.00 ± 6% sched_debug.cpu#1.cpu_load[3]
127.25 ± 3% -16.7% 106.00 ± 4% sched_debug.cpu#1.cpu_load[4]
1369 ± 40% +51.6% 2076 ± 6% sched_debug.cpu#1.curr->pid
51700 ± 17% -30.3% 36053 ± 30% sched_debug.cpu#4.ttwu_count
458569 ± 22% +54.8% 709974 ± 13% sched_debug.cpu#5.avg_idle
88.50 ± 1% +13.8% 100.75 ± 10% sched_debug.cpu#5.cpu_load[0]
88.00 ± 0% +10.8% 97.50 ± 8% sched_debug.cpu#5.load
9.00 ± 33% -75.0% 2.25 ± 79% sched_debug.cpu#5.nr_uninterruptible
-11.50 ±-81% -119.6% 2.25 ±155% sched_debug.cpu#6.nr_uninterruptible
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
wsm/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/context_switch1
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
11171 ± 0% +14.1% 12745 ± 0% will-it-scale.time.minor_page_faults
1134537 ±141% +263.3% 4121317 ±117% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
11171 ± 0% +14.1% 12745 ± 0% time.minor_page_faults
0.81 ± 5% +94.4% 1.58 ± 3% perf-profile.cpu-cycles.__kernel_text_address.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
11.19 ± 0% -19.9% 8.96 ± 1% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
1.28 ± 2% -9.2% 1.17 ± 1% perf-profile.cpu-cycles.__module_address.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace
1.68 ± 14% +20.6% 2.03 ± 7% perf-profile.cpu-cycles.__remove_hrtimer.hrtimer_try_to_cancel.hrtimer_cancel.tick_nohz_restart.tick_nohz_idle_exit
6.59 ± 1% +39.6% 9.20 ± 0% perf-profile.cpu-cycles.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
1.55 ± 7% -55.6% 0.69 ± 7% perf-profile.cpu-cycles._raw_spin_unlock_irqrestore.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
4.09 ± 1% +72.9% 7.07 ± 1% perf-profile.cpu-cycles.deactivate_task.__schedule.schedule.pipe_wait.pipe_read
2.60 ± 2% +116.8% 5.63 ± 0% perf-profile.cpu-cycles.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task.__schedule
4.02 ± 1% +74.0% 6.99 ± 0% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__schedule.schedule.pipe_wait
3.44 ± 1% +88.2% 6.47 ± 0% perf-profile.cpu-cycles.dequeue_task_fair.dequeue_task.deactivate_task.__schedule.schedule
3.56 ± 10% +14.6% 4.08 ± 5% perf-profile.cpu-cycles.hrtimer_try_to_cancel.hrtimer_cancel.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
4.30 ± 0% -56.0% 1.89 ± 4% perf-profile.cpu-cycles.is_ftrace_trampoline.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
0.92 ± 3% +17.7% 1.08 ± 3% perf-profile.cpu-cycles.is_ftrace_trampoline.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
1.38 ± 2% +10.5% 1.52 ± 2% perf-profile.cpu-cycles.is_module_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
8.57 ± 0% +27.0% 10.89 ± 0% perf-profile.cpu-cycles.pipe_wait.pipe_read.__vfs_read.vfs_read.sys_read
1.07 ± 2% +31.5% 1.41 ± 2% perf-profile.cpu-cycles.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
7.04 ± 1% +36.3% 9.59 ± 0% perf-profile.cpu-cycles.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
0.90 ± 2% +36.6% 1.22 ± 3% perf-profile.cpu-cycles.security_file_permission.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
2.27 ± 3% -18.0% 1.86 ± 3% perf-profile.cpu-cycles.select_task_rq_fair.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
0.75 ± 1% +45.0% 1.09 ± 4% perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write.sys_write
1.36 ± 6% +29.5% 1.76 ± 11% perf-profile.cpu-cycles.tick_program_event.__remove_hrtimer.hrtimer_start_range_ns.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter
0.87 ± 2% +30.0% 1.13 ± 2% perf-profile.cpu-cycles.update_cfs_shares.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task
516.25 ±136% +301.7% 2073 ± 61% sched_debug.cfs_rq[0]:/.blocked_load_avg
14524 ± 11% +18.9% 17267 ± 10% sched_debug.cfs_rq[0]:/.tg_load_avg
596.25 ±116% +270.8% 2211 ± 60% sched_debug.cfs_rq[0]:/.tg_load_contrib
14343 ± 11% +19.5% 17147 ± 10% sched_debug.cfs_rq[10]:/.tg_load_avg
14335 ± 11% +19.4% 17110 ± 10% sched_debug.cfs_rq[11]:/.tg_load_avg
1.75 ± 47% +171.4% 4.75 ± 27% sched_debug.cfs_rq[1]:/.nr_spread_over
14489 ± 11% +19.0% 17248 ± 10% sched_debug.cfs_rq[1]:/.tg_load_avg
14483 ± 11% +19.1% 17253 ± 10% sched_debug.cfs_rq[2]:/.tg_load_avg
455.25 ± 27% +28.8% 586.50 ± 10% sched_debug.cfs_rq[2]:/.utilization_load_avg
14430 ± 10% +19.4% 17232 ± 10% sched_debug.cfs_rq[3]:/.tg_load_avg
14424 ± 10% +19.3% 17213 ± 10% sched_debug.cfs_rq[4]:/.tg_load_avg
14420 ± 10% +19.3% 17199 ± 10% sched_debug.cfs_rq[5]:/.tg_load_avg
14408 ± 10% +19.3% 17192 ± 10% sched_debug.cfs_rq[6]:/.tg_load_avg
14382 ± 10% +19.4% 17179 ± 10% sched_debug.cfs_rq[7]:/.tg_load_avg
1865 ± 18% -65.5% 642.75 ± 56% sched_debug.cfs_rq[8]:/.blocked_load_avg
14377 ± 10% +19.4% 17163 ± 10% sched_debug.cfs_rq[8]:/.tg_load_avg
1914 ± 18% -62.4% 719.00 ± 53% sched_debug.cfs_rq[8]:/.tg_load_contrib
14368 ± 10% +19.4% 17154 ± 10% sched_debug.cfs_rq[9]:/.tg_load_avg
72.25 ± 9% +34.6% 97.25 ± 9% sched_debug.cpu#0.cpu_load[0]
290172 ± 7% -16.0% 243745 ± 17% sched_debug.cpu#1.avg_idle
54.25 ± 3% +12.9% 61.25 ± 3% sched_debug.cpu#1.cpu_load[2]
54.00 ± 3% +13.9% 61.50 ± 3% sched_debug.cpu#1.cpu_load[3]
54.00 ± 3% +10.2% 59.50 ± 4% sched_debug.cpu#1.cpu_load[4]
19375134 ± 14% -12.2% 17015561 ± 4% sched_debug.cpu#2.nr_switches
19375189 ± 14% -12.2% 17015610 ± 4% sched_debug.cpu#2.sched_count
5658065 ± 24% -20.9% 4473053 ± 9% sched_debug.cpu#2.sched_goidle
1600 ± 21% +28.5% 2056 ± 9% sched_debug.cpu#3.curr->pid
42.75 ± 6% +18.1% 50.50 ± 8% sched_debug.cpu#5.cpu_load[2]
88.50 ± 19% -32.8% 59.50 ± 28% sched_debug.cpu#6.cpu_load[0]
0.30 ± 93% -78.8% 0.06 ±102% sched_debug.rt_rq[1]:/.rt_time
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
lkp-a03/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/lock1
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
7478 ± 0% +1.2% 7569 ± 0% will-it-scale.time.maximum_resident_set_size
8398 ± 0% +6.2% 8920 ± 0% will-it-scale.time.minor_page_faults
51892 ± 17% +27.6% 66228 ± 12% meminfo.DirectMap4k
29992 ± 8% +12.8% 33842 ± 4% softirqs.RCU
29.63 ± 4% +10.0% 32.61 ± 3% time.user_time
17682 ± 7% +5.3% 18625 ± 7% vmstat.system.cs
247.25 ± 2% -15.5% 209.00 ± 15% sched_debug.cfs_rq[0]:/.load
853.50 ± 1% -11.9% 751.75 ± 7% sched_debug.cfs_rq[0]:/.utilization_load_avg
35.25 ± 4% -18.4% 28.75 ± 5% sched_debug.cfs_rq[3]:/.nr_spread_over
2348 ± 1% -8.6% 2145 ± 5% sched_debug.cpu#0.curr->pid
247.00 ± 2% -15.4% 209.00 ± 15% sched_debug.cpu#0.load
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
wsm/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/malloc2
commit:
6a1debd14048f8067c4d0ed8b41149a3ef178cbf
30874f21ed9bf392ca910600df8bb1dbfd9beabb
6a1debd14048f806 30874f21ed9bf392ca910600df
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
11144 ± 0% +12.5% 12538 ± 0% will-it-scale.time.minor_page_faults
202651 ± 3% -14.4% 173458 ± 2% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
11144 ± 0% +12.5% 12538 ± 0% time.minor_page_faults
16.86 ± 31% -73.8% 4.41 ±100% perf-profile.cpu-cycles.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
57.17 ± 10% +36.3% 77.94 ± 18% perf-profile.cpu-cycles.cpu_startup_entry.start_secondary
16.80 ± 31% -73.9% 4.39 ±100% perf-profile.cpu-cycles.cpuidle_enter.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
16.11 ± 31% -73.5% 4.27 ±100% perf-profile.cpu-cycles.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.rest_init.start_kernel
56.70 ± 10% +36.7% 77.50 ± 18% perf-profile.cpu-cycles.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
0.97 ±100% -93.8% 0.06 ±-1666% perf-profile.cpu-cycles.do_set_pte.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault
1.96 ± 78% -77.8% 0.43 ±135% perf-profile.cpu-cycles.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
3.18 ± 34% -75.9% 0.77 ± 85% perf-profile.cpu-cycles.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
15.91 ± 33% -74.9% 4.00 ±101% perf-profile.cpu-cycles.poll_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.rest_init
56.07 ± 11% +37.8% 77.28 ± 18% perf-profile.cpu-cycles.poll_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
17.10 ± 30% -66.8% 5.68 ± 59% perf-profile.cpu-cycles.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
17.10 ± 30% -66.8% 5.68 ± 59% perf-profile.cpu-cycles.start_kernel.x86_64_start_reservations.x86_64_start_kernel
57.18 ± 10% +36.3% 77.94 ± 18% perf-profile.cpu-cycles.start_secondary
17.10 ± 30% -66.8% 5.68 ± 59% perf-profile.cpu-cycles.x86_64_start_kernel
17.10 ± 30% -66.8% 5.68 ± 59% perf-profile.cpu-cycles.x86_64_start_reservations.x86_64_start_kernel
6.50 ± 38% -61.5% 2.50 ± 72% sched_debug.cfs_rq[1]:/.nr_spread_over
-197566 ±-35% +57.5% -311256 ±-11% sched_debug.cfs_rq[4]:/.spread0
851471 ± 8% -10.5% 762192 ± 2% sched_debug.cfs_rq[6]:/.min_vruntime
735.50 ± 44% +98.3% 1458 ± 47% sched_debug.cfs_rq[8]:/.blocked_load_avg
796.75 ± 38% +91.2% 1523 ± 46% sched_debug.cfs_rq[8]:/.tg_load_contrib
52.75 ± 25% +73.9% 91.75 ± 29% sched_debug.cfs_rq[9]:/.load
610312 ± 9% -12.3% 535184 ± 1% sched_debug.cfs_rq[9]:/.min_vruntime
52.75 ± 25% +65.4% 87.25 ± 29% sched_debug.cfs_rq[9]:/.runnable_load_avg
1373175 ±101% -99.2% 10693 ± 42% sched_debug.cpu#1.ttwu_count
58262 ± 0% +11.2% 64777 ± 6% sched_debug.cpu#10.nr_load_updates
39758 ± 47% -60.2% 15813 ± 55% sched_debug.cpu#11.nr_switches
39788 ± 47% -60.2% 15839 ± 55% sched_debug.cpu#11.sched_count
18721 ± 51% -68.6% 5869 ± 55% sched_debug.cpu#11.sched_goidle
73.50 ± 10% -20.1% 58.75 ± 7% sched_debug.cpu#4.cpu_load[2]
68.50 ± 5% -16.8% 57.00 ± 7% sched_debug.cpu#4.cpu_load[3]
64.50 ± 6% -13.6% 55.75 ± 7% sched_debug.cpu#4.cpu_load[4]
965204 ±119% -99.0% 10127 ± 80% sched_debug.cpu#7.nr_switches
965234 ±119% -98.9% 10157 ± 79% sched_debug.cpu#7.sched_count
492436 ±119% -98.7% 6337 ± 96% sched_debug.cpu#7.ttwu_count
478601 ±120% -99.8% 733.50 ± 51% sched_debug.cpu#7.ttwu_local
52.75 ± 25% +65.4% 87.25 ± 29% sched_debug.cpu#9.cpu_load[0]
53.25 ± 24% +30.0% 69.25 ± 17% sched_debug.cpu#9.cpu_load[1]
1920 ± 8% +21.6% 2334 ± 4% sched_debug.cpu#9.curr->pid
52.75 ± 25% +73.9% 91.75 ± 29% sched_debug.cpu#9.load
0.69 ± 67% +358.0% 3.14 ±114% sched_debug.rt_rq[0]:/.rt_time
155259 ± 0% -42.2% 89723 ± 0% sched_debug.sysctl_sched.sysctl_sched_features
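As a sanity check on how the tables above read: the %change column is the relative delta of the second commit's mean over the first commit's mean. Recomputing one row from the first (btrfs) comparison:

```python
def pct_change(base, head):
    # %change column: relative delta of the head commit vs. the base commit
    return (head - base) / base * 100.0

# fileio.time.minor_page_faults, btrfs run: 666.50 -> 1970
print(round(pct_change(666.50, 1970), 1))  # matches the +195.6% in the table
```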
bay: Pentium D
Memory: 2G
nhm4: Nehalem
Memory: 4G
mtp-ivy1: Ivy Bridge
Memory: 32G
xps2: Nehalem
Memory: 4G
wsm: Westmere
Memory: 6G
lkp-a03: Atom
Memory: 8G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
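The comparison rows above follow a fixed layout (base mean ± stddev%, %change, head mean ± stddev%, metric name). For pulling numbers out of a report like this one, a throwaway parser sketch — the regex is inferred from this report's formatting, not taken from lkp's own tooling:

```python
import re

ROW = re.compile(
    r"\s*(?P<base>[-\d.e+]+)\s*±\s*(?P<base_sd>-?\d+)%"      # base mean ± stddev%
    r"\s*(?P<change>\+Inf|[+-][\d.]+(?:e[+-]?\d+)?)%"        # %change column
    r"\s*(?P<head>[-\d.e+]+)\s*±\s*(?P<head_sd>-?\d+)%"      # head mean ± stddev%
    r"\s+(?P<metric>\S+)"                                    # metric name
)

line = "666.50 ± 0% +195.6% 1970 ± 2% fileio.time.minor_page_faults"
m = ROW.match(line)
print(m.group("metric"), m.group("change"))
```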
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
View attachment "job.yaml" of type "text/plain" (3516 bytes)
View attachment "reproduce" of type "text/plain" (527 bytes)