Message-ID: <20160328062256.GC3882@yexl-desktop>
Date: Mon, 28 Mar 2016 14:22:56 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Rik van Riel <riel@...hat.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [cpuidle] e132b9b3bc: No primary change, turbostat.%Busy
-65.1% change
FYI, we noticed a -65.1% change in turbostat.%Busy with your commit.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit e132b9b3bc7f19e9b158e42b323881d5dee5ecf3 ("cpuidle: menu: use high confidence factors only when considering polling")
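
For context, the commit is about which of the menu governor's sleep-length
estimates may select busy polling: only the high-confidence inputs (time to the
next timer, recently observed typical interval) rather than the load-corrected
estimate. The standalone sketch below only models that idea; it is not the
kernel diff, and the names, threshold and helpers in it are made up for
illustration.

    /*
     * Illustrative model only -- NOT the kernel's menu-governor code.
     * It contrasts an "old" policy, where the low-confidence, load-corrected
     * estimate alone could trigger busy polling, with a "new" policy that lets
     * only the high-confidence inputs (next timer, typical observed interval)
     * do so.  All names and the threshold below are made up for this example.
     */
    #include <stdio.h>

    #define POLL_THRESHOLD_US 10    /* hypothetical "poll only for very short sleeps" cutoff */

    struct estimates {
            unsigned int next_timer_us;       /* time to next timer: high confidence */
            unsigned int typical_interval_us; /* recent repeating wakeup pattern: high confidence */
            unsigned int corrected_us;        /* next_timer_us scaled by observed correction
                                                 factor and load: low confidence */
    };

    static unsigned int min_u(unsigned int a, unsigned int b)
    {
            return a < b ? a : b;
    }

    /* Old policy: the low-confidence corrected estimate alone could pick polling. */
    static int should_poll_old(const struct estimates *e)
    {
            return e->corrected_us < POLL_THRESHOLD_US;
    }

    /* New policy: only the high-confidence inputs may pick polling. */
    static int should_poll_new(const struct estimates *e)
    {
            return min_u(e->next_timer_us, e->typical_interval_us) < POLL_THRESHOLD_US;
    }

    int main(void)
    {
            /* A timer 500us away and no short wakeup pattern, but a heavily
             * scaled-down estimate of 5us: the case where only the old policy polls. */
            struct estimates e = { 500, 400, 5 };

            printf("old policy polls: %d, new policy polls: %d\n",
                   should_poll_old(&e), should_poll_new(&e));
            return 0;
    }

Under the new policy such a CPU enters C1 instead of polling, which lines up
with the cpuidle.POLL.time -84% / C1-NHM.time +118% shift and the lower
%Busy/Avg_MHz in the comparison below.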
=========================================================================================
compiler/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/1HDD/5K/btrfs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-ws02/400M/fsmark
commit:
3b99669b75db04e411bb298591224a9e8e4f57fb
e132b9b3bc7f19e9b158e42b323881d5dee5ecf3
       3b99669b75db04e4            e132b9b3bc7f19e9b158e42b32
       ----------------            --------------------------
            %stddev       %change             %stddev
505.00 ± 7% +83.6% 927.00 ± 4% vmstat.memory.buff
6392 ± 35% +62.4% 10382 ± 0% numa-meminfo.node0.Mapped
2646 ±130% +226.2% 8631 ± 0% numa-meminfo.node0.Shmem
9065 ± 25% -44.8% 5008 ± 1% numa-meminfo.node1.Mapped
26.78 ± 1% -65.1% 9.34 ± 0% turbostat.%Busy
709.50 ± 1% -65.2% 246.75 ± 0% turbostat.Avg_MHz
40.46 ± 1% +39.7% 56.54 ± 1% turbostat.CPU%c1
1597 ± 35% +62.4% 2594 ± 0% numa-vmstat.node0.nr_mapped
661.25 ±130% +226.3% 2157 ± 0% numa-vmstat.node0.nr_shmem
106.00 ± 39% +220.0% 339.25 ± 61% numa-vmstat.node0.numa_other
2266 ± 25% -44.8% 1251 ± 1% numa-vmstat.node1.nr_mapped
4.795e+08 ± 4% +117.9% 1.045e+09 ± 2% cpuidle.C1-NHM.time
463937 ± 2% +73.2% 803714 ± 2% cpuidle.C1-NHM.usage
1.699e+08 ± 3% -8.6% 1.553e+08 ± 1% cpuidle.C1E-NHM.time
7.062e+08 ± 0% -84.0% 1.131e+08 ± 5% cpuidle.POLL.time
440162 ± 1% -79.7% 89501 ± 6% cpuidle.POLL.usage
0.00 ± -1% +Inf% 8824 ± 70% latency_stats.avg.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall.do_init_module
0.00 ± -1% +Inf% 12106 ± 71% latency_stats.avg.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013
0.00 ± -1% +Inf% 15174 ± 70% latency_stats.max.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall.do_init_module
0.00 ± -1% +Inf% 108570 ± 80% latency_stats.max.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013
0.00 ± -1% +Inf% 92833 ± 71% latency_stats.sum.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall.do_init_module
0.00 ± -1% +Inf% 811913 ± 70% latency_stats.sum.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013
-9221 ±-11% -19.9% -7385 ±-11% sched_debug.cfs_rq:/.spread0.avg
591.90 ± 62% +112.9% 1260 ± 30% sched_debug.cfs_rq:/.spread0.max
-14064 ± -6% -11.1% -12500 ± -6% sched_debug.cfs_rq:/.spread0.min
306.42 ± 40% -41.9% 178.00 ± 8% sched_debug.cpu.load.max
75.40 ± 31% -33.6% 50.09 ± 13% sched_debug.cpu.load.stddev
714.67 ± 1% -9.9% 644.00 ± 5% sched_debug.cpu.nr_uninterruptible.max
1149 ± 9% -15.9% 967.25 ± 3% slabinfo.avc_xperms_node.active_objs
1149 ± 9% -15.9% 967.25 ± 3% slabinfo.avc_xperms_node.num_objs
1020 ± 8% +28.2% 1308 ± 3% slabinfo.btrfs_trans_handle.active_objs
1020 ± 8% +28.2% 1308 ± 3% slabinfo.btrfs_trans_handle.num_objs
351.75 ± 11% +39.2% 489.50 ± 8% slabinfo.btrfs_transaction.active_objs
351.75 ± 11% +39.2% 489.50 ± 8% slabinfo.btrfs_transaction.num_objs
544.00 ± 10% +20.6% 656.00 ± 12% slabinfo.kmem_cache_node.active_objs
544.00 ± 10% +20.6% 656.00 ± 12% slabinfo.kmem_cache_node.num_objs
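
For reference, the %change column is the relative delta between the mean values
of the two kernels, first column (3b99669b) vs second (e132b9b3); e.g. for
turbostat.Avg_MHz: (246.75 - 709.50) / 709.50 = -65.2%.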
lkp-ws02: Westmere-EP
Memory: 16G
turbostat.Avg_MHz

    [ASCII trend plot: bisect-good (*) samples run at roughly 700 MHz,
     bisect-bad (O) samples at roughly 150-300 MHz, matching the
     709.50 -> 246.75 Avg_MHz drop in the table above]

turbostat.%Busy

    [ASCII trend plot: bisect-good (*) samples around 26% busy,
     bisect-bad (O) samples around 5-10%, matching the
     26.78 -> 9.34 %Busy drop in the table above]

turbostat.CPU%c1

    [ASCII trend plot: bisect-good (*) samples around 40% C1 residency,
     bisect-bad (O) samples around 55-62%, matching the
     40.46 -> 56.54 CPU%c1 increase in the table above]
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run job.yaml
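
As a quick sanity check outside the lkp harness (hypothetical, not part of the
attached job.yaml; needs root), running "turbostat sleep 60" while the fsmark
workload is active samples the same %Busy / Avg_MHz / CPU%c1 counters over one
minute and should show the same shift on the two kernels.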
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong Ye
Attachments: job.yaml (text/plain, 3844 bytes), reproduce (text/plain, 639 bytes)