Message-ID: <874mjskyoo.fsf@yhuang-dev.intel.com>
Date:	Sat, 22 Aug 2015 07:29:43 +0800
From:	kernel test robot <ying.huang@...el.com>
To:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
Cc:	Ingo Molnar <mingo@...nel.org>
Subject: [lkp] [sched] d4573c3e1c: -5.9% unixbench.score

FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit d4573c3e1c992668f5dcd57d1c2ced56ae9650b9 ("sched: Improve load balancing in the presence of idle CPUs")
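
For reference, a minimal sketch of how to fetch and inspect the tested commit from the tree above (assuming the commands are run inside an existing Linux git clone):

        git fetch git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
        git log -1 --stat d4573c3e1c992668f5dcd57d1c2ced56ae9650b9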


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
  nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/execl

commit: 
  dfbca41f347997e57048a53755611c8e2d792924
  d4573c3e1c992668f5dcd57d1c2ced56ae9650b9

dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      4725 ±  0%      -2.7%       4599 ±  0%  unixbench.score
   2123335 ±  0%      -1.7%    2087061 ±  0%  unixbench.time.involuntary_context_switches
  99575417 ±  0%      -2.3%   97252046 ±  0%  unixbench.time.minor_page_faults
    317.00 ±  0%      -2.2%     310.00 ±  0%  unixbench.time.percent_of_cpu_this_job_got
    515.93 ±  0%      -2.3%     504.21 ±  0%  unixbench.time.system_time
    450501 ±  0%      -4.9%     428319 ±  0%  unixbench.time.voluntary_context_switches
    301368 ±  0%     -11.4%     267086 ±  0%  softirqs.SCHED
  49172197 ±  0%     -10.0%   44274425 ±  0%  cpuidle.C1E-NHM.time
    613281 ±  0%     -17.6%     505485 ±  0%  cpuidle.C1E-NHM.usage
    232695 ±  5%     -12.9%     202734 ±  1%  latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
    470921 ±  2%     -10.7%     420710 ±  1%  latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.system_call_fastpath
     42.74 ±  0%      -2.2%      41.81 ±  0%  turbostat.%Busy
      1232 ±  0%      -2.2%       1205 ±  0%  turbostat.Avg_MHz
     45.75 ± 33%    +100.0%      91.50 ± 17%  sched_debug.cfs_rq[1]:/.load
      1788 ± 22%     -64.0%     644.25 ± 61%  sched_debug.cfs_rq[3]:/.blocked_load_avg
      1950 ± 22%     -62.9%     724.00 ± 55%  sched_debug.cfs_rq[3]:/.tg_load_contrib
   -315.00 ± -5%     +13.1%    -356.25 ± -5%  sched_debug.cpu#0.nr_uninterruptible
     69.00 ±  6%     +12.3%      77.50 ±  4%  sched_debug.cpu#1.cpu_load[3]
     45.75 ± 33%    +100.0%      91.50 ± 17%  sched_debug.cpu#1.load
    449022 ±  6%     +15.4%     518171 ±  4%  sched_debug.cpu#2.avg_idle
    624.00 ± 62%    +115.9%       1347 ± 26%  sched_debug.cpu#2.curr->pid
   -403.75 ± -4%     +26.9%    -512.25 ± -9%  sched_debug.cpu#2.nr_uninterruptible
   -433.00 ± -4%     +18.0%    -511.00 ±-11%  sched_debug.cpu#3.nr_uninterruptible
    315.50 ±  6%     +31.5%     415.00 ±  9%  sched_debug.cpu#4.nr_uninterruptible
    399.00 ±  4%     +18.2%     471.75 ±  7%  sched_debug.cpu#5.nr_uninterruptible
    407.50 ±  0%     +18.6%     483.25 ±  4%  sched_debug.cpu#6.nr_uninterruptible
    402.00 ±  8%     +20.8%     485.50 ±  2%  sched_debug.cpu#7.nr_uninterruptible
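
A note on reading these tables: the left column pair is the base commit (dfbca41f347997e5) with its relative standard deviation, the right pair is the tested commit (d4573c3e1c992668f5dcd57d1c), and %change is the relative difference between the two means; for the score above, (4599 - 4725) / 4725 comes to about -2.7%.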

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/execl

commit: 
  dfbca41f347997e57048a53755611c8e2d792924
  d4573c3e1c992668f5dcd57d1c2ced56ae9650b9

dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     10886 ±  0%      -5.9%      10249 ±  0%  unixbench.score
   4700905 ±  0%      -4.9%    4468392 ±  0%  unixbench.time.involuntary_context_switches
  2.16e+08 ±  0%      -5.4%  2.044e+08 ±  0%  unixbench.time.minor_page_faults
    554.50 ±  0%      -9.9%     499.50 ±  2%  unixbench.time.percent_of_cpu_this_job_got
    902.06 ±  0%      -8.1%     828.84 ±  0%  unixbench.time.system_time
    192.66 ±  0%      -7.6%     177.94 ±  0%  unixbench.time.user_time
   2695111 ±  0%     -10.9%    2400967 ±  3%  unixbench.time.voluntary_context_switches
    525929 ±  0%     -25.9%     389861 ±  1%  softirqs.SCHED
   2695111 ±  0%     -10.9%    2400967 ±  3%  time.voluntary_context_switches
    121703 ±  0%      -6.6%     113648 ±  2%  vmstat.system.cs
     26498 ±  0%      -6.2%      24852 ±  2%  vmstat.system.in
   6429895 ±  0%      -9.8%    5800792 ±  0%  cpuidle.C1-HSW.usage
   1927178 ±  0%     -20.6%    1529231 ±  1%  cpuidle.C1E-HSW.usage
  50019600 ±  3%     +92.3%   96177564 ±  4%  cpuidle.C3-HSW.time
    721114 ±  2%     +40.6%    1013620 ±  4%  cpuidle.C3-HSW.usage
    928033 ±  2%     +13.8%    1056036 ±  2%  cpuidle.C6-HSW.usage
     36.80 ±  0%      -9.6%      33.28 ±  2%  turbostat.%Busy
      1214 ±  0%      -9.5%       1098 ±  2%  turbostat.Avg_MHz
      0.07 ± 14%     +85.7%       0.13 ± 35%  turbostat.Pkg%pc2
      3.68 ± 10%     +85.5%       6.83 ± 35%  turbostat.Pkg%pc6
     53.85 ±  0%      -4.2%      51.61 ±  2%  turbostat.PkgWatt
      0.00 ± -1%      +Inf%    4318915 ±150%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.new_sync_write.vfs_write.SyS_write.system_call_fastpath
     11715 ±101%   +3928.9%     471985 ±157%  latency_stats.avg.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
      0.00 ± -1%      +Inf%    4717643 ±134%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.new_sync_write.vfs_write.SyS_write.system_call_fastpath
     91245 ±109%   +4137.1%    3866124 ±152%  latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
      0.00 ± -1%      +Inf%    4757271 ±133%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.new_sync_write.vfs_write.SyS_write.system_call_fastpath
     19114 ±158%     -71.4%       5475 ± 67%  latency_stats.sum.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
    109615 ± 98%   +3435.2%    3875132 ±152%  latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
     22954 ±  4%     -10.8%      20484 ±  9%  sched_debug.cfs_rq[0]:/.exec_clock
    306157 ±  0%     -13.1%     266065 ±  5%  sched_debug.cfs_rq[0]:/.min_vruntime
     20641 ± 11%     -27.2%      15036 ± 20%  sched_debug.cfs_rq[10]:/.avg->runnable_avg_sum
     19987 ±  4%     -11.2%      17751 ±  6%  sched_debug.cfs_rq[10]:/.exec_clock
    300899 ±  0%     -12.5%     263419 ±  5%  sched_debug.cfs_rq[10]:/.min_vruntime
    452.50 ± 11%     -27.6%     327.75 ± 20%  sched_debug.cfs_rq[10]:/.tg_runnable_contrib
    300940 ±  0%     -11.9%     265047 ±  5%  sched_debug.cfs_rq[11]:/.min_vruntime
     -5222 ±-16%     -80.4%      -1022 ±-175%  sched_debug.cfs_rq[11]:/.spread0
     19789 ± 11%     -21.2%      15590 ± 16%  sched_debug.cfs_rq[12]:/.avg->runnable_avg_sum
     20790 ±  3%     -14.4%      17801 ±  6%  sched_debug.cfs_rq[12]:/.exec_clock
    302961 ±  0%     -13.0%     263467 ±  5%  sched_debug.cfs_rq[12]:/.min_vruntime
    432.75 ± 11%     -21.3%     340.75 ± 17%  sched_debug.cfs_rq[12]:/.tg_runnable_contrib
     20451 ±  5%     -12.8%      17830 ±  6%  sched_debug.cfs_rq[13]:/.exec_clock
    302744 ±  1%     -12.7%     264381 ±  5%  sched_debug.cfs_rq[13]:/.min_vruntime
      1.75 ± 47%    +171.4%       4.75 ± 40%  sched_debug.cfs_rq[13]:/.nr_spread_over
     19559 ±  0%     -11.0%      17407 ±  4%  sched_debug.cfs_rq[14]:/.exec_clock
    300081 ±  0%     -12.3%     263170 ±  4%  sched_debug.cfs_rq[14]:/.min_vruntime
     -6082 ±-15%     -52.3%      -2900 ±-60%  sched_debug.cfs_rq[14]:/.spread0
    300413 ±  0%     -11.7%     265326 ±  4%  sched_debug.cfs_rq[15]:/.min_vruntime
     -5751 ±-18%     -87.0%    -745.73 ±-366%  sched_debug.cfs_rq[15]:/.spread0
    303798 ±  0%     -13.1%     264112 ±  4%  sched_debug.cfs_rq[1]:/.min_vruntime
     84.00 ±100%    +229.5%     276.75 ± 63%  sched_debug.cfs_rq[1]:/.utilization_load_avg
    302656 ±  0%     -12.1%     266065 ±  5%  sched_debug.cfs_rq[2]:/.min_vruntime
     -3502 ±-17%    -100.0%      -1.05 ±-196837%  sched_debug.cfs_rq[2]:/.spread0
     19933 ±  3%     -13.5%      17238 ±  3%  sched_debug.cfs_rq[3]:/.exec_clock
    305206 ±  0%     -12.8%     265991 ±  4%  sched_debug.cfs_rq[3]:/.min_vruntime
     20973 ±  6%     -24.6%      15805 ± 18%  sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
     20068 ±  5%     -11.5%      17767 ±  5%  sched_debug.cfs_rq[4]:/.exec_clock
    305752 ±  0%     -12.9%     266399 ±  5%  sched_debug.cfs_rq[4]:/.min_vruntime
    461.50 ±  6%     -25.0%     346.25 ± 18%  sched_debug.cfs_rq[4]:/.tg_runnable_contrib
    303317 ±  0%     -12.6%     264993 ±  5%  sched_debug.cfs_rq[5]:/.min_vruntime
     -2842 ±-35%     -62.2%      -1073 ±-105%  sched_debug.cfs_rq[5]:/.spread0
     20814 ±  9%     -21.2%      16410 ± 26%  sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
     19473 ±  0%     -10.9%      17351 ±  4%  sched_debug.cfs_rq[6]:/.exec_clock
    304159 ±  0%     -12.7%     265678 ±  5%  sched_debug.cfs_rq[6]:/.min_vruntime
    455.75 ±  9%     -21.0%     360.00 ± 26%  sched_debug.cfs_rq[6]:/.tg_runnable_contrib
    304471 ±  0%     -11.9%     268359 ±  4%  sched_debug.cfs_rq[7]:/.min_vruntime
    298485 ±  0%     -13.3%     258901 ±  3%  sched_debug.cfs_rq[8]:/.min_vruntime
     18.00 ± 29%    +356.9%      82.25 ± 51%  sched_debug.cfs_rq[8]:/.runnable_load_avg
    231.00 ± 30%    +116.6%     500.25 ± 20%  sched_debug.cfs_rq[8]:/.utilization_load_avg
     19913 ±  3%     -13.2%      17285 ±  4%  sched_debug.cfs_rq[9]:/.exec_clock
     14.00 ± 35%    +250.0%      49.00 ± 39%  sched_debug.cfs_rq[9]:/.load
    300167 ±  0%     -13.1%     260832 ±  4%  sched_debug.cfs_rq[9]:/.min_vruntime
     -1710 ± -2%     -23.6%      -1306 ±-13%  sched_debug.cpu#0.nr_uninterruptible
    158575 ±  6%     -11.5%     140360 ±  1%  sched_debug.cpu#1.ttwu_count
    553112 ± 72%     -45.1%     303486 ±  5%  sched_debug.cpu#10.nr_switches
    554861 ± 72%     -45.1%     304840 ±  5%  sched_debug.cpu#10.sched_count
     27.00 ± 10%     -20.4%      21.50 ± 13%  sched_debug.cpu#12.cpu_load[4]
    186234 ± 90%     -58.3%      77602 ±  7%  sched_debug.cpu#12.ttwu_local
     20.50 ± 21%     +48.8%      30.50 ± 18%  sched_debug.cpu#14.cpu_load[1]
     21.50 ± 21%     +58.1%      34.00 ± 19%  sched_debug.cpu#14.cpu_load[2]
     22.75 ± 14%     +44.0%      32.75 ± 17%  sched_debug.cpu#14.cpu_load[3]
    109416 ±  5%     -13.7%      94393 ±  4%  sched_debug.cpu#14.sched_goidle
    130439 ±  2%     -12.4%     114236 ±  4%  sched_debug.cpu#14.ttwu_count
     30.50 ±101%    +301.6%     122.50 ± 25%  sched_debug.cpu#2.cpu_load[0]
     27.25 ± 47%    +209.2%      84.25 ± 34%  sched_debug.cpu#2.cpu_load[1]
     37.50 ± 27%     -32.7%      25.25 ± 21%  sched_debug.cpu#4.cpu_load[4]
     17.25 ± 43%    +362.3%      79.75 ± 56%  sched_debug.cpu#4.load
     57816 ± 15%     -15.3%      48950 ±  2%  sched_debug.cpu#4.nr_load_updates
   1220774 ±121%     -73.1%     328817 ±  6%  sched_debug.cpu#4.nr_switches
   1222807 ±121%     -73.0%     330486 ±  6%  sched_debug.cpu#4.sched_count
    542794 ±133%     -78.8%     115250 ±  5%  sched_debug.cpu#4.sched_goidle
    588622 ±125%     -76.2%     140136 ±  7%  sched_debug.cpu#4.ttwu_count
    522443 ±142%     -84.6%      80605 ±  6%  sched_debug.cpu#4.ttwu_local
    345322 ±  1%      -7.4%     319747 ±  4%  sched_debug.cpu#7.nr_switches
    347310 ±  1%      -7.5%     321411 ±  4%  sched_debug.cpu#7.sched_count
    120552 ±  2%      -9.4%     109240 ±  4%  sched_debug.cpu#7.sched_goidle
     14.00 ± 35%    +233.9%      46.75 ± 36%  sched_debug.cpu#9.load
    136971 ± 10%     -15.1%     116346 ±  1%  sched_debug.cpu#9.ttwu_count
      0.14 ± 57%    +411.4%       0.72 ± 73%  sched_debug.rt_rq[1]:/.rt_time

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/powersave/execl

commit: 
  dfbca41f347997e57048a53755611c8e2d792924
  d4573c3e1c992668f5dcd57d1c2ced56ae9650b9

dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     10563 ±  1%      -5.8%       9952 ±  0%  unixbench.score
   4414071 ±  1%      -5.2%    4184851 ±  0%  unixbench.time.involuntary_context_switches
 2.028e+08 ±  1%      -5.5%  1.917e+08 ±  0%  unixbench.time.minor_page_faults
    540.75 ±  1%      -7.2%     502.00 ±  0%  unixbench.time.percent_of_cpu_this_job_got
    882.57 ±  1%      -7.4%     816.95 ±  0%  unixbench.time.system_time
    188.00 ±  1%      -7.0%     174.82 ±  0%  unixbench.time.user_time
   2858074 ±  3%      -8.8%    2605521 ±  0%  unixbench.time.voluntary_context_switches
    511610 ±  1%     -25.5%     381032 ±  0%  softirqs.SCHED
   2858074 ±  3%      -8.8%    2605521 ±  0%  time.voluntary_context_switches
    118783 ±  0%      -4.6%     113276 ±  0%  vmstat.system.cs
     25883 ±  0%      -4.3%      24765 ±  0%  vmstat.system.in
    683238 ±  2%      -9.4%     619110 ±  0%  latency_stats.hits.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.system_call_fastpath
     96234 ±  5%     -11.9%      84756 ±  1%  latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
    191410 ±  5%     -13.4%     165713 ±  1%  latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
     36.16 ±  1%      -6.9%      33.68 ±  0%  turbostat.%Busy
      1148 ±  1%      -7.1%       1066 ±  0%  turbostat.Avg_MHz
     50.93 ±  0%      -1.4%      50.21 ±  0%  turbostat.PkgWatt
   1928029 ±  0%     -20.0%    1543361 ±  0%  cpuidle.C1E-HSW.usage
  55003420 ±  8%     +78.9%   98419263 ±  1%  cpuidle.C3-HSW.time
    799437 ±  7%     +33.2%    1064466 ±  1%  cpuidle.C3-HSW.usage
    873657 ±  3%     +11.9%     977668 ±  1%  cpuidle.C6-HSW.usage
     19945 ±  2%      -7.9%      18369 ±  0%  sched_debug.cfs_rq[13]:/.exec_clock
     21825 ± 10%     -18.4%      17818 ±  1%  sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
    478.50 ± 10%     -18.6%     389.50 ±  1%  sched_debug.cfs_rq[14]:/.tg_runnable_contrib
    942.75 ± 31%     -70.3%     280.00 ± 62%  sched_debug.cfs_rq[15]:/.blocked_load_avg
    958.25 ± 31%     -68.3%     303.75 ± 62%  sched_debug.cfs_rq[15]:/.tg_load_contrib
     20260 ±  7%      -9.5%      18339 ±  1%  sched_debug.cfs_rq[2]:/.exec_clock
    240.25 ± 43%    +185.7%     686.50 ± 49%  sched_debug.cfs_rq[2]:/.utilization_load_avg
     18628 ±  2%      -9.2%      16914 ±  5%  sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
    405.75 ±  1%      -8.7%     370.50 ±  5%  sched_debug.cfs_rq[4]:/.tg_runnable_contrib
     19914 ±  4%     -12.0%      17521 ±  2%  sched_debug.cfs_rq[5]:/.exec_clock
     -2928 ±-101%    -146.4%       1357 ± 75%  sched_debug.cfs_rq[6]:/.spread0
     20113 ±  2%      -8.2%      18458 ±  3%  sched_debug.cfs_rq[7]:/.exec_clock
     -2088 ±-11%     -19.0%      -1691 ± -3%  sched_debug.cpu#0.nr_uninterruptible
     14.25 ± 90%    +452.6%      78.75 ± 66%  sched_debug.cpu#1.cpu_load[0]
   1057371 ±117%     -70.1%     316240 ±  2%  sched_debug.cpu#1.nr_switches
   1059402 ±116%     -70.0%     317990 ±  2%  sched_debug.cpu#1.sched_count
    472923 ±128%     -76.4%     111412 ±  3%  sched_debug.cpu#1.sched_goidle
    507097 ±122%     -72.5%     139349 ±  3%  sched_debug.cpu#1.ttwu_count
     27.50 ± 12%     +53.6%      42.25 ± 23%  sched_debug.cpu#11.cpu_load[3]
     28.00 ± 11%     +29.5%      36.25 ± 12%  sched_debug.cpu#11.cpu_load[4]
    603068 ±  3%     -23.4%     462150 ± 20%  sched_debug.cpu#13.avg_idle
    436039 ± 23%     +29.1%     563133 ±  2%  sched_debug.cpu#14.avg_idle
    402209 ±116%     -81.9%      72660 ±  3%  sched_debug.cpu#14.ttwu_local
     -2215 ±-11%     +13.7%      -2519 ± -3%  sched_debug.cpu#2.nr_uninterruptible
     41.25 ± 26%     -35.2%      26.75 ± 10%  sched_debug.cpu#3.cpu_load[2]
     34.00 ±117%    +242.6%     116.50 ± 79%  sched_debug.cpu#4.load
     -2266 ±-13%     +16.9%      -2648 ± -2%  sched_debug.cpu#4.nr_uninterruptible
    884.50 ± 33%     +44.8%       1280 ± 14%  sched_debug.cpu#5.curr->pid
     -2219 ± -7%     +11.4%      -2472 ± -3%  sched_debug.cpu#5.nr_uninterruptible
     55415 ± 20%     -19.8%      44460 ±  0%  sched_debug.cpu#6.nr_load_updates
     -2350 ± -8%     +14.2%      -2684 ± -2%  sched_debug.cpu#6.nr_uninterruptible
    847343 ± 91%     -84.0%     135604 ±  2%  sched_debug.cpu#6.ttwu_count
    784793 ± 99%     -90.3%      75880 ±  3%  sched_debug.cpu#6.ttwu_local
     39.00 ± 14%     -30.1%      27.25 ± 23%  sched_debug.cpu#7.cpu_load[3]
     35.00 ± 11%     -22.1%      27.25 ± 12%  sched_debug.cpu#7.cpu_load[4]
    128907 ±  8%     -10.8%     114951 ±  2%  sched_debug.cpu#8.ttwu_count
    624669 ± 88%     -56.2%     273343 ±  0%  sched_debug.cpu#9.nr_switches
    626375 ± 88%     -56.1%     274746 ±  0%  sched_debug.cpu#9.sched_count
    262109 ±102%     -64.9%      92004 ±  1%  sched_debug.cpu#9.sched_goidle
      2.11 ±  4%     +18.9%       2.50 ±  5%  sched_debug.rt_rq[0]:/.rt_time

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/powersave/shell1

commit: 
  dfbca41f347997e57048a53755611c8e2d792924
  d4573c3e1c992668f5dcd57d1c2ced56ae9650b9

dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   5132876 ±  0%      +2.0%    5236374 ±  0%  unixbench.time.involuntary_context_switches
      1571 ±  0%      +1.2%       1591 ±  0%  unixbench.time.system_time
    795.19 ±  0%      +1.3%     805.33 ±  0%  unixbench.time.user_time
    932469 ±  0%     -12.7%     813874 ±  0%  softirqs.SCHED
     18660 ±  0%      +1.7%      18975 ±  0%  vmstat.system.in
  15466776 ±  1%      -5.1%   14675901 ±  0%  latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.stub_execve
   5322533 ±  1%      -8.5%    4868555 ±  1%  latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
  21589512 ±  1%      -7.4%   19993473 ±  1%  latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.system_call_fastpath
   9791019 ±  1%      -8.2%    8988996 ±  1%  latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
 4.135e+08 ±  0%      -3.0%  4.012e+08 ±  0%  latency_stats.sum.sigsuspend.SyS_rt_sigsuspend.system_call_fastpath
     24904 ±  4%     -25.2%      18617 ± 11%  sched_debug.cfs_rq[0]:/.blocked_load_avg
      8.75 ± 58%    +508.6%      53.25 ±106%  sched_debug.cfs_rq[0]:/.load
    391160 ±  1%     -11.4%     346400 ±  1%  sched_debug.cfs_rq[0]:/.tg_load_avg
     25100 ±  4%     -25.2%      18770 ± 11%  sched_debug.cfs_rq[0]:/.tg_load_contrib
    390600 ±  1%     -11.1%     347123 ±  2%  sched_debug.cfs_rq[10]:/.tg_load_avg
    391242 ±  1%     -11.7%     345453 ±  1%  sched_debug.cfs_rq[11]:/.tg_load_avg
    390931 ±  1%     -11.5%     345958 ±  1%  sched_debug.cfs_rq[12]:/.tg_load_avg
     23834 ±  5%     -17.3%      19701 ± 11%  sched_debug.cfs_rq[13]:/.blocked_load_avg
    390601 ±  1%     -11.7%     345060 ±  2%  sched_debug.cfs_rq[13]:/.tg_load_avg
     24080 ±  5%     -17.7%      19823 ± 11%  sched_debug.cfs_rq[13]:/.tg_load_contrib
    390376 ±  1%     -11.5%     345360 ±  2%  sched_debug.cfs_rq[14]:/.tg_load_avg
     23210 ±  5%     -12.3%      20347 ±  4%  sched_debug.cfs_rq[15]:/.blocked_load_avg
    390137 ±  0%     -11.4%     345480 ±  2%  sched_debug.cfs_rq[15]:/.tg_load_avg
     23345 ±  5%     -12.4%      20458 ±  4%  sched_debug.cfs_rq[15]:/.tg_load_contrib
    390654 ±  1%     -11.4%     345993 ±  2%  sched_debug.cfs_rq[1]:/.tg_load_avg
     24857 ±  5%     -16.0%      20876 ± 19%  sched_debug.cfs_rq[1]:/.tg_load_contrib
    391340 ±  1%     -11.6%     346023 ±  2%  sched_debug.cfs_rq[2]:/.tg_load_avg
     26275 ±  5%     -15.6%      22180 ± 19%  sched_debug.cfs_rq[3]:/.blocked_load_avg
    391666 ±  1%     -11.7%     345867 ±  2%  sched_debug.cfs_rq[3]:/.tg_load_avg
     26369 ±  5%     -15.5%      22271 ± 19%  sched_debug.cfs_rq[3]:/.tg_load_contrib
    391430 ±  1%     -11.5%     346427 ±  2%  sched_debug.cfs_rq[4]:/.tg_load_avg
    391321 ±  1%     -11.5%     346235 ±  2%  sched_debug.cfs_rq[5]:/.tg_load_avg
     25744 ±  3%     -17.8%      21156 ± 14%  sched_debug.cfs_rq[6]:/.blocked_load_avg
      1.00 ± 70%    +225.0%       3.25 ± 25%  sched_debug.cfs_rq[6]:/.nr_spread_over
    389932 ±  1%     -11.1%     346764 ±  2%  sched_debug.cfs_rq[6]:/.tg_load_avg
     25873 ±  3%     -17.6%      21329 ± 14%  sched_debug.cfs_rq[6]:/.tg_load_contrib
    389907 ±  1%     -11.0%     346962 ±  2%  sched_debug.cfs_rq[7]:/.tg_load_avg
     23576 ±  4%     -20.0%      18853 ± 10%  sched_debug.cfs_rq[8]:/.blocked_load_avg
    390564 ±  1%     -11.1%     347109 ±  2%  sched_debug.cfs_rq[8]:/.tg_load_avg
     23775 ±  3%     -20.3%      18937 ± 10%  sched_debug.cfs_rq[8]:/.tg_load_contrib
    391152 ±  1%     -11.2%     347502 ±  2%  sched_debug.cfs_rq[9]:/.tg_load_avg
    450623 ± 16%     +28.9%     580722 ±  4%  sched_debug.cpu#0.avg_idle
     29.75 ± 66%     -74.8%       7.50 ± 56%  sched_debug.cpu#12.load
    540.50 ±  6%     -15.5%     456.75 ±  3%  sched_debug.cpu#14.nr_uninterruptible
     88.25 ± 72%     -83.9%      14.25 ± 27%  sched_debug.cpu#2.cpu_load[0]
     61.00 ± 50%     -69.3%      18.75 ± 24%  sched_debug.cpu#2.cpu_load[1]
     46.00 ± 26%     -37.5%      28.75 ± 18%  sched_debug.cpu#3.cpu_load[2]
     30.00 ± 21%     +43.3%      43.00 ± 31%  sched_debug.cpu#6.cpu_load[2]
     21361 ± 16%     +72.5%      36844 ± 39%  sched_debug.cpu#8.curr->pid


nhm-white: Nehalem
Memory: 6G

lituya: Grantley Haswell
Memory: 16G


To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # the job file is attached to this email
        bin/lkp run     job.yaml
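
To double-check the comparison outside of the robot, one possible approach (a rough sketch, not the robot's exact build procedure; it assumes a kernel .config comparable to the x86_64-rhel kconfig above) is to build, install, and boot each of the two commits from the tables and rerun the same job on each:

        # base commit used for comparison in this report
        git checkout dfbca41f347997e57048a53755611c8e2d792924
        make -j"$(nproc)" && sudo make modules_install install
        # reboot into the newly installed kernel, then:
        bin/lkp run job.yaml

        # repeat with the commit under test
        git checkout d4573c3e1c992668f5dcd57d1c2ced56ae9650b9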


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang
