Message-ID: <20160308070358.GC10475@yexl-desktop>
Date:	Tue, 8 Mar 2016 15:03:58 +0800
From:	kernel test robot <xiaolong.ye@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...nel.org>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Jiri Olsa <jolsa@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [perf] bd2afa49d1: -20.7% will-it-scale.per_thread_ops

FYI, we noticed a -20.7% regression of will-it-scale.per_thread_ops on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit bd2afa49d194c6412c333e9fdd48bc5d06bb465d ("perf: Fix scaling vs. perf_event_enable()")
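
For context, will-it-scale's lseek1 test case is essentially a tight
per-thread lseek() loop whose iteration count is reported as
will-it-scale.per_thread_ops. Below is a minimal, hypothetical sketch of
such a loop (not the actual test source, which ships with the
will-it-scale suite; the file path, thread count, and iteration count
are illustrative). A tmpfs-backed file would be consistent with the
shmem_file_llseek frames in the profiles below:

        /*
         * Hypothetical sketch of a per-thread lseek loop in the spirit
         * of will-it-scale's lseek1 test; not the actual benchmark
         * source.  Build with: gcc -O2 -pthread lseek1_sketch.c
         */
        #include <fcntl.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        #define NTHREADS 4            /* illustrative thread count */
        #define ITERS    10000000UL   /* illustrative iteration count */

        static void *worker(void *arg)
        {
                char path[64];
                unsigned long i;
                int fd;

                /* each thread seeks on its own tmpfs-backed file */
                snprintf(path, sizeof(path), "/dev/shm/lseek1-%lu",
                         (unsigned long)pthread_self());
                fd = open(path, O_CREAT | O_RDWR, 0600);
                if (fd < 0) {
                        perror("open");
                        return NULL;
                }

                for (i = 0; i < ITERS; i++)
                        lseek(fd, 0, SEEK_SET);  /* one sys_lseek per "op" */

                close(fd);
                unlink(path);
                return NULL;
        }

        int main(void)
        {
                pthread_t tid[NTHREADS];
                int i;

                for (i = 0; i < NTHREADS; i++)
                        pthread_create(&tid[i], NULL, worker, NULL);
                for (i = 0; i < NTHREADS; i++)
                        pthread_join(tid[i], NULL);
                return 0;
        }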


=========================================================================================
compiler/kconfig/rootfs/tbox_group/test/testcase:
  gcc-4.9/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-sb03/lseek1/will-it-scale

commit: 
  7fce250915efca0f8f51dddee3ae89bf30d86ca5
  bd2afa49d194c6412c333e9fdd48bc5d06bb465d

7fce250915efca0f bd2afa49d194c6412c333e9fdd 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   7989217 ±  0%     -20.7%    6333122 ±  0%  will-it-scale.per_thread_ops
      0.61 ±  0%      -1.3%       0.60 ±  0%  will-it-scale.scalability
      1762 ±  0%      -4.7%       1680 ±  0%  will-it-scale.time.system_time
    748.10 ±  0%     +11.0%     830.15 ±  0%  will-it-scale.time.user_time
    748.10 ±  0%     +11.0%     830.15 ±  0%  time.user_time
      0.03 ± 40%     -66.7%       0.01 ±-10000%  turbostat.Pkg%pc6
      1.54 ±  1%     -16.4%       1.28 ±  3%  perf-profile.cycles.___might_sleep.__might_sleep.mutex_lock.__fdget_pos.sys_lseek
     33.95 ±  0%     -15.3%      28.77 ±  0%  perf-profile.cycles.__fdget_pos.sys_lseek.entry_SYSCALL_64_fastpath
     14.29 ±  2%     -18.4%      11.67 ±  1%  perf-profile.cycles.__fget.__fget_light.__fdget_pos.sys_lseek.entry_SYSCALL_64_fastpath
     14.66 ±  2%     -18.1%      12.02 ±  0%  perf-profile.cycles.__fget_light.__fdget_pos.sys_lseek.entry_SYSCALL_64_fastpath
      3.38 ±  1%     -14.1%       2.90 ±  3%  perf-profile.cycles.__might_sleep.mutex_lock.__fdget_pos.sys_lseek.entry_SYSCALL_64_fastpath
     12.52 ±  1%     -10.9%      11.16 ±  0%  perf-profile.cycles.entry_SYSCALL_64
     12.72 ±  1%     -17.8%      10.45 ±  1%  perf-profile.cycles.entry_SYSCALL_64_after_swapgs
     10.29 ±  2%     -17.7%       8.47 ±  1%  perf-profile.cycles.fput.entry_SYSCALL_64_fastpath
      1.94 ±  5%    +152.2%       4.89 ±  2%  perf-profile.cycles.generic_file_llseek_size.shmem_file_llseek.sys_lseek.entry_SYSCALL_64_fastpath
      0.75 ±  5%    +517.3%       4.65 ±  3%  perf-profile.cycles.generic_file_llseek_size.sys_lseek.entry_SYSCALL_64_fastpath
     11.77 ±  2%     -13.5%      10.18 ±  1%  perf-profile.cycles.mutex_lock.__fdget_pos.sys_lseek.entry_SYSCALL_64_fastpath
      4.59 ±  3%    +180.3%      12.87 ±  4%  perf-profile.cycles.shmem_file_llseek.sys_lseek.entry_SYSCALL_64_fastpath
     47.83 ±  1%     +12.6%      53.87 ±  0%  perf-profile.cycles.sys_lseek.entry_SYSCALL_64_fastpath
     35661 ±  4%     +26.3%      45032 ± 18%  sched_debug.cfs_rq:/.exec_clock.24
     34816 ±  0%     +14.1%      39725 ± 14%  sched_debug.cfs_rq:/.exec_clock.25
     33735 ±  4%     -31.2%      23222 ± 37%  sched_debug.cfs_rq:/.exec_clock.8
    804888 ±  3%     +25.5%    1010094 ± 15%  sched_debug.cfs_rq:/.min_vruntime.24
    709991 ±  3%     -30.1%     496149 ± 31%  sched_debug.cfs_rq:/.min_vruntime.min
      4.25 ± 13%     -24.5%       3.21 ±  2%  sched_debug.cfs_rq:/.nr_spread_over.max
      1.17 ±  8%     -19.6%       0.94 ±  5%  sched_debug.cfs_rq:/.nr_spread_over.stddev
   -585558 ± -4%     +28.1%    -750273 ±-13%  sched_debug.cfs_rq:/.spread0.8
   -595168 ± -3%     +29.4%    -770251 ±-14%  sched_debug.cfs_rq:/.spread0.min
    561.50 ±  4%      -6.6%     524.25 ±  4%  sched_debug.cfs_rq:/.util_avg.12
    411.50 ± 14%     -13.9%     354.50 ±  1%  sched_debug.cfs_rq:/.util_avg.23
    173.25 ±  1%     +12.8%     195.50 ± 10%  sched_debug.cfs_rq:/.util_avg.26
     53711 ±  4%     +16.0%      62301 ±  9%  sched_debug.cpu.nr_load_updates.24
     10450 ± 12%     -22.0%       8153 ± 12%  sched_debug.cpu.nr_switches.0
     14829 ± 16%     +32.3%      19618 ± 18%  sched_debug.cpu.nr_switches.11
      7613 ± 30%     -59.2%       3109 ± 11%  sched_debug.cpu.nr_switches.29
     33333 ±  5%     +47.8%      49256 ± 22%  sched_debug.cpu.nr_switches.max
      8197 ±  3%     +32.3%      10843 ± 12%  sched_debug.cpu.nr_switches.stddev
      1.00 ±187%    -600.0%      -5.00 ±-54%  sched_debug.cpu.nr_uninterruptible.11
     16912 ± 19%    +203.2%      51277 ± 64%  sched_debug.cpu.sched_count.10
      7914 ± 31%     -60.3%       3144 ± 11%  sched_debug.cpu.sched_count.29
      3798 ± 39%     +57.9%       5998 ± 30%  sched_debug.cpu.sched_count.3
      4658 ±  9%     +90.7%       8886 ± 67%  sched_debug.cpu.sched_goidle.14
      3051 ±  2%     +11.2%       3394 ±  4%  sched_debug.cpu.sched_goidle.avg
     13064 ±  9%     +62.4%      21213 ± 29%  sched_debug.cpu.sched_goidle.max
      3266 ±  4%     +41.9%       4634 ± 15%  sched_debug.cpu.sched_goidle.stddev
      6316 ± 13%     -41.4%       3703 ±  7%  sched_debug.cpu.ttwu_count.14
     17502 ± 10%     +82.7%      31976 ± 15%  sched_debug.cpu.ttwu_count.max
      4063 ±  5%     +53.3%       6230 ± 11%  sched_debug.cpu.ttwu_count.stddev
      1733 ± 31%     +49.4%       2591 ± 13%  sched_debug.cpu.ttwu_local.11
    899.00 ± 29%     +76.7%       1588 ± 22%  sched_debug.cpu.ttwu_local.27

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-sbx04/lseek1/will-it-scale

commit: 
  7fce250915efca0f8f51dddee3ae89bf30d86ca5
  bd2afa49d194c6412c333e9fdd48bc5d06bb465d

7fce250915efca0f bd2afa49d194c6412c333e9fdd 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   7619716 ±  0%     -21.2%    6008133 ±  0%  will-it-scale.per_thread_ops
      0.60 ±  0%      -2.6%       0.58 ±  0%  will-it-scale.scalability
      2982 ±  0%      -4.8%       2838 ±  0%  will-it-scale.time.system_time
      1262 ±  0%     +11.4%       1406 ±  0%  will-it-scale.time.user_time
      1262 ±  0%     +11.4%       1406 ±  0%  time.user_time
     21294 ±  1%      +9.4%      23298 ±  2%  slabinfo.anon_vma_chain.active_objs
     21294 ±  1%      +9.4%      23298 ±  2%  slabinfo.anon_vma_chain.num_objs
     13408 ±  1%     -14.4%      11482 ±  3%  slabinfo.kmalloc-512.active_objs
     13547 ±  1%     -12.9%      11804 ±  3%  slabinfo.kmalloc-512.num_objs
      2.00 ±  0%    +212.5%       6.25 ± 34%  sched_debug.cfs_rq:/.load.27
      2.50 ± 20%   +1650.0%      43.75 ± 84%  sched_debug.cfs_rq:/.load_avg.18
     11.50 ±  4%     -10.9%      10.25 ±  4%  sched_debug.cfs_rq:/.load_avg.46
   1133547 ± 15%     -14.8%     965376 ±  0%  sched_debug.cfs_rq:/.min_vruntime.57
      2.00 ±-50%     +12.5%       2.25 ± 57%  sched_debug.cfs_rq:/.nr_spread_over.9
      5.54 ± 29%     -33.1%       3.71 ± 30%  sched_debug.cfs_rq:/.nr_spread_over.max
      0.98 ±  8%     -14.8%       0.83 ±  7%  sched_debug.cfs_rq:/.nr_spread_over.stddev
      3.00 ±  0%     -33.3%       2.00 ±  0%  sched_debug.cfs_rq:/.runnable_load_avg.31
    172.50 ±  1%     +15.2%     198.75 ± 14%  sched_debug.cfs_rq:/.util_avg.21
    176.00 ±  0%     +44.5%     254.25 ± 28%  sched_debug.cfs_rq:/.util_avg.26
     17.60 ±  5%      -8.5%      16.11 ±  6%  sched_debug.cpu.cpu_load[3].stddev
      1792 ±  0%     +27.7%       2289 ± 31%  sched_debug.cpu.curr->pid.16
      1793 ±  0%     +25.4%       2248 ± 28%  sched_debug.cpu.curr->pid.21
     48688 ±  3%      +6.1%      51640 ±  3%  sched_debug.cpu.nr_load_updates.63
     14390 ± 25%     -70.2%       4282 ± 52%  sched_debug.cpu.nr_switches.23
      1180 ± 10%     +19.8%       1414 ± 15%  sched_debug.cpu.nr_switches.47
      3051 ± 55%     -57.1%       1309 ±  6%  sched_debug.cpu.nr_switches.50
      1604 ± 46%    +111.0%       3385 ± 60%  sched_debug.cpu.nr_switches.52
      2296 ± 37%    +103.1%       4664 ± 32%  sched_debug.cpu.nr_switches.59
      2199 ± 20%     +95.1%       4292 ± 39%  sched_debug.cpu.nr_switches.60
      1960 ± 17%     +49.9%       2939 ± 14%  sched_debug.cpu.nr_switches.63
    -18.75 ±-19%     +62.7%     -30.50 ±-21%  sched_debug.cpu.nr_uninterruptible.0
     -2.00 ±-86%    -162.5%       1.25 ±103%  sched_debug.cpu.nr_uninterruptible.19
     -1.75 ±-123%    -171.4%       1.25 ±103%  sched_debug.cpu.nr_uninterruptible.27
      1.25 ±153%    +360.0%       5.75 ± 35%  sched_debug.cpu.nr_uninterruptible.32
      3.50 ± 84%     -71.4%       1.00 ±254%  sched_debug.cpu.nr_uninterruptible.46
    -18.50 ±-18%     +62.6%     -30.08 ±-21%  sched_debug.cpu.nr_uninterruptible.min
   4732118 ±  0%     +27.7%    6043638 ±  0%  sched_debug.cpu.sched_count.0
     16580 ±100%     -73.9%       4329 ± 47%  sched_debug.cpu.sched_count.1
     14410 ± 24%     -70.2%       4297 ± 52%  sched_debug.cpu.sched_count.23
     11351 ± 43%    +159.5%      29460 ± 46%  sched_debug.cpu.sched_count.27
      5346 ± 55%     -50.1%       2669 ± 35%  sched_debug.cpu.sched_count.4
      3061 ± 55%     -56.9%       1320 ±  5%  sched_debug.cpu.sched_count.50
      1614 ± 46%    +110.6%       3399 ± 60%  sched_debug.cpu.sched_count.52
      2035 ±  8%     +59.1%       3238 ± 11%  sched_debug.cpu.sched_count.57
      2310 ± 37%    +105.8%       4754 ± 33%  sched_debug.cpu.sched_count.59
      2218 ± 20%    +122.3%       4931 ± 36%  sched_debug.cpu.sched_count.60
      1973 ± 17%     +91.8%       3785 ± 37%  sched_debug.cpu.sched_count.63
     80242 ±  0%     +26.0%     101102 ±  0%  sched_debug.cpu.sched_count.avg
   4732118 ±  0%     +27.7%    6043638 ±  0%  sched_debug.cpu.sched_count.max
    586184 ±  0%     +27.7%     748781 ±  0%  sched_debug.cpu.sched_count.stddev
      4320 ± 46%     -64.8%       1520 ±  9%  sched_debug.cpu.sched_goidle.20
      2161 ± 73%     -53.1%       1012 ± 22%  sched_debug.cpu.sched_goidle.4
    622.00 ± 25%     -19.3%     502.25 ± 20%  sched_debug.cpu.sched_goidle.44
      1063 ± 38%     -53.4%     496.00 ±  7%  sched_debug.cpu.sched_goidle.50
    480.50 ± 10%     +39.0%     667.75 ± 33%  sched_debug.cpu.sched_goidle.53
    925.50 ± 43%    +115.9%       1998 ± 39%  sched_debug.cpu.sched_goidle.59
    754.75 ±  6%     +59.7%       1205 ± 14%  sched_debug.cpu.sched_goidle.63
      8103 ±  6%     -11.0%       7214 ±  6%  sched_debug.cpu.ttwu_count.0
      1831 ± 23%     +69.1%       3095 ± 10%  sched_debug.cpu.ttwu_count.15
      2932 ± 23%     -47.1%       1550 ± 22%  sched_debug.cpu.ttwu_count.18
      4208 ± 27%     -59.6%       1702 ± 20%  sched_debug.cpu.ttwu_count.19
      4212 ± 39%     -69.9%       1268 ± 24%  sched_debug.cpu.ttwu_count.23
      5265 ± 45%     -44.9%       2901 ± 16%  sched_debug.cpu.ttwu_count.27
      3700 ± 24%     +71.1%       6331 ± 48%  sched_debug.cpu.ttwu_count.28
      2916 ± 90%     -65.1%       1017 ± 14%  sched_debug.cpu.ttwu_count.4
      2347 ± 50%     -62.8%     874.25 ± 77%  sched_debug.cpu.ttwu_count.44
      5181 ± 49%     -78.6%       1107 ± 66%  sched_debug.cpu.ttwu_count.46
      1745 ± 28%    +174.4%       4789 ± 27%  sched_debug.cpu.ttwu_count.9
      1182 ± 79%     -63.6%     430.00 ± 13%  sched_debug.cpu.ttwu_local.18
      2470 ± 56%     -78.6%     529.25 ± 30%  sched_debug.cpu.ttwu_local.19
      3327 ± 46%     -86.4%     452.25 ± 33%  sched_debug.cpu.ttwu_local.23
      1156 ± 40%     +66.5%       1925 ± 33%  sched_debug.cpu.ttwu_local.24
      1589 ±  9%     -28.6%       1134 ± 10%  sched_debug.cpu.ttwu_local.27
    349.50 ± 27%     -34.5%     229.00 ± 40%  sched_debug.cpu.ttwu_local.33
    126.50 ± 15%     +43.1%     181.00 ± 18%  sched_debug.cpu.ttwu_local.38
      1126 ± 62%     -62.2%     425.75 ± 34%  sched_debug.cpu.ttwu_local.4
    381.00 ± 21%    +145.7%     936.00 ± 35%  sched_debug.cpu.ttwu_local.59


lkp-sb03: Sandy Bridge-EP
Memory: 64G

lkp-sbx04: Sandy Bridge-EX
Memory: 64G

		    will-it-scale.per_thread_ops

 9e+06 ++------------------------------------------------------------------+
       *.  .*.*.*.                                        .*.*             |
 8e+06 ++*.       *..*.*.*.*..*.*.*.*..*.*.*..*.*.*.*..*.*   :    *.*.*..*.*
 7e+06 O+O  O O O O  O O O O  O                              :    :        |
       |                        O O O  O                      :   :        |
 6e+06 ++                                                     :   :        |
 5e+06 ++                                                     :  :         |
       |                                                      :  :         |
 4e+06 ++                                                      : :         |
 3e+06 ++                                                      : :         |
       |                                                       : :         |
 2e+06 ++                                                      : :         |
 1e+06 ++                                                       :          |
       |                                                        :          |
     0 ++-------------------------------------------------------*----------+


                          will-it-scale.time.system_time

  1800 *+*--*-*-*--*-*-*-*--*-*-*--*-*-*--*-*-*-*--*-*-*--*-*-*----*-*-*--*-*
       O O  O O O  O O O O  O O O  O O O                      :    :        |
  1600 ++                                                     :    :        |
  1400 ++                                                      :   :        |
       |                                                       :  :         |
  1200 ++                                                      :  :         |
  1000 ++                                                      :  :         |
       |                                                        : :         |
   800 ++                                                       : :         |
   600 ++                                                       : :         |
       |                                                        : :         |
   400 ++                                                       ::          |
   200 ++                                                        :          |
       |                                                         :          |
     0 ++--------------------------------------------------------*----------+

	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml  # run the benchmark described by the job file


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong.

View attachment "job.yaml" of type "text/plain" (3547 bytes)

View attachment "reproduce" of type "text/plain" (62 bytes)
