Date:	Mon, 12 Oct 2015 14:44:27 +0800
From:	kernel test robot <ying.huang@...ux.intel.com>
To:	Josef Bacik <jbacik@...com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [lkp] [tmpfs] afa2db2fb6: -14.5% aim9.creat-clo.ops_per_sec

FYI, we noticed the changes below on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit afa2db2fb6f15f860069de94a1257db57589fe95 ("tmpfs: truncate prealloc blocks past i_size")


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/testtime/test:
  lkp-wsx02/aim9/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/creat-clo
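For context, aim9's creat-clo test measures file create/close throughput (reported as ops_per_sec). A rough sketch of that kind of measurement loop, written in Python for illustration (not the actual AIM9 source; the scratch directory and duration are assumptions):

```python
# Hypothetical sketch of a create/close throughput loop like the one
# aim9's creat-clo test times: repeatedly create, close, and unlink a
# file for a fixed interval, then report operations per second.
import os
import tempfile
import time

def creat_clo_ops_per_sec(duration=0.1):
    """Return create+close operations per second over `duration` seconds."""
    ops = 0
    with tempfile.TemporaryDirectory() as scratch:  # assumed scratch dir
        path = os.path.join(scratch, "testfile")
        start = time.monotonic()
        deadline = start + duration
        while time.monotonic() < deadline:
            # creat() is open(O_CREAT | O_WRONLY | O_TRUNC) in POSIX terms
            fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
            os.close(fd)
            os.unlink(path)
            ops += 1
        elapsed = time.monotonic() - start
    return ops / elapsed

if __name__ == "__main__":
    print("%.0f ops/sec" % creat_clo_ops_per_sec())
```

On tmpfs such a loop is dominated by inode setup and teardown, which is why a change to tmpfs truncation behavior can show up in this metric.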

commit: 
  c435a390574d012f8d30074135d8fcc6f480b484
  afa2db2fb6f15f860069de94a1257db57589fe95

c435a390574d012f afa2db2fb6f15f860069de94a1 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    563108 ±  0%     -14.5%     481585 ±  6%  aim9.creat-clo.ops_per_sec
     13485 ±  9%     -17.2%      11162 ±  8%  numa-meminfo.node0.SReclaimable
      9.21 ±  4%     -11.7%       8.13 ±  1%  time.user_time
      2.04 ± 10%     -19.6%       1.64 ± 14%  turbostat.CPU%c1
  11667682 ± 96%     -96.0%     463268 ±104%  cpuidle.C1E-NHM.time
      2401 ±  3%     -38.8%       1470 ± 27%  cpuidle.C3-NHM.usage
      2.25 ± 48%    +166.7%       6.00 ± 20%  numa-numastat.node2.other_node
      4.75 ± 68%    +126.3%      10.75 ± 34%  numa-numastat.node3.other_node
      3370 ±  9%     -17.2%       2790 ±  8%  numa-vmstat.node0.nr_slab_reclaimable
     15.00 ±101%    +338.3%      65.75 ± 69%  numa-vmstat.node1.nr_dirtied
     14.33 ±108%    +357.0%      65.50 ± 68%  numa-vmstat.node1.nr_written
     43359 ±  0%     -50.6%      21399 ± 58%  numa-vmstat.node2.numa_other
   5522042 ±  0%     -11.8%    4871759 ±  5%  proc-vmstat.numa_hit
   5522030 ±  0%     -11.8%    4871736 ±  5%  proc-vmstat.numa_local
  10381338 ±  0%     -16.1%    8713670 ±  5%  proc-vmstat.pgalloc_normal
  10403821 ±  0%     -12.5%    9099427 ±  5%  proc-vmstat.pgfree
      1101 ±  5%     -15.9%     926.25 ± 12%  slabinfo.blkdev_ioc.active_objs
      1101 ±  5%     -15.9%     926.25 ± 12%  slabinfo.blkdev_ioc.num_objs
      1058 ±  3%     -12.1%     930.75 ±  8%  slabinfo.file_lock_ctx.active_objs
      1058 ±  3%     -12.1%     930.75 ±  8%  slabinfo.file_lock_ctx.num_objs
    872.38 ± 56%     -46.5%     467.10 ± 70%  sched_debug.cfs_rq[11]:/.exec_clock
    530.50 ± 22%    +225.1%       1724 ± 52%  sched_debug.cfs_rq[13]:/.avg->runnable_avg_sum
     11.00 ± 23%    +234.1%      36.75 ± 54%  sched_debug.cfs_rq[13]:/.tg_runnable_contrib
    675.18 ± 30%    +492.9%       4003 ± 61%  sched_debug.cfs_rq[1]:/.exec_clock
      3045 ± 37%    +420.7%      15858 ± 73%  sched_debug.cfs_rq[1]:/.min_vruntime
      5240 ± 48%     -56.8%       2264 ± 35%  sched_debug.cfs_rq[22]:/.min_vruntime
      5424 ± 93%     -93.7%     339.50 ± 70%  sched_debug.cfs_rq[23]:/.avg->runnable_avg_sum
    117.00 ± 94%     -94.4%       6.50 ± 79%  sched_debug.cfs_rq[23]:/.tg_runnable_contrib
    337.21 ± 15%     -40.1%     201.92 ± 41%  sched_debug.cfs_rq[24]:/.exec_clock
    199.07 ± 78%    +241.7%     680.17 ± 50%  sched_debug.cfs_rq[25]:/.exec_clock
    367.50 ± 12%     -37.2%     230.75 ± 24%  sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
      7.00 ± 17%     -39.3%       4.25 ± 30%  sched_debug.cfs_rq[27]:/.tg_runnable_contrib
    326.96 ± 15%     -42.6%     187.64 ± 47%  sched_debug.cfs_rq[28]:/.exec_clock
    200.71 ± 88%   +1505.4%       3222 ± 75%  sched_debug.cfs_rq[29]:/.exec_clock
      3240 ± 20%     +72.0%       5574 ± 23%  sched_debug.cfs_rq[31]:/.min_vruntime
     97.47 ± 42%    +891.3%     966.27 ± 53%  sched_debug.cfs_rq[37]:/.exec_clock
      1403 ± 55%    +246.3%       4858 ± 53%  sched_debug.cfs_rq[37]:/.min_vruntime
      1461 ± 50%    +143.7%       3562 ± 52%  sched_debug.cfs_rq[41]:/.min_vruntime
    184.00 ± 46%    +671.9%       1420 ± 57%  sched_debug.cfs_rq[42]:/.avg->runnable_avg_sum
      3.25 ± 66%    +823.1%      30.00 ± 59%  sched_debug.cfs_rq[42]:/.tg_runnable_contrib
     69.67 ± 57%    +310.2%     285.75 ± 60%  sched_debug.cfs_rq[46]:/.blocked_load_avg
     69.67 ± 57%    +310.2%     285.75 ± 60%  sched_debug.cfs_rq[46]:/.tg_load_contrib
    107.61 ± 51%    +155.0%     274.41 ± 13%  sched_debug.cfs_rq[49]:/.exec_clock
      3332 ± 40%     -85.4%     487.59 ± 87%  sched_debug.cfs_rq[4]:/.exec_clock
     16.00 ±104%   +1359.4%     233.50 ± 81%  sched_debug.cfs_rq[53]:/.blocked_load_avg
     16.00 ±104%   +1360.9%     233.75 ± 81%  sched_debug.cfs_rq[53]:/.tg_load_contrib
      2502 ± 21%     +74.1%       4357 ± 22%  sched_debug.cfs_rq[5]:/.min_vruntime
    308.22 ± 17%     -53.7%     142.65 ± 64%  sched_debug.cfs_rq[60]:/.exec_clock
     91.55 ± 65%    +530.7%     577.43 ± 93%  sched_debug.cfs_rq[61]:/.exec_clock
      1023 ± 55%    +205.9%       3130 ± 47%  sched_debug.cfs_rq[61]:/.min_vruntime
     10369 ±  2%     -14.2%       8892 ±  6%  sched_debug.cfs_rq[63]:/.tg_load_avg
      2143 ±  6%     -11.1%       1905 ±  7%  sched_debug.cfs_rq[64]:/.tg->runnable_avg
     10383 ±  2%     -15.9%       8727 ±  4%  sched_debug.cfs_rq[64]:/.tg_load_avg
     76765 ± 94%     -98.9%     872.14 ± 62%  sched_debug.cfs_rq[65]:/.exec_clock
      2142 ±  6%     -11.1%       1905 ±  7%  sched_debug.cfs_rq[65]:/.tg->runnable_avg
     10306 ±  3%     -16.6%       8596 ±  6%  sched_debug.cfs_rq[65]:/.tg_load_avg
      2144 ±  6%     -10.9%       1912 ±  7%  sched_debug.cfs_rq[66]:/.tg->runnable_avg
     10312 ±  3%     -16.6%       8599 ±  6%  sched_debug.cfs_rq[66]:/.tg_load_avg
      2151 ±  6%     -11.1%       1913 ±  7%  sched_debug.cfs_rq[67]:/.tg->runnable_avg
     10302 ±  3%     -16.8%       8568 ±  7%  sched_debug.cfs_rq[67]:/.tg_load_avg
      2150 ±  6%     -10.8%       1917 ±  7%  sched_debug.cfs_rq[68]:/.tg->runnable_avg
     10242 ±  3%     -16.9%       8516 ±  7%  sched_debug.cfs_rq[68]:/.tg_load_avg
      2152 ±  6%     -11.2%       1911 ±  6%  sched_debug.cfs_rq[69]:/.tg->runnable_avg
     10201 ±  3%     -17.4%       8430 ±  7%  sched_debug.cfs_rq[69]:/.tg_load_avg
      2154 ±  6%     -11.3%       1910 ±  6%  sched_debug.cfs_rq[70]:/.tg->runnable_avg
     10132 ±  4%     -17.3%       8379 ±  7%  sched_debug.cfs_rq[70]:/.tg_load_avg
      2159 ±  5%     -11.3%       1914 ±  6%  sched_debug.cfs_rq[71]:/.tg->runnable_avg
     10119 ±  4%     -16.9%       8411 ±  7%  sched_debug.cfs_rq[71]:/.tg_load_avg
    251.79 ± 15%     -37.9%     156.44 ± 37%  sched_debug.cfs_rq[72]:/.exec_clock
      2161 ±  5%     -11.2%       1919 ±  6%  sched_debug.cfs_rq[72]:/.tg->runnable_avg
     10119 ±  4%     -16.7%       8429 ±  7%  sched_debug.cfs_rq[72]:/.tg_load_avg
      2123 ± 48%     +76.5%       3748 ± 22%  sched_debug.cfs_rq[73]:/.min_vruntime
      2164 ±  5%     -11.5%       1916 ±  6%  sched_debug.cfs_rq[73]:/.tg->runnable_avg
     10167 ±  4%     -17.5%       8389 ±  8%  sched_debug.cfs_rq[73]:/.tg_load_avg
      2816 ± 62%     -60.7%       1106 ± 47%  sched_debug.cfs_rq[74]:/.min_vruntime
      2169 ±  5%     -11.5%       1921 ±  6%  sched_debug.cfs_rq[74]:/.tg->runnable_avg
     10166 ±  3%     -17.5%       8388 ±  8%  sched_debug.cfs_rq[74]:/.tg_load_avg
      2167 ±  6%     -11.5%       1918 ±  6%  sched_debug.cfs_rq[75]:/.tg->runnable_avg
     10141 ±  3%     -18.1%       8304 ±  7%  sched_debug.cfs_rq[75]:/.tg_load_avg
      2165 ±  6%     -11.6%       1915 ±  6%  sched_debug.cfs_rq[76]:/.tg->runnable_avg
     10115 ±  3%     -18.3%       8261 ±  7%  sched_debug.cfs_rq[76]:/.tg_load_avg
    164.34 ± 26%     +61.6%     265.63 ± 31%  sched_debug.cfs_rq[77]:/.exec_clock
      1944 ± 19%     +92.2%       3736 ± 44%  sched_debug.cfs_rq[77]:/.min_vruntime
      2165 ±  6%     -11.5%       1917 ±  7%  sched_debug.cfs_rq[77]:/.tg->runnable_avg
      9935 ±  2%     -17.0%       8243 ±  7%  sched_debug.cfs_rq[77]:/.tg_load_avg
      2169 ±  6%     -11.5%       1920 ±  6%  sched_debug.cfs_rq[78]:/.tg->runnable_avg
      9924 ±  2%     -16.6%       8276 ±  7%  sched_debug.cfs_rq[78]:/.tg_load_avg
      2170 ±  6%     -11.3%       1924 ±  6%  sched_debug.cfs_rq[79]:/.tg->runnable_avg
      9901 ±  3%     -16.2%       8301 ±  7%  sched_debug.cfs_rq[79]:/.tg_load_avg
      3130 ± 24%     +84.8%       5784 ± 22%  sched_debug.cfs_rq[7]:/.min_vruntime
     54.00 ±155%    +502.8%     325.50 ± 71%  sched_debug.cfs_rq[8]:/.blocked_load_avg
      0.75 ±110%    +233.3%       2.50 ± 20%  sched_debug.cfs_rq[8]:/.nr_spread_over
     54.00 ±155%    +510.6%     329.75 ± 70%  sched_debug.cfs_rq[8]:/.tg_load_contrib
    463.50 ± 17%    +284.5%       1782 ± 23%  sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
      9.25 ± 15%    +313.5%      38.25 ± 25%  sched_debug.cfs_rq[9]:/.tg_runnable_contrib
     10937 ± 12%     +33.6%      14607 ± 12%  sched_debug.cpu#1.nr_load_updates
      7725 ±  6%     +24.0%       9575 ± 10%  sched_debug.cpu#13.nr_load_updates
      1854 ± 67%    +752.3%      15802 ± 63%  sched_debug.cpu#13.nr_switches
      2061 ± 54%    +672.2%      15918 ± 64%  sched_debug.cpu#13.sched_count
    872.75 ± 67%    +785.0%       7723 ± 63%  sched_debug.cpu#13.sched_goidle
    277.25 ± 34%    +315.5%       1152 ± 51%  sched_debug.cpu#13.ttwu_local
      4484 ±114%    +270.1%      16600 ± 96%  sched_debug.cpu#21.nr_switches
      2158 ±115%    +280.1%       8205 ± 97%  sched_debug.cpu#21.sched_goidle
      7863 ±  9%     +20.8%       9497 ± 13%  sched_debug.cpu#25.nr_load_updates
      7848 ± 15%     +59.2%      12495 ± 24%  sched_debug.cpu#29.nr_load_updates
      3109 ±103%    +326.7%      13267 ± 75%  sched_debug.cpu#29.nr_switches
      3510 ± 85%    +288.3%      13631 ± 71%  sched_debug.cpu#29.sched_count
      1502 ±105%    +280.4%       5714 ± 75%  sched_debug.cpu#29.sched_goidle
      1473 ± 97%    +293.8%       5803 ± 54%  sched_debug.cpu#29.ttwu_count
    708.25 ±119%    +352.6%       3205 ± 65%  sched_debug.cpu#29.ttwu_local
      2741 ± 40%     -50.6%       1353 ± 35%  sched_debug.cpu#32.nr_switches
      2747 ± 40%     -50.5%       1358 ± 35%  sched_debug.cpu#32.sched_count
      1285 ± 43%     -53.4%     598.50 ± 34%  sched_debug.cpu#32.sched_goidle
      6713 ±  2%     +10.3%       7406 ±  2%  sched_debug.cpu#37.nr_load_updates
    701.00 ± 54%    +589.1%       4830 ± 52%  sched_debug.cpu#37.nr_switches
    707.50 ± 54%    +583.7%       4837 ± 52%  sched_debug.cpu#37.sched_count
    306.50 ± 55%    +650.9%       2301 ± 55%  sched_debug.cpu#37.sched_goidle
    292.25 ± 64%    +743.1%       2464 ±112%  sched_debug.cpu#37.ttwu_count
    178.00 ± 61%    +125.3%     401.00 ±  4%  sched_debug.cpu#37.ttwu_local
     17407 ± 65%     -65.6%       5986 ± 43%  sched_debug.cpu#4.nr_switches
    406.25 ± 81%   +1106.8%       4902 ± 98%  sched_debug.cpu#41.ttwu_count
    179.25 ± 57%    +119.2%     393.00 ± 11%  sched_debug.cpu#41.ttwu_local
      3.50 ± 14%     -35.7%       2.25 ± 19%  sched_debug.cpu#42.nr_uninterruptible
      2593 ± 74%     -55.7%       1148 ± 32%  sched_debug.cpu#47.nr_switches
      2599 ± 74%     -55.6%       1153 ± 32%  sched_debug.cpu#47.sched_count
    766.00 ± 53%    +551.0%       4986 ± 76%  sched_debug.cpu#49.nr_switches
    344.00 ± 52%    +603.2%       2419 ± 78%  sched_debug.cpu#49.sched_goidle
    693.75 ±133%    +671.1%       5349 ± 96%  sched_debug.cpu#49.ttwu_count
      8119 ±  7%     +23.0%       9984 ± 16%  sched_debug.cpu#5.nr_load_updates
      2417 ± 52%    +516.7%      14908 ± 68%  sched_debug.cpu#5.nr_switches
      2712 ± 40%    +463.7%      15290 ± 64%  sched_debug.cpu#5.sched_count
      1116 ± 55%    +557.1%       7336 ± 69%  sched_debug.cpu#5.sched_goidle
    714.75 ± 53%    +124.5%       1604 ± 19%  sched_debug.cpu#57.nr_switches
    720.00 ± 52%    +123.6%       1609 ± 19%  sched_debug.cpu#57.sched_count
    313.75 ± 53%    +132.7%     730.00 ± 20%  sched_debug.cpu#57.sched_goidle
    289.75 ± 80%    +463.1%       1631 ± 86%  sched_debug.cpu#57.ttwu_count
      4164 ± 83%     -80.4%     815.25 ± 49%  sched_debug.cpu#60.nr_switches
      2008 ± 86%     -82.0%     362.50 ± 49%  sched_debug.cpu#60.sched_goidle
    800.25 ± 44%     +59.8%       1279 ±  9%  sched_debug.cpu#61.nr_switches
    807.00 ± 43%     +59.1%       1284 ±  9%  sched_debug.cpu#61.sched_count
      1338 ±138%    +556.8%       8791 ± 31%  sched_debug.cpu#61.ttwu_count
    167.25 ± 60%    +112.6%     355.50 ±  5%  sched_debug.cpu#61.ttwu_local
     -0.50 ±-300%    -450.0%       1.75 ± 24%  sched_debug.cpu#63.nr_uninterruptible
    339.75 ±  8%     -12.3%     298.00 ±  8%  sched_debug.cpu#63.ttwu_local
      5420 ± 77%    +239.4%      18395 ± 39%  sched_debug.cpu#65.nr_switches
      6364 ± 57%    +193.7%      18690 ± 39%  sched_debug.cpu#65.sched_count
      2557 ± 83%    +256.1%       9106 ± 39%  sched_debug.cpu#65.sched_goidle
    978.50 ±  9%     +37.9%       1349 ± 19%  sched_debug.cpu#68.ttwu_count
    735.75 ± 50%    +117.2%       1597 ± 32%  sched_debug.cpu#73.nr_switches
    741.50 ± 50%    +116.1%       1602 ± 32%  sched_debug.cpu#73.sched_count
    300.50 ± 55%    +140.0%     721.25 ± 36%  sched_debug.cpu#73.sched_goidle
    214.75 ± 54%     +65.1%     354.50 ±  5%  sched_debug.cpu#73.ttwu_local
      9.75 ± 37%     -71.8%       2.75 ±136%  sched_debug.cpu#77.nr_uninterruptible
    960.50 ±117%    +508.8%       5848 ± 91%  sched_debug.cpu#77.ttwu_count
      1309 ± 14%     +54.1%       2018 ± 29%  sched_debug.cpu#8.ttwu_count
    388.00 ±  6%    +104.0%     791.50 ± 45%  sched_debug.cpu#8.ttwu_local
      2319 ± 56%    +210.3%       7198 ± 39%  sched_debug.cpu#9.nr_switches
      2514 ± 47%    +197.1%       7472 ± 33%  sched_debug.cpu#9.sched_count
      1064 ± 57%    +189.9%       3085 ± 32%  sched_debug.cpu#9.sched_goidle


lkp-wsx02: Westmere-EX
Memory: 128G




                             aim9.creat-clo.ops_per_sec

  600000 ++-------------------*---*---*-------------------------------------+
         *.*.*.*.*.*    *.*.*   *   *   *..*.*.*.*.*.*.*.*.*.*.  .*.*.*.*.*.*
  500000 ++        :    :       O O O O O      O   O O O O O O O.     O     |
         O O     O :    O O O O            O O                      O       |
         |   O O   O: O :                        O                O         |
  400000 ++         :  :                                                    |
         |          :  :                                                    |
  300000 ++         :  :                                                    |
         |           : :                                                    |
  200000 ++          : :                                                    |
         |           : :                                                    |
         |           ::                                                     |
  100000 ++           :                                                     |
         |            :                                                     |
       0 ++-----------*-----------------------------------------------------+

	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

View attachment "job.yaml" of type "text/plain" (3153 bytes)

View attachment "reproduce" of type "text/plain" (5911 bytes)
