Message-ID: <87y4dvg62w.fsf@yhuang-dev.intel.com>
Date:	Wed, 18 Nov 2015 14:44:07 +0800
From:	kernel test robot <ying.huang@...ux.intel.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Eric Dumazet <edumazet@...gle.com>
Subject: [lkp] [vfs] f3f86e33dc: -5.3% will-it-scale.per_process_ops

FYI, we noticed the changes below on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit f3f86e33dc3da437fa4f204588ce7c78ea756982 ("vfs: Fix pathological performance case for __alloc_fd()")
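
For context, that commit speeds up __alloc_fd() by keeping a summary bitmap
("full_fds_bits" in struct fdtable) in which each bit marks a word of the
open-fds bitmap that is completely full, so the search for the lowest free
descriptor can skip such words in one step instead of scanning them bit by
bit.  The following is a minimal user-space sketch of that two-level bitmap
idea; the names (full_words, take_fd, NWORDS) and sizes are simplified
illustrations, not the actual kernel code:

#include <limits.h>
#include <stdio.h>

#define WORD_BITS (sizeof(unsigned long) * CHAR_BIT)
#define NWORDS    4                        /* toy table: 4 * 64 = 256 "fds" */

static unsigned long open_fds[NWORDS];     /* bit set => fd in use */
static unsigned long full_words;           /* bit set => that word is full */

/* Find the lowest free fd, skipping completely-full words in one step. */
static int find_next_fd(void)
{
        for (unsigned w = 0; w < NWORDS; w++) {
                if (full_words & (1UL << w))
                        continue;          /* whole word occupied: skip it */
                /* lowest clear bit in this word */
                return w * WORD_BITS + __builtin_ctzl(~open_fds[w]);
        }
        return -1;                         /* table exhausted */
}

static void take_fd(int fd)
{
        unsigned w = fd / WORD_BITS, bit = fd % WORD_BITS;

        open_fds[w] |= 1UL << bit;
        if (open_fds[w] == ~0UL)
                full_words |= 1UL << w;    /* maintain the summary bitmap */
}

int main(void)
{
        for (int i = 0; i < 70; i++)       /* occupy fds 0..69 */
                take_fd(find_next_fd());
        printf("next free fd: %d\n", find_next_fd());  /* prints 70 */
        return 0;
}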


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  ivb42/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/dup1
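
The "dup1" testcase named above is essentially a tight dup()/close() loop
per task, so every iteration goes through fd allocation in the vfs; that is
why a change to __alloc_fd() shows up directly in per_process_ops.  A rough
user-space approximation (an illustrative sketch, not the actual
will-it-scale source):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        long ops = 0;

        for (long i = 0; i < 100000000L; i++) {
                int fd = dup(0);           /* allocate the lowest free fd */
                if (fd < 0)
                        break;
                close(fd);                 /* release it immediately */
                ops++;
        }
        printf("ops: %ld\n", ops);
        return 0;
}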

commit: 
  8a28d67457b613258aa0578ccece206d166f2b9f
  f3f86e33dc3da437fa4f204588ce7c78ea756982

8a28d67457b61325 f3f86e33dc3da437fa4f204588 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   5994379 ±  0%      -5.3%    5678711 ±  0%  will-it-scale.per_process_ops
   1440545 ±  0%      -5.1%    1367766 ±  2%  will-it-scale.per_thread_ops
      0.57 ±  0%      -5.9%       0.54 ±  0%  will-it-scale.scalability
      4.47 ±  2%      -3.1%       4.33 ±  1%  turbostat.RAMWatt
     59880 ±  5%     -13.1%      52055 ± 11%  cpuidle.C1-IVT.usage
    597.50 ±  4%     -19.7%     479.50 ± 16%  cpuidle.POLL.usage
  15756223 ±  0%    +367.7%   73688311 ± 84%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
  35858260 ±  0%    +113.3%   76474871 ± 77%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      1560 ±171%    +300.1%       6241 ±  0%  numa-numastat.node0.other_node
      3101 ± 99%     -99.7%       9.50 ± 67%  numa-numastat.node1.other_node
      2980 ±  3%     -13.4%       2582 ±  1%  slabinfo.kmalloc-2048.active_objs
      3139 ±  3%     -12.5%       2746 ±  1%  slabinfo.kmalloc-2048.num_objs
      5018 ± 14%     -50.4%       2487 ± 13%  numa-vmstat.node0.nr_active_anon
      3121 ± 31%     -77.6%     700.00 ±132%  numa-vmstat.node0.nr_shmem
      3349 ± 20%     +76.6%       5916 ±  7%  numa-vmstat.node1.nr_active_anon
      1210 ± 80%    +200.8%       3640 ± 25%  numa-vmstat.node1.nr_shmem
     70442 ±  5%     -12.7%      61484 ±  0%  numa-meminfo.node0.Active
     20079 ± 14%     -50.4%       9954 ± 13%  numa-meminfo.node0.Active(anon)
     12487 ± 31%     -77.6%       2801 ±132%  numa-meminfo.node0.Shmem
     61970 ±  5%     +18.0%      73096 ±  1%  numa-meminfo.node1.Active
     13402 ± 20%     +76.6%      23671 ±  7%  numa-meminfo.node1.Active(anon)
      4843 ± 80%    +200.7%      14564 ± 25%  numa-meminfo.node1.Shmem
   1999660 ±  2%      -9.1%    1817792 ±  6%  sched_debug.cfs_rq[0]:/.min_vruntime
    814.25 ±  6%     -11.9%     717.00 ± 10%  sched_debug.cfs_rq[0]:/.util_avg
   -220009 ±-25%     -83.5%     -36294 ±-314%  sched_debug.cfs_rq[10]:/.spread0
   -220410 ±-25%     -82.7%     -38205 ±-290%  sched_debug.cfs_rq[11]:/.spread0
   -868154 ± -5%     -20.7%    -688065 ±-16%  sched_debug.cfs_rq[14]:/.spread0
     13.00 ±  0%    +105.8%      26.75 ± 61%  sched_debug.cfs_rq[15]:/.load
   -952278 ±-16%     -27.9%    -687025 ±-16%  sched_debug.cfs_rq[16]:/.spread0
   -876660 ± -6%     -21.8%    -685915 ±-15%  sched_debug.cfs_rq[17]:/.spread0
   -869841 ± -6%     -20.5%    -691511 ±-15%  sched_debug.cfs_rq[18]:/.spread0
   -872906 ± -6%     -21.0%    -689435 ±-15%  sched_debug.cfs_rq[19]:/.spread0
   -220042 ±-24%     -82.8%     -37798 ±-282%  sched_debug.cfs_rq[1]:/.spread0
   -870736 ± -6%     -20.9%    -689178 ±-16%  sched_debug.cfs_rq[20]:/.spread0
   -870782 ± -5%     -20.6%    -691440 ±-16%  sched_debug.cfs_rq[21]:/.spread0
     12.50 ± 12%     +20.0%      15.00 ±  8%  sched_debug.cfs_rq[23]:/.load_avg
   -947289 ±-16%     -27.8%    -684292 ±-16%  sched_debug.cfs_rq[23]:/.spread0
     12.50 ± 12%     +20.0%      15.00 ±  8%  sched_debug.cfs_rq[23]:/.tg_load_avg_contrib
    424.00 ± 13%     +27.4%     540.00 ±  9%  sched_debug.cfs_rq[23]:/.util_avg
   -180921 ±-30%    -100.2%     424.29 ±26645%  sched_debug.cfs_rq[25]:/.spread0
   -179335 ±-30%     -82.3%     -31706 ±-346%  sched_debug.cfs_rq[26]:/.spread0
   -180972 ±-30%    -100.1%     163.84 ±68609%  sched_debug.cfs_rq[27]:/.spread0
   -179636 ±-30%    -100.4%     736.15 ±15384%  sched_debug.cfs_rq[28]:/.spread0
   -180380 ±-30%    -101.1%       1963 ±5772%  sched_debug.cfs_rq[29]:/.spread0
     26.00 ±  3%     -18.3%      21.25 ± 22%  sched_debug.cfs_rq[2]:/.load
     29.50 ±  9%     -21.2%      23.25 ± 23%  sched_debug.cfs_rq[2]:/.runnable_load_avg
   -211354 ±-27%     -97.3%      -5780 ±-2383%  sched_debug.cfs_rq[2]:/.spread0
    762.25 ±  6%     -17.1%     632.00 ± 11%  sched_debug.cfs_rq[2]:/.util_avg
   -179346 ±-31%    -101.0%       1767 ±6351%  sched_debug.cfs_rq[30]:/.spread0
   -182129 ±-30%     -99.9%    -200.51 ±-56625%  sched_debug.cfs_rq[31]:/.spread0
   -178388 ±-30%     -99.9%    -162.33 ±-69718%  sched_debug.cfs_rq[32]:/.spread0
   -178678 ±-30%    -100.0%     -67.48 ±-166628%  sched_debug.cfs_rq[33]:/.spread0
   -177514 ±-30%    -100.1%     200.37 ±56326%  sched_debug.cfs_rq[34]:/.spread0
   -178339 ±-29%    -101.6%       2870 ±3873%  sched_debug.cfs_rq[35]:/.spread0
   -795803 ± -8%     -34.5%    -521305 ±-40%  sched_debug.cfs_rq[37]:/.spread0
   -783897 ± -6%     -22.6%    -607100 ±-18%  sched_debug.cfs_rq[38]:/.spread0
      3.00 ±  0%    +250.0%      10.50 ± 40%  sched_debug.cfs_rq[39]:/.load_avg
   -784040 ± -6%     -33.2%    -523669 ±-39%  sched_debug.cfs_rq[39]:/.spread0
      3.00 ±  0%    +250.0%      10.50 ± 40%  sched_debug.cfs_rq[39]:/.tg_load_avg_contrib
    173.75 ±  4%     +36.8%     237.75 ± 30%  sched_debug.cfs_rq[39]:/.util_avg
   -220092 ±-24%     -82.4%     -38783 ±-288%  sched_debug.cfs_rq[3]:/.spread0
   -783338 ± -6%     -22.8%    -604971 ±-18%  sched_debug.cfs_rq[41]:/.spread0
   -784423 ± -6%     -23.1%    -603402 ±-17%  sched_debug.cfs_rq[42]:/.spread0
   -785872 ± -6%     -23.0%    -605005 ±-18%  sched_debug.cfs_rq[43]:/.spread0
   -782962 ± -6%     -22.9%    -603838 ±-19%  sched_debug.cfs_rq[44]:/.spread0
   -783170 ± -6%     -23.2%    -601383 ±-18%  sched_debug.cfs_rq[45]:/.spread0
   -784950 ± -6%     -23.2%    -602937 ±-18%  sched_debug.cfs_rq[46]:/.spread0
     32.25 ± 35%     -24.8%      24.25 ±  1%  sched_debug.cfs_rq[4]:/.load
   -217411 ±-24%     -83.2%     -36433 ±-300%  sched_debug.cfs_rq[4]:/.spread0
   -219424 ±-25%     -83.0%     -37233 ±-299%  sched_debug.cfs_rq[5]:/.spread0
   -219112 ±-25%     -82.4%     -38536 ±-289%  sched_debug.cfs_rq[6]:/.spread0
   -218643 ±-24%     -82.8%     -37629 ±-298%  sched_debug.cfs_rq[7]:/.spread0
   -220909 ±-24%     -85.0%     -33175 ±-350%  sched_debug.cfs_rq[8]:/.spread0
   -220076 ±-25%     -85.9%     -31115 ±-337%  sched_debug.cfs_rq[9]:/.spread0
     89160 ±  6%      -8.7%      81395 ±  7%  sched_debug.cpu#0.nr_load_updates
     -2.75 ±-126%    -172.7%       2.00 ±136%  sched_debug.cpu#12.nr_uninterruptible
     16563 ± 21%     -39.6%      10009 ±  9%  sched_debug.cpu#13.nr_switches
     16901 ± 21%     -34.5%      11064 ±  9%  sched_debug.cpu#13.sched_count
      6432 ± 27%     -47.2%       3396 ± 38%  sched_debug.cpu#13.sched_goidle
      7244 ± 14%     -45.3%       3961 ± 39%  sched_debug.cpu#14.sched_goidle
     13.00 ±  0%    +105.8%      26.75 ± 61%  sched_debug.cpu#15.load
      1554 ± 21%     +62.9%       2531 ± 30%  sched_debug.cpu#16.ttwu_local
      6965 ± 21%     +48.0%      10308 ± 17%  sched_debug.cpu#18.sched_count
     28.25 ±  2%     -17.7%      23.25 ± 23%  sched_debug.cpu#2.cpu_load[4]
     26.00 ±  3%     -18.3%      21.25 ± 22%  sched_debug.cpu#2.load
      2703 ±  9%     +11.5%       3014 ±  8%  sched_debug.cpu#24.curr->pid
    420.00 ± 27%     -34.1%     276.75 ± 30%  sched_debug.cpu#25.sched_goidle
    247.00 ± 24%     +55.7%     384.50 ± 26%  sched_debug.cpu#27.sched_goidle
     -2.25 ±-79%    -211.1%       2.50 ± 82%  sched_debug.cpu#30.nr_uninterruptible
    715.75 ± 46%     -44.2%     399.50 ± 35%  sched_debug.cpu#32.ttwu_count
    133.50 ± 22%     +99.4%     266.25 ± 29%  sched_debug.cpu#33.ttwu_local
      1212 ± 47%     -46.7%     646.25 ± 25%  sched_debug.cpu#35.nr_switches
    506.50 ± 46%     -51.6%     245.00 ± 30%  sched_debug.cpu#35.sched_goidle
     32.25 ± 35%     -24.8%      24.25 ±  1%  sched_debug.cpu#4.load
      2973 ± 46%    +161.2%       7766 ± 47%  sched_debug.cpu#40.nr_switches
      3062 ± 46%    +156.1%       7843 ± 47%  sched_debug.cpu#40.sched_count
      1219 ± 55%    +155.9%       3121 ± 60%  sched_debug.cpu#40.sched_goidle
      1429 ±  2%     +49.2%       2131 ± 41%  sched_debug.cpu#44.curr->pid
      1.75 ± 93%     -57.1%       0.75 ±145%  sched_debug.cpu#45.nr_uninterruptible
    433.75 ± 32%     +75.1%     759.50 ± 14%  sched_debug.cpu#6.ttwu_count


ivb42: Ivytown Ivy Bridge-EP
Memory: 64G




                             will-it-scale.per_process_ops

  6.05e+06 ++---------------------------------------------------------------+
           *.*.**.*.*.**.*.*.         .*.   *.          .*   .*.**.         |
     6e+06 ++                **.*.*.**   *.*  *.*.**.*.*  *.*      *.*.**.*.*
  5.95e+06 ++                                                               |
           |                                                                |
   5.9e+06 ++                                                               |
  5.85e+06 ++                                                               |
           |                                                                |
   5.8e+06 ++                                                               |
  5.75e+06 ++                                                               |
           |               O OO O                                           |
   5.7e+06 O+  OO   O O  O           O   O OO O O                           |
  5.65e+06 ++O    O    O          O    O          OO O                      |
           |                        O                                       |
   5.6e+06 ++---------------------------------------------------------------+

	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # installs the job's dependencies; job file is attached in this email
        bin/lkp run     job.yaml  # runs the attached job on the local machine


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

Attachments: job.yaml (text/plain, 3150 bytes), reproduce (text/plain, 3583 bytes)
