Date:	Wed, 01 Jun 2016 13:00:10 +0800
From:	"Huang\, Ying" <ying.huang@...el.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	"Huang\, Ying" <ying.huang@...el.com>,
	Ingo Molnar <mingo@...nel.org>, <lkp@...org>,
	Mike Galbraith <efault@....de>, <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [LKP] [lkp] [sched/fair] 53d3bc773e: hackbench.throughput -32.9% regression

Hi, Peter,

Peter Zijlstra <peterz@...radead.org> writes:

> On Tue, May 31, 2016 at 04:34:36PM +0800, Huang, Ying wrote:
>> Hi, Ingo,
>> 
>> Part of the regression has been recovered in v4.7-rc1, from -32.9% to
>> -9.8%.  But there is still some regression.  Is it possible to fully
>> restore it?
>
> After much searching for how you guys run hackbench... I figured
> something like:
>
>   perf bench sched messaging -g 20 --thread -l 60000

There is a reproduce file attached to the original report email; its
contents are something like below:

2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-05-15 08:57:03 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:57:50 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:58:33 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:59:15 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:59:58 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:00:43 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:01:22 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:01:57 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:02:39 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:03:22 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:04:10 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:04:53 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:05:39 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:06:24 /usr/bin/hackbench -g 24 --threads -l 60000

Hope that will help you reproduce it.
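
For convenience, the reproduce file above boils down to something like
this (a minimal sketch, assuming hackbench lives at /usr/bin/hackbench
and every CPU exposes a cpufreq interface):

  # Pin all CPUs to the performance governor, as the script does per CPU.
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do
      echo performance > "$g"
  done
  # The script then runs hackbench 14 times back to back.
  for i in $(seq 1 14) ; do
      /usr/bin/hackbench -g 24 --threads -l 60000
  done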

> on my IVB-EP (2*10*2) is similar to your IVT thing.
>
> And running something like:
>
>   for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i ; done
>   perf stat --null --repeat 10 -- perf bench sched messaging -g 20 --thread -l 60000 | grep "seconds time elapsed"
>
> gets me:
>
> v4.6:
>
>       36.786914089 seconds time elapsed ( +-  0.49% )
>       37.054017355 seconds time elapsed ( +-  1.05% )
>
>
> origin/master (v4.7-rc1-ish):
>
>       34.757435264 seconds time elapsed ( +-  3.34% )
>       35.396252515 seconds time elapsed ( +-  3.38% )
>
>
> Which doesn't show a regression between v4.6 and HEAD; in fact it shows
> an improvement.

Yes.  For the hackbench test, linus/master (v4.7-rc1+) is better than
v4.6, but it is worse than v4.6-rc7.  Details are as below.

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/mode/ipc:
  ivb42/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/50%/threads/socket

commit: 
  v4.6-rc7
  v4.6
  367d3fd50566a313946fa9c5b2116a81bf3807e4

        v4.6-rc7                       v4.6 367d3fd50566a313946fa9c5b2 
---------------- -------------------------- -------------------------- 
         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \  
    198307 ±  0%     -33.4%     132165 ±  3%     -11.8%     174857 ±  0%  hackbench.throughput
    625.91 ±  0%      -2.0%     613.12 ±  1%      -2.1%     612.85 ±  0%  hackbench.time.elapsed_time
    625.91 ±  0%      -2.0%     613.12 ±  1%      -2.1%     612.85 ±  0%  hackbench.time.elapsed_time.max
 1.611e+08 ±  0%    +254.7%  5.712e+08 ±  4%     -25.3%  1.203e+08 ±  5%  hackbench.time.involuntary_context_switches
    212287 ±  2%     +22.3%     259622 ±  4%     +33.0%     282261 ±  1%  hackbench.time.minor_page_faults
      4391 ±  0%      +5.7%       4643 ±  0%      -6.9%       4090 ±  0%  hackbench.time.percent_of_cpu_this_job_got
     26154 ±  0%      +5.2%      27509 ±  1%      -8.5%      23935 ±  0%  hackbench.time.system_time
      1336 ±  0%     -28.1%     961.07 ±  2%     -14.8%       1138 ±  0%  hackbench.time.user_time
 7.442e+08 ±  0%    +129.6%  1.709e+09 ±  4%     -17.5%  6.139e+08 ±  2%  hackbench.time.voluntary_context_switches
      4157 ±  1%     -39.0%       2536 ± 15%     +44.6%       6011 ±  2%  uptime.idle
   1656569 ±  0%    +131.8%    3840033 ±  3%     -10.2%    1486840 ±  2%  vmstat.system.cs
    225682 ±  0%    +335.2%     982245 ±  5%      -4.2%     216300 ±  7%  vmstat.system.in
   4416560 ±  3%      +7.3%    4737257 ±  2%     -18.1%    3617836 ±  1%  softirqs.RCU
   2591680 ±  0%     -37.9%    1608431 ±  7%     +47.9%    3833673 ±  0%  softirqs.SCHED
  13948275 ±  0%      +3.3%   14406201 ±  1%      -8.9%   12703887 ±  0%  softirqs.TIMER
 1.611e+08 ±  0%    +254.7%  5.712e+08 ±  4%     -25.3%  1.203e+08 ±  5%  time.involuntary_context_switches
    212287 ±  2%     +22.3%     259622 ±  4%     +33.0%     282261 ±  1%  time.minor_page_faults
      1336 ±  0%     -28.1%     961.07 ±  2%     -14.8%       1138 ±  0%  time.user_time
 7.442e+08 ±  0%    +129.6%  1.709e+09 ±  4%     -17.5%  6.139e+08 ±  2%  time.voluntary_context_switches
    176970 ±  1%      +2.4%     181276 ±  0%     -51.5%      85865 ±  0%  meminfo.Active
    101149 ±  2%      +0.4%     101589 ±  1%     -85.4%      14807 ±  0%  meminfo.Active(file)
    390916 ±  0%      +1.1%     395022 ±  0%     +23.2%     481664 ±  0%  meminfo.Inactive
    381267 ±  0%      +1.1%     385296 ±  0%     +23.8%     472035 ±  0%  meminfo.Inactive(file)
    143716 ±  0%     -12.4%     125923 ±  1%      -2.4%     140230 ±  0%  meminfo.SUnreclaim
    194906 ±  0%      -8.9%     177650 ±  1%      -1.8%     191478 ±  0%  meminfo.Slab
   1162151 ±  6%     +11.4%    1294775 ±  2%     +17.5%    1365360 ±  1%  numa-numastat.node0.local_node
   1163400 ±  6%     +11.5%    1297646 ±  2%     +17.4%    1365361 ±  1%  numa-numastat.node0.numa_hit
      1249 ±197%    +129.8%       2871 ± 95%     -99.9%       0.67 ± 70%  numa-numastat.node0.other_node
   1084104 ±  6%     +15.1%    1247352 ±  4%     +22.0%    1323149 ±  1%  numa-numastat.node1.local_node
   1089973 ±  6%     +14.9%    1252683 ±  4%     +21.4%    1323149 ±  1%  numa-numastat.node1.numa_hit
      5868 ± 40%      -9.2%       5330 ± 70%    -100.0%       0.33 ±141%  numa-numastat.node1.other_node
     92.11 ±  0%      +5.5%      97.16 ±  0%      -6.3%      86.33 ±  0%  turbostat.%Busy
      2756 ±  0%      +5.5%       2907 ±  0%      -6.3%       2584 ±  0%  turbostat.Avg_MHz
      7.70 ±  0%     -65.6%       2.64 ± 12%     +74.9%      13.46 ±  2%  turbostat.CPU%c1
    180.27 ±  0%      -1.6%     177.34 ±  0%      -2.5%     175.80 ±  0%  turbostat.CorWatt
    210.07 ±  0%      -1.1%     207.71 ±  0%      -1.9%     206.01 ±  0%  turbostat.PkgWatt
      5.81 ±  0%     +35.8%       7.88 ±  3%     +24.2%       7.21 ±  2%  turbostat.RAMWatt
    102504 ± 20%      -5.6%      96726 ± 25%     -65.7%      35129 ± 52%  numa-meminfo.node0.Active
     50026 ±  2%      +2.3%      51197 ±  4%     -85.2%       7408 ±  0%  numa-meminfo.node0.Active(file)
    198553 ±  2%      +0.4%     199265 ±  3%     +22.0%     242211 ±  1%  numa-meminfo.node0.Inactive
    191148 ±  1%      +1.7%     194350 ±  3%     +23.5%     235978 ±  0%  numa-meminfo.node0.Inactive(file)
     74572 ±  8%     -11.8%      65807 ±  5%      -4.4%      71257 ±  3%  numa-meminfo.node0.SUnreclaim
     51121 ±  5%      -1.4%      50391 ±  4%     -85.5%       7398 ±  0%  numa-meminfo.node1.Active(file)
    192353 ±  1%      +1.8%     195730 ±  2%     +24.5%     239430 ±  1%  numa-meminfo.node1.Inactive
    190119 ±  0%      +0.4%     190946 ±  1%     +24.2%     236055 ±  0%  numa-meminfo.node1.Inactive(file)
    472112 ±  5%      +3.0%     486190 ±  5%      +8.2%     510902 ±  4%  numa-meminfo.node1.MemUsed
     12506 ±  2%      +2.3%      12799 ±  4%     -85.2%       1852 ±  0%  numa-vmstat.node0.nr_active_file
     47786 ±  1%      +1.7%      48587 ±  3%     +23.5%      58994 ±  0%  numa-vmstat.node0.nr_inactive_file
     18626 ±  8%     -11.7%      16446 ±  5%      -4.4%      17801 ±  3%  numa-vmstat.node0.nr_slab_unreclaimable
     66037 ±  3%      +3.1%      68095 ±  4%    -100.0%       0.00 ±  0%  numa-vmstat.node0.numa_other
     12780 ±  5%      -1.4%      12597 ±  4%     -85.5%       1849 ±  0%  numa-vmstat.node1.nr_active_file
     47529 ±  0%      +0.4%      47735 ±  1%     +24.2%      59013 ±  0%  numa-vmstat.node1.nr_inactive_file
    698206 ±  5%     +11.3%     777438 ±  4%     +17.6%     820805 ±  2%  numa-vmstat.node1.numa_hit
    674672 ±  6%     +12.0%     755944 ±  4%     +21.7%     820805 ±  2%  numa-vmstat.node1.numa_local
     23532 ± 10%      -8.7%      21493 ± 15%    -100.0%       0.00 ±  0%  numa-vmstat.node1.numa_other
 1.766e+09 ±  0%     -60.1%  7.057e+08 ± 11%     +70.1%  3.004e+09 ±  1%  cpuidle.C1-IVT.time
 1.125e+08 ±  0%     -41.9%   65415380 ± 10%     +38.6%  1.559e+08 ±  0%  cpuidle.C1-IVT.usage
  28400387 ±  1%     -86.0%    3980259 ± 24%     +21.9%   34611888 ±  3%  cpuidle.C1E-IVT.time
    308989 ±  0%     -84.5%      47825 ± 23%     +10.1%     340115 ±  3%  cpuidle.C1E-IVT.usage
  58891432 ±  0%     -88.2%    6936400 ± 20%     +36.2%   80209704 ±  4%  cpuidle.C3-IVT.time
    521047 ±  0%     -86.5%      70085 ± 22%     +16.6%     607626 ±  3%  cpuidle.C3-IVT.usage
 5.375e+08 ±  0%     -75.8%  1.298e+08 ± 11%     +55.6%  8.366e+08 ±  2%  cpuidle.C6-IVT.time
   4062211 ±  0%     -85.1%     603908 ± 22%     +28.0%    5200129 ±  2%  cpuidle.C6-IVT.usage
     15694 ±  6%    +386.2%      76300 ±145%    +774.3%     137212 ± 62%  cpuidle.POLL.time
      2751 ±  3%     -52.5%       1308 ± 18%     +15.4%       3176 ±  2%  cpuidle.POLL.usage
     25287 ±  2%      +0.4%      25397 ±  1%     -85.4%       3701 ±  0%  proc-vmstat.nr_active_file
     95316 ±  0%      +1.1%      96323 ±  0%     +23.8%     118008 ±  0%  proc-vmstat.nr_inactive_file
     35930 ±  0%     -12.3%      31511 ±  1%      -2.5%      35048 ±  0%  proc-vmstat.nr_slab_unreclaimable
    154964 ±  3%     +40.6%     217915 ±  5%     +48.6%     230354 ±  2%  proc-vmstat.numa_hint_faults
    128683 ±  4%     +46.4%     188443 ±  5%     +45.5%     187179 ±  2%  proc-vmstat.numa_hint_faults_local
   2247802 ±  0%     +13.2%    2544572 ±  2%     +19.5%    2685990 ±  0%  proc-vmstat.numa_hit
   2241597 ±  0%     +13.2%    2537511 ±  2%     +19.8%    2685989 ±  0%  proc-vmstat.numa_local
      6205 ±  0%     +13.8%       7060 ± 18%    -100.0%       1.00 ±  0%  proc-vmstat.numa_other
     23151 ±  1%     -25.8%      17189 ±  4%      -1.7%      22762 ±  0%  proc-vmstat.numa_pages_migrated
    155763 ±  3%     +43.4%     223408 ±  5%     +49.7%     233247 ±  2%  proc-vmstat.numa_pte_updates
     14010 ±  1%     +16.3%      16287 ±  7%     -17.1%      11610 ±  0%  proc-vmstat.pgactivate
    373910 ±  2%     +28.4%     479928 ±  4%     +30.1%     486506 ±  1%  proc-vmstat.pgalloc_dma32
   7157922 ±  1%     +30.9%    9370533 ±  2%     +38.0%    9878095 ±  0%  proc-vmstat.pgalloc_normal
   7509133 ±  1%     +30.9%    9827974 ±  2%     +37.8%   10345598 ±  0%  proc-vmstat.pgfree
     23151 ±  1%     -25.8%      17189 ±  4%      -1.7%      22762 ±  0%  proc-vmstat.pgmigrate_success
    737.40 ±  4%     -10.3%     661.25 ±  3%     -30.8%     510.00 ±  0%  slabinfo.RAW.active_objs
    737.40 ±  4%     -10.3%     661.25 ±  3%     -30.8%     510.00 ±  0%  slabinfo.RAW.num_objs
      5762 ±  6%     -19.2%       4653 ±  3%    -100.0%       0.00 ± -1%  slabinfo.UNIX.active_objs
    172.60 ±  6%     -18.9%     140.00 ±  3%    -100.0%       0.00 ± -1%  slabinfo.UNIX.active_slabs
      5892 ±  6%     -19.0%       4775 ±  3%    -100.0%       0.00 ± -1%  slabinfo.UNIX.num_objs
    172.60 ±  6%     -18.9%     140.00 ±  3%    -100.0%       0.00 ± -1%  slabinfo.UNIX.num_slabs
     37256 ±  3%      -8.7%      34010 ±  3%      +1.6%      37863 ±  0%  slabinfo.anon_vma_chain.active_objs
     37401 ±  3%      -8.8%      34094 ±  3%      +1.5%      37948 ±  0%  slabinfo.anon_vma_chain.num_objs
      4509 ±  1%     +13.8%       5130 ±  9%      +8.3%       4885 ± 15%  slabinfo.cred_jar.active_objs
      4509 ±  1%     +13.8%       5130 ±  9%      +8.3%       4885 ± 15%  slabinfo.cred_jar.num_objs
      2783 ±  2%      +3.4%       2877 ±  4%     +54.3%       4295 ±  0%  slabinfo.kmalloc-1024.active_objs
      2858 ±  1%      +2.6%       2932 ±  3%     +53.4%       4385 ±  0%  slabinfo.kmalloc-1024.num_objs
     25441 ±  1%     -10.1%      22884 ±  1%      -3.8%      24477 ±  2%  slabinfo.kmalloc-16.active_objs
     25441 ±  1%     -10.1%      22884 ±  1%      -3.8%      24477 ±  2%  slabinfo.kmalloc-16.num_objs
     43013 ±  0%     -41.4%      25205 ±  5%      +3.1%      44366 ±  1%  slabinfo.kmalloc-256.active_objs
    854.60 ±  0%     -42.0%     495.25 ±  5%      -1.0%     846.00 ±  0%  slabinfo.kmalloc-256.active_slabs
     54719 ±  0%     -42.0%      31735 ±  5%      -1.0%      54189 ±  0%  slabinfo.kmalloc-256.num_objs
    854.60 ±  0%     -42.0%     495.25 ±  5%      -1.0%     846.00 ±  0%  slabinfo.kmalloc-256.num_slabs
     47683 ±  0%     -37.7%      29715 ±  4%      +2.9%      49067 ±  0%  slabinfo.kmalloc-512.active_objs
    924.00 ±  0%     -39.0%     563.75 ±  4%      -0.9%     916.00 ±  0%  slabinfo.kmalloc-512.active_slabs
     59169 ±  0%     -39.0%      36109 ±  4%      -0.8%      58667 ±  0%  slabinfo.kmalloc-512.num_objs
    924.00 ±  0%     -39.0%     563.75 ±  4%      -0.9%     916.00 ±  0%  slabinfo.kmalloc-512.num_slabs
      8287 ±  2%      +2.8%       8521 ±  4%     +12.6%       9335 ±  2%  slabinfo.kmalloc-96.active_objs
      8351 ±  3%      +2.6%       8570 ±  4%     +12.7%       9409 ±  2%  slabinfo.kmalloc-96.num_objs
     12776 ±  1%     -22.2%       9944 ±  2%      -6.8%      11906 ±  1%  slabinfo.pid.active_objs
     12776 ±  1%     -22.2%       9944 ±  2%      -6.8%      11906 ±  1%  slabinfo.pid.num_objs
      5708 ±  2%     -10.0%       5139 ±  3%      -6.2%       5355 ±  0%  slabinfo.sock_inode_cache.active_objs
      5902 ±  2%      -9.8%       5326 ±  3%      -5.9%       5552 ±  0%  slabinfo.sock_inode_cache.num_objs
    447.40 ±  6%     -35.7%     287.50 ±  6%      -7.7%     413.00 ±  4%  slabinfo.taskstats.active_objs
    447.40 ±  6%     -35.7%     287.50 ±  6%      -7.7%     413.00 ±  4%  slabinfo.taskstats.num_objs
    304731 ± 27%     -45.5%     166107 ± 76%     -98.3%       5031 ± 23%  sched_debug.cfs_rq:/.MIN_vruntime.avg
  12211047 ± 35%     -38.5%    7509311 ± 78%     -99.0%     118856 ± 40%  sched_debug.cfs_rq:/.MIN_vruntime.max
   1877477 ± 30%     -41.5%    1098508 ± 77%     -98.8%      21976 ± 14%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
     18.91 ±  7%      -3.9%      18.16 ±  8%  +3.8e+06%     715502 ±  2%  sched_debug.cfs_rq:/.load.avg
     95.71 ± 45%      -7.8%      88.20 ± 74%  +1.1e+06%    1067373 ±  4%  sched_debug.cfs_rq:/.load.max
     19.94 ± 31%      -9.6%      18.02 ± 52%  +1.7e+06%     335607 ±  2%  sched_debug.cfs_rq:/.load.stddev
     21.16 ±  9%     +12.3%      23.76 ±  9%   +2890.4%     632.65 ±  3%  sched_debug.cfs_rq:/.load_avg.avg
    125.40 ± 49%      +4.4%     130.90 ± 13%    +643.9%     932.88 ±  5%  sched_debug.cfs_rq:/.load_avg.max
      8.29 ±  2%      -3.5%       8.00 ±  6%   +2852.1%     244.76 ±  6%  sched_debug.cfs_rq:/.load_avg.min
     20.18 ± 45%     +13.1%      22.83 ± 18%    +720.8%     165.65 ±  3%  sched_debug.cfs_rq:/.load_avg.stddev
    304731 ± 27%     -45.5%     166107 ± 76%     -98.3%       5031 ± 23%  sched_debug.cfs_rq:/.max_vruntime.avg
  12211047 ± 35%     -38.5%    7509311 ± 78%     -99.0%     118856 ± 40%  sched_debug.cfs_rq:/.max_vruntime.max
   1877477 ± 30%     -41.5%    1098508 ± 77%     -98.8%      21976 ± 14%  sched_debug.cfs_rq:/.max_vruntime.stddev
  29445770 ±  0%      -4.3%   28190370 ±  2%     -99.0%     299502 ±  0%  sched_debug.cfs_rq:/.min_vruntime.avg
  31331918 ±  0%      -6.2%   29380072 ±  2%     -99.0%     322082 ±  0%  sched_debug.cfs_rq:/.min_vruntime.max
  27785446 ±  0%      -2.5%   27098935 ±  2%     -99.0%     282267 ±  0%  sched_debug.cfs_rq:/.min_vruntime.min
    916182 ± 13%     -35.6%     590123 ± 13%     -98.6%      12421 ±  4%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.26 ±  6%     -34.5%       0.17 ± 14%     +34.0%       0.34 ±  3%  sched_debug.cfs_rq:/.nr_running.stddev
     16.42 ±  3%      +6.9%      17.56 ±  3%   +3319.3%     561.57 ±  4%  sched_debug.cfs_rq:/.runnable_load_avg.avg
     38.22 ± 28%     +10.9%      42.38 ± 49%   +2280.8%     909.91 ±  1%  sched_debug.cfs_rq:/.runnable_load_avg.max
      0.05 ±133%   +4879.2%       2.72 ± 46%   +4177.8%       2.33 ± 32%  sched_debug.cfs_rq:/.runnable_load_avg.min
      7.59 ± 17%      +4.0%       7.90 ± 36%   +3375.3%     263.95 ±  1%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
   -897515 ±-52%    -132.1%     288533 ±159%     -97.6%     -21836 ± -6%  sched_debug.cfs_rq:/.spread0.avg
    989517 ± 31%     +49.2%    1476487 ± 23%     -99.9%     748.12 ±129%  sched_debug.cfs_rq:/.spread0.max
  -2558887 ±-23%     -68.7%    -801084 ±-66%     -98.5%     -39072 ± -7%  sched_debug.cfs_rq:/.spread0.min
    916967 ± 13%     -35.7%     589208 ± 13%     -98.6%      12424 ±  4%  sched_debug.cfs_rq:/.spread0.stddev
    744.20 ±  0%     +10.9%     825.23 ±  3%     -38.6%     457.27 ±  3%  sched_debug.cfs_rq:/.util_avg.min
     58.07 ±  9%     -28.4%      41.55 ± 19%    +119.1%     127.19 ±  2%  sched_debug.cfs_rq:/.util_avg.stddev
    157158 ±  3%     -35.8%     100942 ±  9%    +135.5%     370117 ±  7%  sched_debug.cpu.avg_idle.avg
    600573 ±  2%     -42.9%     342823 ± 21%     +29.4%     777397 ±  2%  sched_debug.cpu.avg_idle.max
    133080 ±  6%     -48.5%      68563 ± 21%     +87.9%     250058 ±  0%  sched_debug.cpu.avg_idle.stddev
     11.80 ± 22%     +96.1%      23.13 ± 24%     -60.4%       4.67 ±  2%  sched_debug.cpu.clock.stddev
     11.80 ± 22%     +96.1%      23.13 ± 24%     -60.4%       4.67 ±  2%  sched_debug.cpu.clock_task.stddev
     16.49 ±  3%     +10.3%      18.19 ±  7%   +2983.8%     508.41 ±  6%  sched_debug.cpu.cpu_load[0].avg
     38.35 ± 28%     +69.4%      64.95 ± 61%   +2275.1%     910.76 ±  1%  sched_debug.cpu.cpu_load[0].max
      7.67 ± 18%     +53.8%      11.79 ± 53%   +3832.5%     301.43 ±  5%  sched_debug.cpu.cpu_load[0].stddev
     16.39 ±  2%      +9.9%      18.01 ±  5%   +3723.3%     626.64 ±  3%  sched_debug.cpu.cpu_load[1].avg
     37.87 ± 27%     +51.3%      57.30 ± 47%   +2294.6%     906.91 ±  1%  sched_debug.cpu.cpu_load[1].max
      3.91 ± 17%     +48.0%       5.78 ± 15%   +4683.7%     187.00 ±  5%  sched_debug.cpu.cpu_load[1].min
      6.84 ± 20%     +45.5%       9.95 ± 41%   +2455.5%     174.75 ±  2%  sched_debug.cpu.cpu_load[1].stddev
     16.57 ±  2%      +8.4%      17.96 ±  4%   +3666.4%     624.20 ±  3%  sched_debug.cpu.cpu_load[2].avg
     37.71 ± 24%     +38.8%      52.35 ± 36%   +2301.1%     905.42 ±  1%  sched_debug.cpu.cpu_load[2].max
      6.02 ±  6%     +18.8%       7.15 ±  8%   +3322.5%     205.97 ±  2%  sched_debug.cpu.cpu_load[2].min
      6.50 ± 19%     +36.7%       8.89 ± 31%   +2513.9%     169.85 ±  1%  sched_debug.cpu.cpu_load[2].stddev
     16.99 ±  1%      +6.4%      18.07 ±  3%   +3565.7%     622.77 ±  3%  sched_debug.cpu.cpu_load[3].avg
     36.87 ± 19%     +33.9%      49.39 ± 28%   +2345.3%     901.64 ±  1%  sched_debug.cpu.cpu_load[3].max
      7.33 ±  3%      +5.3%       7.72 ±  8%   +2833.8%     214.97 ±  4%  sched_debug.cpu.cpu_load[3].min
      6.11 ± 15%     +34.9%       8.24 ± 23%   +2636.3%     167.13 ±  1%  sched_debug.cpu.cpu_load[3].stddev
     17.32 ±  1%      +4.8%      18.15 ±  2%   +3491.8%     622.26 ±  3%  sched_debug.cpu.cpu_load[4].avg
     35.56 ± 12%     +32.8%      47.23 ± 22%   +2414.9%     894.39 ±  1%  sched_debug.cpu.cpu_load[4].max
      8.00 ±  5%      -2.0%       7.84 ±  8%   +2683.7%     222.70 ±  6%  sched_debug.cpu.cpu_load[4].min
      5.80 ±  9%     +35.4%       7.85 ± 18%   +2705.5%     162.77 ±  0%  sched_debug.cpu.cpu_load[4].stddev
     16851 ±  1%     -16.8%      14014 ±  3%     -15.6%      14218 ±  3%  sched_debug.cpu.curr->pid.avg
     19325 ±  0%     -19.1%      15644 ±  2%      -6.4%      18083 ±  0%  sched_debug.cpu.curr->pid.max
      5114 ±  8%     -48.9%       2611 ± 16%     +20.8%       6179 ±  4%  sched_debug.cpu.curr->pid.stddev
     18.95 ±  7%      -2.9%      18.40 ± 11%  +3.7e+06%     708609 ±  3%  sched_debug.cpu.load.avg
     95.67 ± 46%      -7.6%      88.42 ± 74%  +1.1e+06%    1067053 ±  4%  sched_debug.cpu.load.max
     19.89 ± 31%      -5.7%      18.76 ± 54%  +1.7e+06%     338423 ±  3%  sched_debug.cpu.load.stddev
    500000 ±  0%      +0.0%     500000 ±  0%     +14.4%     572147 ±  4%  sched_debug.cpu.max_idle_balance_cost.max
      0.00 ±  4%     +10.8%       0.00 ±  9%     +35.0%       0.00 ± 12%  sched_debug.cpu.next_balance.stddev
      1417 ±  3%      -6.8%       1322 ± 11%     +31.2%       1860 ±  8%  sched_debug.cpu.nr_load_updates.stddev
      9.75 ±  5%     -13.6%       8.43 ±  4%      -7.0%       9.07 ± 11%  sched_debug.cpu.nr_running.avg
     29.22 ±  2%     -16.8%      24.30 ±  7%     +25.6%      36.70 ±  5%  sched_debug.cpu.nr_running.max
      7.47 ±  4%     -20.3%       5.95 ±  5%     +39.8%      10.44 ±  7%  sched_debug.cpu.nr_running.stddev
  10261512 ±  0%    +132.6%   23873264 ±  3%     -10.3%    9200003 ±  2%  sched_debug.cpu.nr_switches.avg
  11634045 ±  1%    +126.0%   26295756 ±  2%      -3.0%   11281317 ±  1%  sched_debug.cpu.nr_switches.max
   8958320 ±  1%    +141.1%   21601624 ±  4%     -15.9%    7538372 ±  3%  sched_debug.cpu.nr_switches.min
    780364 ±  7%     +61.6%    1261065 ±  4%     +65.0%    1287398 ±  2%  sched_debug.cpu.nr_switches.stddev
      8.65 ± 13%     +23.4%      10.68 ± 13%    +170.0%      23.36 ± 13%  sched_debug.cpu.nr_uninterruptible.max
    -13.62 ±-17%     +45.2%     -19.77 ±-23%    +112.3%     -28.91 ±-27%  sched_debug.cpu.nr_uninterruptible.min
      4.45 ±  8%     +29.8%       5.78 ± 10%    +106.3%       9.18 ± 24%  sched_debug.cpu.nr_uninterruptible.stddev
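
To read the table: the first column gives the absolute value on the
v4.6-rc7 baseline, and each %change column is relative to that baseline.
For example, for hackbench.throughput, (132165 - 198307) / 198307 =
-33.4% on v4.6 and (174857 - 198307) / 198307 = -11.8% on 367d3fd5,
which is the partial recovery mentioned above.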

Best Regards,
Huang, Ying
