Message-ID: <87eg8id3s3.fsf@yhuang-dev.intel.com>
Date:	Tue, 31 May 2016 16:34:36 +0800
From:	"Huang, Ying" <ying.huang@...el.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Peter Zijlstra <peterz@...radead.org>, <lkp@...org>,
	Mike Galbraith <efault@....de>, <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Linus Torvalds" <torvalds@...ux-foundation.org>
Subject: Re: [LKP] [lkp] [sched/fair] 53d3bc773e: hackbench.throughput -32.9% regression

Hi, Ingo,

Part of the regression has been recovered in v4.7-rc1: the throughput
change went from -32.9% to -9.8%.  But there is still some regression.
Is it possible to fully restore it?
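
(For reference, the percentages are computed against the
c5114626f33b62fa baseline using the hackbench.throughput averages in
the table below; a minimal sketch of that arithmetic:)

    $ echo '131963 177231' | awk '{ base = 196590;
          for (i = 1; i <= NF; i++)
              printf "%+.1f%%\n", ($i - base) / base * 100 }'
    -32.9%
    -9.8%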

Details are as below.

Best Regards,
Huang, Ying


=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
  gcc-4.9/performance/socket/x86_64-rhel/threads/50%/debian-x86_64-2015-02-07.cgz/ivb42/hackbench
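
The job parameters above map roughly to the following local run.  This
is only a hedged sketch (the exact group and loop counts used by the
lkp job are not shown in this report, so the -g/-l values below are
placeholders), repeated once per kernel under comparison:

    # select the governor the job uses, then run hackbench over sockets
    # (hackbench's default IPC) in thread mode; -g/-l are placeholders
    $ cpupower frequency-set -g performance
    $ hackbench -T -g 24 -l 60000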

commit: 
  c5114626f33b62fa7595e57d87f33d9d1f8298a2
  53d3bc773eaa7ab1cf63585e76af7ee869d5e709
  v4.7-rc1

c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76                   v4.7-rc1 
---------------- -------------------------- -------------------------- 
         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \  
    196590 ±  0%     -32.9%     131963 ±  2%      -9.8%     177231 ±  0%  hackbench.throughput
    602.66 ±  0%      +2.8%     619.27 ±  2%      +0.3%     604.66 ±  0%  hackbench.time.elapsed_time
    602.66 ±  0%      +2.8%     619.27 ±  2%      +0.3%     604.66 ±  0%  hackbench.time.elapsed_time.max
  1.76e+08 ±  3%    +236.0%  5.914e+08 ±  2%     -49.6%   88783232 ±  5%  hackbench.time.involuntary_context_switches
    208664 ±  2%     +26.0%     262929 ±  3%     +15.7%     241377 ±  0%  hackbench.time.minor_page_faults
      4401 ±  0%      +5.7%       4650 ±  0%      -8.1%       4043 ±  0%  hackbench.time.percent_of_cpu_this_job_got
     25256 ±  0%     +10.2%      27842 ±  2%      -7.7%      23311 ±  0%  hackbench.time.system_time
      1272 ±  0%     -24.5%     961.37 ±  2%     -10.4%       1140 ±  0%  hackbench.time.user_time
  7.64e+08 ±  1%    +131.8%  1.771e+09 ±  2%     -30.1%  5.339e+08 ±  2%  hackbench.time.voluntary_context_switches
      4051 ±  0%     -39.9%       2434 ±  3%     +57.8%       6393 ±  0%  uptime.idle
   4337715 ±  1%      +7.3%    4654464 ±  2%     -23.3%    3325346 ±  5%  softirqs.RCU
   2462880 ±  0%     -35.6%    1585869 ±  5%     +58.1%    3893988 ±  0%  softirqs.SCHED
   1766752 ±  1%    +122.6%    3932589 ±  1%     -25.6%    1313619 ±  1%  vmstat.system.cs
    249718 ±  2%    +307.4%    1017398 ±  3%     -40.4%     148723 ±  5%  vmstat.system.in
  1.76e+08 ±  3%    +236.0%  5.914e+08 ±  2%     -49.6%   88783232 ±  5%  time.involuntary_context_switches
    208664 ±  2%     +26.0%     262929 ±  3%     +15.7%     241377 ±  0%  time.minor_page_faults
      1272 ±  0%     -24.5%     961.37 ±  2%     -10.4%       1140 ±  0%  time.user_time
  7.64e+08 ±  1%    +131.8%  1.771e+09 ±  2%     -30.1%  5.339e+08 ±  2%  time.voluntary_context_switches
    177383 ±  0%      +2.0%     180939 ±  0%     -51.3%      86390 ±  1%  meminfo.Active
    102033 ±  0%      -0.1%     101893 ±  1%     -85.6%      14740 ±  0%  meminfo.Active(file)
    392558 ±  0%      +0.0%     392612 ±  0%     +22.6%     481411 ±  0%  meminfo.Inactive
    382911 ±  0%      +0.0%     382923 ±  0%     +23.2%     471792 ±  0%  meminfo.Inactive(file)
    143370 ±  0%     -12.0%     126124 ±  1%      -1.5%     141210 ±  0%  meminfo.SUnreclaim
   1136461 ±  3%     +16.6%    1324662 ±  5%     +15.9%    1316829 ±  1%  numa-numastat.node0.local_node
   1140216 ±  3%     +16.2%    1324689 ±  5%     +15.5%    1316830 ±  1%  numa-numastat.node0.numa_hit
      3755 ± 68%     -99.3%      27.25 ± 94%    -100.0%       1.25 ± 34%  numa-numastat.node0.other_node
   1098889 ±  4%     +20.1%    1320211 ±  6%     +16.4%    1278783 ±  1%  numa-numastat.node1.local_node
   1101996 ±  4%     +20.5%    1327590 ±  6%     +16.0%    1278783 ±  1%  numa-numastat.node1.numa_hit
      3106 ± 99%    +137.5%       7379 ± 17%    -100.0%       0.00 ± -1%  numa-numastat.node1.other_node
      7.18 ±  0%     -50.2%       3.57 ± 43%     +76.1%      12.64 ±  1%  perf-profile.cycles-pp.call_cpuidle
      8.09 ±  0%     -44.7%       4.47 ± 38%     +72.4%      13.95 ±  1%  perf-profile.cycles-pp.cpu_startup_entry
      7.17 ±  0%     -50.3%       3.56 ± 43%     +76.2%      12.63 ±  1%  perf-profile.cycles-pp.cpuidle_enter
      7.14 ±  0%     -50.3%       3.55 ± 43%     +76.1%      12.58 ±  1%  perf-profile.cycles-pp.cpuidle_enter_state
      7.11 ±  0%     -50.6%       3.52 ± 43%     +76.3%      12.54 ±  1%  perf-profile.cycles-pp.intel_idle
      8.00 ±  0%     -44.5%       4.44 ± 38%     +72.1%      13.77 ±  1%  perf-profile.cycles-pp.start_secondary
     92.32 ±  0%      +5.4%      97.32 ±  0%      -7.7%      85.26 ±  0%  turbostat.%Busy
      2763 ±  0%      +5.4%       2912 ±  0%      -7.7%       2551 ±  0%  turbostat.Avg_MHz
      7.48 ±  0%     -66.5%       2.50 ±  7%     +94.5%      14.54 ±  0%  turbostat.CPU%c1
      0.20 ±  2%      -6.4%       0.18 ±  2%      +2.6%       0.20 ±  3%  turbostat.CPU%c6
    180.03 ±  0%      -1.3%     177.62 ±  0%      -2.4%     175.63 ±  0%  turbostat.CorWatt
    209.86 ±  0%      -0.8%     208.08 ±  0%      -2.0%     205.64 ±  0%  turbostat.PkgWatt
      5.83 ±  0%     +38.9%       8.10 ±  3%     +12.7%       6.57 ±  1%  turbostat.RAMWatt
 1.658e+09 ±  0%     -59.1%  6.784e+08 ±  7%     +89.3%  3.138e+09 ±  0%  cpuidle.C1-IVT.time
 1.066e+08 ±  0%     -40.3%   63661563 ±  6%     +44.3%  1.539e+08 ±  0%  cpuidle.C1-IVT.usage
  26348635 ±  0%     -86.8%    3471048 ± 15%     +50.0%   39513523 ±  0%  cpuidle.C1E-IVT.time
    291620 ±  0%     -85.1%      43352 ± 15%     +28.8%     375730 ±  1%  cpuidle.C1E-IVT.usage
  54158643 ±  1%     -88.5%    6254009 ± 14%     +78.4%   96596486 ±  1%  cpuidle.C3-IVT.time
    482437 ±  1%     -87.0%      62620 ± 16%     +45.6%     702258 ±  1%  cpuidle.C3-IVT.usage
 5.028e+08 ±  0%     -75.8%  1.219e+08 ±  8%     +85.5%  9.327e+08 ±  1%  cpuidle.C6-IVT.time
   3805026 ±  0%     -85.5%     552326 ± 16%     +49.4%    5684182 ±  1%  cpuidle.C6-IVT.usage
      2766 ±  4%     -51.4%       1344 ±  6%     +10.0%       3042 ±  7%  cpuidle.POLL.usage
     49725 ±  4%      +2.1%      50775 ±  3%     -85.2%       7360 ±  0%  numa-meminfo.node0.Active(file)
      2228 ± 92%    +137.1%       5285 ± 15%    +118.7%       4874 ± 19%  numa-meminfo.node0.AnonHugePages
    197699 ±  2%      +1.6%     200772 ±  0%     +23.9%     245042 ±  0%  numa-meminfo.node0.Inactive
    192790 ±  1%      -0.6%     191611 ±  0%     +22.3%     235849 ±  0%  numa-meminfo.node0.Inactive(file)
     73589 ±  4%     -12.5%      64393 ±  2%      -1.3%      72664 ±  2%  numa-meminfo.node0.SUnreclaim
     27438 ± 83%    +102.6%      55585 ±  6%     +83.0%      50223 ±  0%  numa-meminfo.node0.Shmem
    101051 ±  3%     -10.9%      90044 ±  2%      -1.2%      99863 ±  2%  numa-meminfo.node0.Slab
     89204 ± 25%     -25.3%      66594 ±  4%     -77.6%      19954 ±  4%  numa-meminfo.node1.Active
     52306 ±  3%      -2.3%      51117 ±  4%     -85.9%       7380 ±  0%  numa-meminfo.node1.Active(file)
    194864 ±  2%      -1.6%     191824 ±  1%     +21.3%     236372 ±  0%  numa-meminfo.node1.Inactive
      4742 ± 86%     -89.2%     511.75 ± 41%     -90.9%     430.00 ± 60%  numa-meminfo.node1.Inactive(anon)
    190121 ±  1%      +0.6%     191311 ±  1%     +24.1%     235942 ±  0%  numa-meminfo.node1.Inactive(file)
     69844 ±  4%     -11.8%      61579 ±  3%      -1.9%      68521 ±  3%  numa-meminfo.node1.SUnreclaim
     12430 ±  4%      +2.1%      12693 ±  3%     -85.2%       1839 ±  0%  numa-vmstat.node0.nr_active_file
     48197 ±  1%      -0.6%      47902 ±  0%     +22.3%      58962 ±  0%  numa-vmstat.node0.nr_inactive_file
      6857 ± 83%    +102.8%      13905 ±  6%     +83.1%      12559 ±  0%  numa-vmstat.node0.nr_shmem
     18395 ±  4%     -12.4%      16121 ±  2%      -1.1%      18187 ±  2%  numa-vmstat.node0.nr_slab_unreclaimable
    675569 ±  3%     +12.7%     761135 ±  4%     +18.8%     802726 ±  4%  numa-vmstat.node0.numa_local
     71537 ±  5%      -7.9%      65920 ±  2%    -100.0%       0.25 ±173%  numa-vmstat.node0.numa_other
     13076 ±  3%      -2.3%      12778 ±  4%     -85.9%       1844 ±  0%  numa-vmstat.node1.nr_active_file
      1187 ± 86%     -89.3%     127.50 ± 41%     -91.0%     107.25 ± 60%  numa-vmstat.node1.nr_inactive_anon
     47530 ±  1%      +0.6%      47827 ±  1%     +24.1%      58985 ±  0%  numa-vmstat.node1.nr_inactive_file
     17456 ±  4%     -11.7%      15405 ±  3%      -1.9%      17127 ±  3%  numa-vmstat.node1.nr_slab_unreclaimable
    695848 ±  3%     +14.9%     799683 ±  5%      +4.7%     728368 ±  3%  numa-vmstat.node1.numa_hit
    677405 ±  4%     +14.5%     775903 ±  6%      +7.5%     728368 ±  3%  numa-vmstat.node1.numa_local
     18442 ± 19%     +28.9%      23779 ±  5%    -100.0%       0.00 ± -1%  numa-vmstat.node1.numa_other
     25508 ±  0%      -0.1%      25473 ±  1%     -85.6%       3684 ±  0%  proc-vmstat.nr_active_file
     95727 ±  0%      +0.0%      95730 ±  0%     +23.2%     117947 ±  0%  proc-vmstat.nr_inactive_file
     35841 ±  0%     -12.0%      31543 ±  0%      -1.5%      35298 ±  0%  proc-vmstat.nr_slab_unreclaimable
    154090 ±  2%     +43.1%     220509 ±  3%     +23.5%     190284 ±  0%  proc-vmstat.numa_hint_faults
    129240 ±  2%     +47.4%     190543 ±  3%     +15.1%     148733 ±  1%  proc-vmstat.numa_hint_faults_local
   2238386 ±  1%     +18.4%    2649737 ±  2%     +15.8%    2591197 ±  0%  proc-vmstat.numa_hit
   2232163 ±  1%     +18.4%    2643105 ±  2%     +16.1%    2591195 ±  0%  proc-vmstat.numa_local
      6223 ±  0%      +6.6%       6632 ± 10%    -100.0%       1.25 ± 34%  proc-vmstat.numa_other
     22315 ±  1%     -21.0%      17625 ±  5%      -0.4%      22234 ±  0%  proc-vmstat.numa_pages_migrated
    154533 ±  2%     +45.6%     225071 ±  3%     +25.7%     194235 ±  0%  proc-vmstat.numa_pte_updates
     14224 ±  0%      +5.5%      15006 ±  3%     -17.8%      11689 ±  0%  proc-vmstat.pgactivate
    382980 ±  2%     +33.2%     510157 ±  4%     +22.0%     467358 ±  0%  proc-vmstat.pgalloc_dma32
   7311738 ±  2%     +37.2%   10029060 ±  2%     +28.2%    9374740 ±  0%  proc-vmstat.pgalloc_normal
   7672040 ±  2%     +37.1%   10519738 ±  2%     +28.0%    9823026 ±  0%  proc-vmstat.pgfree
     22315 ±  1%     -21.0%      17625 ±  5%      -0.4%      22234 ±  0%  proc-vmstat.pgmigrate_success
    720.75 ±  3%     -11.3%     639.50 ±  1%     -29.2%     510.00 ±  0%  slabinfo.RAW.active_objs
    720.75 ±  3%     -11.3%     639.50 ±  1%     -29.2%     510.00 ±  0%  slabinfo.RAW.num_objs
      5487 ±  6%     -12.6%       4797 ±  4%    -100.0%       0.00 ± -1%  slabinfo.UNIX.active_objs
    164.50 ±  5%     -12.3%     144.25 ±  4%    -100.0%       0.00 ± -1%  slabinfo.UNIX.active_slabs
      5609 ±  5%     -12.2%       4926 ±  4%    -100.0%       0.00 ± -1%  slabinfo.UNIX.num_objs
    164.50 ±  5%     -12.3%     144.25 ±  4%    -100.0%       0.00 ± -1%  slabinfo.UNIX.num_slabs
      4362 ±  4%     +14.6%       4998 ±  2%      -3.2%       4223 ±  4%  slabinfo.cred_jar.active_objs
      4362 ±  4%     +14.6%       4998 ±  2%      -3.2%       4223 ±  4%  slabinfo.cred_jar.num_objs
      2904 ±  4%      -2.7%       2825 ±  1%     +56.5%       4545 ±  2%  slabinfo.kmalloc-1024.active_objs
      2935 ±  2%      -0.5%       2920 ±  1%     +57.8%       4633 ±  2%  slabinfo.kmalloc-1024.num_objs
     42525 ±  0%     -41.6%      24824 ±  3%      +7.3%      45621 ±  0%  slabinfo.kmalloc-256.active_objs
    845.50 ±  0%     -42.9%     482.50 ±  3%      +3.0%     870.50 ±  0%  slabinfo.kmalloc-256.active_slabs
     54124 ±  0%     -42.9%      30920 ±  3%      +3.0%      55755 ±  0%  slabinfo.kmalloc-256.num_objs
    845.50 ±  0%     -42.9%     482.50 ±  3%      +3.0%     870.50 ±  0%  slabinfo.kmalloc-256.num_slabs
     47204 ±  0%     -37.9%      29335 ±  2%      +6.6%      50334 ±  0%  slabinfo.kmalloc-512.active_objs
    915.25 ±  0%     -39.8%     551.00 ±  3%      +2.8%     940.50 ±  0%  slabinfo.kmalloc-512.active_slabs
     58599 ±  0%     -39.8%      35300 ±  3%      +2.8%      60224 ±  0%  slabinfo.kmalloc-512.num_objs
    915.25 ±  0%     -39.8%     551.00 ±  3%      +2.8%     940.50 ±  0%  slabinfo.kmalloc-512.num_slabs
     12443 ±  2%     -20.1%       9944 ±  3%      -6.5%      11639 ±  1%  slabinfo.pid.active_objs
     12443 ±  2%     -20.1%       9944 ±  3%      -6.5%      11639 ±  1%  slabinfo.pid.num_objs
    440.00 ±  5%     -32.8%     295.75 ±  4%     -11.7%     388.50 ±  7%  slabinfo.taskstats.active_objs
    440.00 ±  5%     -32.8%     295.75 ±  4%     -11.7%     388.50 ±  7%  slabinfo.taskstats.num_objs
    188235 ± 74%     +62.9%     306699 ± 27%     -98.6%       2627 ± 40%  sched_debug.cfs_rq:/.MIN_vruntime.avg
   7146629 ± 80%     +27.7%    9122933 ± 36%     -98.6%      98261 ± 36%  sched_debug.cfs_rq:/.MIN_vruntime.max
   1117852 ± 77%     +44.7%    1617052 ± 31%     -98.6%      15548 ± 37%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
     61.52 ±116%     -70.6%      18.11 ±  6%  +1.2e+06%     718736 ±  1%  sched_debug.cfs_rq:/.load.avg
      2144 ±161%     -96.3%      79.41 ± 48%  +49309.2%    1059411 ±  3%  sched_debug.cfs_rq:/.load.max
    312.45 ±157%     -94.8%      16.29 ± 33%  +1.1e+05%     333106 ±  5%  sched_debug.cfs_rq:/.load.stddev
     20.46 ±  4%      +9.0%      22.31 ±  6%   +3004.0%     635.15 ±  1%  sched_debug.cfs_rq:/.load_avg.avg
     81.57 ± 32%     +14.2%      93.18 ± 26%   +1035.5%     926.18 ±  3%  sched_debug.cfs_rq:/.load_avg.max
      8.14 ±  5%      -2.8%       7.91 ±  3%   +2585.8%     218.52 ± 13%  sched_debug.cfs_rq:/.load_avg.min
     13.90 ± 29%     +16.9%      16.25 ± 22%   +1089.3%     165.34 ±  5%  sched_debug.cfs_rq:/.load_avg.stddev
    188235 ± 74%     +62.9%     306699 ± 27%     -98.6%       2627 ± 40%  sched_debug.cfs_rq:/.max_vruntime.avg
   7146629 ± 80%     +27.7%    9122933 ± 36%     -98.6%      98261 ± 36%  sched_debug.cfs_rq:/.max_vruntime.max
   1117852 ± 77%     +44.7%    1617052 ± 31%     -98.6%      15548 ± 37%  sched_debug.cfs_rq:/.max_vruntime.stddev
  29491781 ±  0%      -4.8%   28074842 ±  1%     -99.0%     295426 ±  0%  sched_debug.cfs_rq:/.min_vruntime.avg
  31241540 ±  0%      -5.8%   29418054 ±  0%     -99.0%     320734 ±  0%  sched_debug.cfs_rq:/.min_vruntime.max
  27849652 ±  0%      -3.7%   26821072 ±  2%     -99.0%     275550 ±  0%  sched_debug.cfs_rq:/.min_vruntime.min
    861989 ±  3%     -20.2%     687639 ± 22%     -98.3%      14586 ±  2%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.27 ±  5%     -56.3%       0.12 ± 30%     +27.5%       0.34 ±  6%  sched_debug.cfs_rq:/.nr_running.stddev
     16.51 ±  1%      +9.5%      18.08 ±  3%   +3343.1%     568.61 ±  2%  sched_debug.cfs_rq:/.runnable_load_avg.avg
     34.80 ± 13%     +15.0%      40.02 ± 19%   +2514.0%     909.57 ±  0%  sched_debug.cfs_rq:/.runnable_load_avg.max
      0.05 ±100%   +7950.0%       3.66 ± 48%   +3250.0%       1.52 ± 89%  sched_debug.cfs_rq:/.runnable_load_avg.min
      7.18 ±  9%      -0.1%       7.18 ± 13%   +3571.2%     263.68 ±  4%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
   -740916 ±-28%    -158.5%     433310 ±120%     -96.8%     -23579 ± -5%  sched_debug.cfs_rq:/.spread0.avg
   1009940 ± 19%     +75.8%    1775442 ± 30%     -99.8%       1736 ±164%  sched_debug.cfs_rq:/.spread0.max
  -2384171 ± -7%     -65.7%    -818684 ±-76%     -98.2%     -43456 ± -4%  sched_debug.cfs_rq:/.spread0.min
    862765 ±  3%     -20.4%     686825 ± 22%     -98.3%      14591 ±  2%  sched_debug.cfs_rq:/.spread0.stddev
    749.14 ±  1%     +13.0%     846.34 ±  1%     -41.1%     441.05 ±  5%  sched_debug.cfs_rq:/.util_avg.min
     51.66 ±  4%     -36.3%      32.92 ±  5%    +150.6%     129.46 ±  6%  sched_debug.cfs_rq:/.util_avg.stddev
    161202 ±  7%     -41.7%      93997 ±  4%    +147.7%     399342 ±  1%  sched_debug.cpu.avg_idle.avg
    595158 ±  6%     -51.2%     290491 ± 22%     +37.8%     820120 ±  0%  sched_debug.cpu.avg_idle.max
      7658 ± 51%      +9.2%       8366 ± 26%    +114.4%      16423 ± 31%  sched_debug.cpu.avg_idle.min
    132760 ±  8%     -58.8%      54718 ± 19%     +97.8%     262608 ±  0%  sched_debug.cpu.avg_idle.stddev
     11.40 ± 11%    +111.0%      24.05 ± 16%     -58.1%       4.78 ±  3%  sched_debug.cpu.clock.stddev
     11.40 ± 11%    +111.0%      24.05 ± 16%     -58.1%       4.78 ±  3%  sched_debug.cpu.clock_task.stddev
     16.59 ±  1%      +7.7%      17.86 ±  2%   +3099.8%     530.73 ±  2%  sched_debug.cpu.cpu_load[0].avg
     32.34 ±  2%     +23.9%      40.07 ± 19%   +2715.0%     910.41 ±  0%  sched_debug.cpu.cpu_load[0].max
      0.34 ±103%    +520.0%       2.11 ± 67%    +140.0%       0.82 ±110%  sched_debug.cpu.cpu_load[0].min
      6.87 ±  3%      +8.0%       7.42 ± 13%   +4228.9%     297.50 ±  3%  sched_debug.cpu.cpu_load[0].stddev
     16.56 ±  0%      +8.1%      17.91 ±  2%   +3703.9%     630.04 ±  1%  sched_debug.cpu.cpu_load[1].avg
     32.18 ±  2%     +22.7%      39.50 ± 17%   +2728.5%     910.25 ±  0%  sched_debug.cpu.cpu_load[1].max
      3.32 ±  8%     +84.9%       6.14 ± 12%   +5364.4%     181.32 ±  9%  sched_debug.cpu.cpu_load[1].min
      6.14 ±  5%     +12.5%       6.91 ± 13%   +2708.6%     172.56 ±  5%  sched_debug.cpu.cpu_load[1].stddev
     16.75 ±  1%      +7.6%      18.02 ±  2%   +3646.9%     627.69 ±  1%  sched_debug.cpu.cpu_load[2].avg
     33.25 ±  7%     +16.5%      38.75 ± 14%   +2634.1%     909.09 ±  0%  sched_debug.cpu.cpu_load[2].max
      5.39 ±  7%     +36.3%       7.34 ±  4%   +3547.3%     196.45 ± 11%  sched_debug.cpu.cpu_load[2].min
      5.95 ±  9%     +11.8%       6.65 ± 11%   +2752.1%     169.73 ±  5%  sched_debug.cpu.cpu_load[2].stddev
     17.17 ±  1%      +6.1%      18.22 ±  2%   +3552.1%     626.96 ±  1%  sched_debug.cpu.cpu_load[3].avg
     33.20 ±  7%     +14.6%      38.05 ±  9%   +2631.3%     906.93 ±  0%  sched_debug.cpu.cpu_load[3].max
      6.93 ±  7%     +10.5%       7.66 ±  1%   +2766.9%     198.73 ± 11%  sched_debug.cpu.cpu_load[3].min
      5.70 ±  9%     +13.9%       6.49 ±  8%   +2825.6%     166.73 ±  5%  sched_debug.cpu.cpu_load[3].stddev
     17.49 ±  0%      +4.9%      18.36 ±  2%   +3482.1%     626.64 ±  1%  sched_debug.cpu.cpu_load[4].avg
     33.18 ±  3%     +14.0%      37.82 ±  5%   +2615.8%     901.16 ±  0%  sched_debug.cpu.cpu_load[4].max
      7.66 ±  8%      +0.9%       7.73 ±  1%   +2568.8%     204.41 ± 11%  sched_debug.cpu.cpu_load[4].min
      5.56 ±  6%     +16.2%       6.45 ±  6%   +2814.9%     161.96 ±  6%  sched_debug.cpu.cpu_load[4].stddev
     16741 ±  0%     -15.4%      14166 ±  2%     -13.0%      14564 ±  2%  sched_debug.cpu.curr->pid.avg
     19196 ±  0%     -18.3%      15690 ±  1%      -4.9%      18255 ±  0%  sched_debug.cpu.curr->pid.max
      5174 ±  5%     -55.4%       2305 ± 14%     +19.3%       6173 ±  6%  sched_debug.cpu.curr->pid.stddev
     18.60 ±  5%      -2.7%      18.10 ±  6%  +3.9e+06%     717646 ±  2%  sched_debug.cpu.load.avg
     81.23 ± 48%      -2.4%      79.30 ± 47%  +1.3e+06%    1059340 ±  3%  sched_debug.cpu.load.max
     18.01 ± 28%      -9.4%      16.32 ± 33%  +1.9e+06%     333436 ±  5%  sched_debug.cpu.load.stddev
      0.00 ±  2%     +29.8%       0.00 ± 33%     +39.0%       0.00 ± 15%  sched_debug.cpu.next_balance.stddev
      1410 ±  1%     -14.2%       1210 ±  6%     +34.5%       1896 ±  1%  sched_debug.cpu.nr_load_updates.stddev
      9.95 ±  3%     -14.5%       8.51 ±  5%      -1.2%       9.83 ±  2%  sched_debug.cpu.nr_running.avg
     29.07 ±  2%     -15.0%      24.70 ±  4%     +37.5%      39.98 ±  1%  sched_debug.cpu.nr_running.max
      0.05 ±100%    +850.0%       0.43 ± 37%    -100.0%       0.00 ± -1%  sched_debug.cpu.nr_running.min
      7.64 ±  3%     -23.0%       5.88 ±  2%     +48.6%      11.36 ±  2%  sched_debug.cpu.nr_running.stddev
  10979930 ±  1%    +123.3%   24518490 ±  2%     -26.3%    8091669 ±  1%  sched_debug.cpu.nr_switches.avg
  12350130 ±  1%    +117.5%   26856375 ±  2%     -17.0%   10249081 ±  2%  sched_debug.cpu.nr_switches.max
   9594835 ±  2%    +132.6%   22314436 ±  2%     -31.0%    6620975 ±  2%  sched_debug.cpu.nr_switches.min
    769296 ±  1%     +56.8%    1206190 ±  3%     +54.6%    1189172 ±  1%  sched_debug.cpu.nr_switches.stddev
      8.30 ± 18%     +32.9%      11.02 ± 15%    +113.7%      17.73 ± 26%  sched_debug.cpu.nr_uninterruptible.max
      4.87 ± 15%     +14.3%       5.57 ±  6%     +97.2%       9.61 ± 29%  sched_debug.cpu.nr_uninterruptible.stddev
