Message-ID: <20160727015425.GL21454@yexl-desktop>
Date:	Wed, 27 Jul 2016 09:54:26 +0800
From:	kernel test robot <xiaolong.ye@...el.com>
To:	Xin Long <lucien.xin@...il.com>
Cc:	"David S. Miller" <davem@...emloft.net>,
	LKML <linux-kernel@...r.kernel.org>,
	Stephen Rothwell <sfr@...b.auug.org.au>, lkp@...org
Subject: [lkp] [sctp]  a6c2f79287: netperf.Throughput_Mbps -37.2% regression


FYI, we noticed a -37.2% regression of netperf.Throughput_Mbps due to commit:

commit a6c2f792873aff332a4689717c3cd6104f46684c ("sctp: implement prsctp TTL policy")
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master

in testcase: netperf
on test machine: 4-thread Ivy Bridge with 8G memory
with the following parameters (a sketch of the sockopt interface under test follows the list):

	ip: ipv4
	runtime: 300s
	nr_threads: 200%
	cluster: cs-localhost
	send_size: 10K
	test: SCTP_STREAM_MANY
	cpufreq_governor: performance
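
To put the bisected change in context: the two commits compared below add the
PR-SCTP (RFC 7496) socket API named in their subjects.  What follows is a
hedged sketch of how an application would exercise that API.
SCTP_PR_ASSOC_STATUS is named in the parent commit's subject, while
SCTP_DEFAULT_PRINFO and the policy constant are assumed from the same patch
series; struct layouts follow the linux-next uapi headers of the time, and
the 3000 ms lifetime is an arbitrary illustration, not a value used by this
job:

/* Hedged sketch: enable a TTL abandonment policy, then read back the
 * per-policy abandoned-chunk counters.  A 2016-era lksctp-tools
 * <netinet/sctp.h> may not define these names yet.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        /* a6c2f79287: abandon a chunk once it has waited pr_value ms */
        struct sctp_default_prinfo info = {
                .pr_policy   = SCTP_PR_SCTP_TTL,
                .pr_value    = 3000,  /* lifetime in ms (arbitrary here) */
                .pr_assoc_id = 0,     /* future associations on this socket */
        };
        if (setsockopt(fd, IPPROTO_SCTP, SCTP_DEFAULT_PRINFO,
                       &info, sizeof(info)) < 0)
                perror("setsockopt(SCTP_DEFAULT_PRINFO)");

        /* 826d253d57: counters are meaningful once an association exists */
        struct sctp_prstatus st;
        socklen_t len = sizeof(st);

        memset(&st, 0, sizeof(st));
        st.sprstat_policy = SCTP_PR_SCTP_TTL;
        if (getsockopt(fd, IPPROTO_SCTP, SCTP_PR_ASSOC_STATUS,
                       &st, &len) == 0)
                printf("abandoned: unsent=%llu sent=%llu\n",
                       (unsigned long long)st.sprstat_abandoned_unsent,
                       (unsigned long long)st.sprstat_abandoned_sent);

        return 0;
}

Note that the netperf job above presumably never opts in to a PR policy, so
the measured slowdown is on the default send path.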



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # the job file is attached to this email
        bin/lkp run     job.yaml

=========================================================================================
testbed configuration:
  cluster:          cs-localhost
  compiler:         gcc-6
  cpufreq_governor: performance
  ip:               ipv4
  kconfig:          x86_64-rhel-7.2
  nr_threads:       200%
  rootfs:           debian-x86_64-2015-02-07.cgz
  runtime:          300s
  send_size:        10K
  tbox_group:       lkp-ivb-d02
  test:             SCTP_STREAM_MANY
  testcase:         netperf

commit: 
  826d253d57 ("sctp: add SCTP_PR_ASSOC_STATUS on sctp sockopt")
  a6c2f79287 ("sctp: implement prsctp TTL policy")

826d253d57b11f69 a6c2f792873aff332a4689717c 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      3923 ±  0%     -37.2%       2462 ±  0%  netperf.Throughput_Mbps
     11701 ± 14%     -21.9%       9139 ± 11%  meminfo.AnonHugePages
    642.75 ±  7%     +15.8%     744.00 ±  6%  slabinfo.kmalloc-512.active_objs
      4993 ±  2%     +14.6%       5724 ±  2%  slabinfo.kmalloc-64.active_objs
      4993 ±  2%     +14.6%       5724 ±  2%  slabinfo.kmalloc-64.num_objs
      9.00 ±  0%     -77.8%       2.00 ±  0%  vmstat.procs.r
    112616 ±  1%     +19.0%     133959 ±  0%  vmstat.system.cs
      4053 ±  0%      +6.3%       4309 ±  0%  vmstat.system.in
  16466114 ±  0%     -37.4%   10307354 ±  0%  softirqs.NET_RX
     72067 ± 10%     -63.5%      26326 ±  4%  softirqs.RCU
      8598 ±  4%    +954.6%      90677 ±  1%  softirqs.SCHED
    605899 ±  0%     -45.7%     329111 ±  0%  softirqs.TIMER
      2529 ±  4%     -15.1%       2147 ±  1%  proc-vmstat.nr_alloc_batch
  1.48e+08 ±  0%     -37.1%   93059843 ±  0%  proc-vmstat.numa_hit
  1.48e+08 ±  0%     -37.1%   93059842 ±  0%  proc-vmstat.numa_local
 3.742e+08 ±  1%     -36.9%  2.362e+08 ±  0%  proc-vmstat.pgalloc_dma32
 4.733e+08 ±  0%     -36.6%      3e+08 ±  0%  proc-vmstat.pgalloc_normal
 8.476e+08 ±  0%     -36.7%  5.362e+08 ±  0%  proc-vmstat.pgfree
     99.31 ±  0%     -44.6%      55.03 ±  0%  turbostat.%Busy
      3269 ±  0%     -44.8%       1804 ±  1%  turbostat.Avg_MHz
      0.05 ± 17%  +51942.1%      24.72 ±  1%  turbostat.CPU%c1
      0.64 ±  0%   +3067.3%      20.11 ±  3%  turbostat.CPU%c6
     20.20 ±  0%     -24.7%      15.21 ±  0%  turbostat.CorWatt
      0.12 ± 39%   +1906.4%       2.36 ±  3%  turbostat.Pkg%pc2
      0.46 ± 10%   +1696.7%       8.27 ±  6%  turbostat.Pkg%pc6
     37.54 ±  0%     -14.5%      32.11 ±  0%  turbostat.PkgWatt
     76510 ± 46%  +2.6e+05%  1.954e+08 ±  1%  cpuidle.C1-IVB.time
      6742 ± 76%  +2.9e+05%   19860856 ±  1%  cpuidle.C1-IVB.usage
     19769 ± 17%   +5414.4%    1090172 ±  3%  cpuidle.C1E-IVB.time
    151.00 ± 11%   +4024.8%       6228 ±  3%  cpuidle.C1E-IVB.usage
     33074 ± 14%   +5086.6%    1715442 ±  2%  cpuidle.C3-IVB.time
    114.50 ± 14%   +6075.1%       7070 ±  3%  cpuidle.C3-IVB.usage
   8006184 ±  0%   +4066.3%  3.336e+08 ±  2%  cpuidle.C6-IVB.time
      8874 ±  0%   +4197.0%     381350 ±  2%  cpuidle.C6-IVB.usage
    476.75 ± 92%  +77822.9%     371497 ± 12%  cpuidle.POLL.time
     46.25 ±144%  +30824.3%      14302 ±  3%  cpuidle.POLL.usage
 1.758e+11 ±  2%     -38.4%  1.083e+11 ±  1%  perf-stat.L1-dcache-load-misses
 6.405e+11 ±  0%     -29.7%    4.5e+11 ±  0%  perf-stat.L1-dcache-loads
  6.28e+09 ±  3%     -30.0%  4.398e+09 ±  0%  perf-stat.L1-dcache-prefetch-misses
 8.142e+10 ±  1%     -39.3%  4.941e+10 ±  0%  perf-stat.L1-dcache-store-misses
  5.32e+11 ±  0%     -33.6%  3.533e+11 ±  1%  perf-stat.L1-dcache-stores
  5.59e+10 ±  1%     -19.3%  4.512e+10 ±  1%  perf-stat.L1-icache-load-misses
 5.199e+10 ±  0%     -42.1%   3.01e+10 ±  1%  perf-stat.LLC-loads
 3.066e+10 ±  1%     -42.4%  1.765e+10 ±  1%  perf-stat.LLC-prefetches
 5.282e+10 ±  0%     -40.0%  3.168e+10 ±  1%  perf-stat.LLC-stores
 2.776e+11 ±  0%     -27.9%  2.002e+11 ±  0%  perf-stat.branch-instructions
 3.861e+09 ±  1%     -50.8%    1.9e+09 ±  0%  perf-stat.branch-load-misses
 2.772e+11 ±  0%     -27.8%      2e+11 ±  0%  perf-stat.branch-loads
 3.864e+09 ±  1%     -50.8%    1.9e+09 ±  0%  perf-stat.branch-misses
 1.179e+11 ±  0%     -43.9%   6.61e+10 ±  0%  perf-stat.bus-cycles
 7.126e+09 ± 16%     -96.8%  2.272e+08 ±  4%  perf-stat.cache-misses
 1.173e+11 ±  0%     -38.0%  7.278e+10 ±  0%  perf-stat.cache-references
  34232822 ±  1%     +19.1%   40787311 ±  0%  perf-stat.context-switches
 3.897e+12 ±  0%     -44.2%  2.174e+12 ±  1%  perf-stat.cpu-cycles
     12019 ± 35%    +306.6%      48867 ±  1%  perf-stat.cpu-migrations
 4.069e+09 ± 20%     -58.4%  1.694e+09 ± 46%  perf-stat.dTLB-load-misses
 6.421e+11 ±  0%     -30.3%  4.476e+11 ±  1%  perf-stat.dTLB-loads
 5.285e+08 ± 22%     -71.0%  1.531e+08 ± 17%  perf-stat.dTLB-store-misses
  5.32e+11 ±  0%     -33.4%  3.544e+11 ±  1%  perf-stat.dTLB-stores
 3.735e+08 ±  5%     -48.5%  1.923e+08 ±  3%  perf-stat.iTLB-load-misses
 1.803e+08 ± 52%    +662.4%  1.374e+09 ±  2%  perf-stat.iTLB-loads
 1.505e+12 ±  0%     -29.3%  1.064e+12 ±  0%  perf-stat.instructions
    339045 ±  0%      +4.4%     354068 ±  0%  perf-stat.minor-faults
    339041 ±  0%      +4.4%     354062 ±  0%  perf-stat.page-faults
 3.892e+12 ±  0%     -43.9%  2.182e+12 ±  0%  perf-stat.ref-cycles
 3.052e+12 ±  0%     -25.9%  2.261e+12 ±  0%  perf-stat.stalled-cycles-frontend
     34082 ± 16%     -65.5%      11746 ± 83%  sched_debug.cfs_rq:/.MIN_vruntime.avg
     75047 ±  3%     -58.2%      31366 ± 63%  sched_debug.cfs_rq:/.MIN_vruntime.max
     34069 ±  3%     -59.9%      13660 ± 61%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
   1403286 ±  4%     -34.9%     913870 ± 14%  sched_debug.cfs_rq:/.load.avg
   1874082 ±  7%     -27.9%    1352098 ± 19%  sched_debug.cfs_rq:/.load.max
   1041057 ±  0%     -66.8%     345620 ± 35%  sched_debug.cfs_rq:/.load.min
      1063 ±  0%     -54.6%     482.93 ± 15%  sched_debug.cfs_rq:/.load_avg.avg
      1204 ±  3%     -20.6%     956.50 ±  4%  sched_debug.cfs_rq:/.load_avg.max
    939.38 ±  0%     -86.4%     127.33 ± 31%  sched_debug.cfs_rq:/.load_avg.min
    104.33 ± 19%    +238.6%     353.26 ±  6%  sched_debug.cfs_rq:/.load_avg.stddev
     34082 ± 16%     -65.5%      11746 ± 83%  sched_debug.cfs_rq:/.max_vruntime.avg
     75047 ±  3%     -58.2%      31366 ± 63%  sched_debug.cfs_rq:/.max_vruntime.max
     34069 ±  3%     -59.9%      13660 ± 61%  sched_debug.cfs_rq:/.max_vruntime.stddev
     74820 ±  0%     -10.4%      67031 ±  1%  sched_debug.cfs_rq:/.min_vruntime.min
      1091 ±  7%    +378.3%       5221 ± 13%  sched_debug.cfs_rq:/.min_vruntime.stddev
      1.35 ±  3%     -35.4%       0.88 ± 14%  sched_debug.cfs_rq:/.nr_running.avg
      1.83 ±  6%     -29.5%       1.29 ± 19%  sched_debug.cfs_rq:/.nr_running.max
      1.00 ±  0%     -66.7%       0.33 ± 35%  sched_debug.cfs_rq:/.nr_running.min
    943.28 ±  0%     -61.0%     367.80 ± 20%  sched_debug.cfs_rq:/.runnable_load_avg.avg
    991.42 ±  1%     -21.2%     781.00 ± 14%  sched_debug.cfs_rq:/.runnable_load_avg.max
    894.88 ±  1%     -90.9%      81.46 ± 20%  sched_debug.cfs_rq:/.runnable_load_avg.min
     39.07 ± 31%    +667.7%     299.97 ± 16%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
   -794.96 ±-98%    +505.2%      -4810 ±-18%  sched_debug.cfs_rq:/.spread0.avg
     -1935 ±-42%    +442.8%     -10508 ±-13%  sched_debug.cfs_rq:/.spread0.min
      1092 ±  7%    +378.0%       5221 ± 13%  sched_debug.cfs_rq:/.spread0.stddev
    949.60 ±  0%     -41.6%     554.85 ± 12%  sched_debug.cfs_rq:/.util_avg.avg
    976.08 ±  1%      -9.9%     879.71 ±  1%  sched_debug.cfs_rq:/.util_avg.max
    925.83 ±  0%     -67.0%     305.25 ± 20%  sched_debug.cfs_rq:/.util_avg.min
     18.71 ± 30%   +1151.4%     234.18 ±  6%  sched_debug.cfs_rq:/.util_avg.stddev
      2.60 ±  9%     -66.3%       0.88 ± 62%  sched_debug.cpu.clock.stddev
      2.60 ±  9%     -66.3%       0.88 ± 62%  sched_debug.cpu.clock_task.stddev
    941.57 ±  1%     -65.8%     322.38 ± 25%  sched_debug.cpu.cpu_load[0].avg
    992.46 ±  1%     -29.5%     700.08 ± 25%  sched_debug.cpu.cpu_load[0].max
    880.67 ±  4%     -93.3%      58.58 ± 64%  sched_debug.cpu.cpu_load[0].min
     44.14 ± 46%    +526.2%     276.40 ± 28%  sched_debug.cpu.cpu_load[0].stddev
    944.76 ±  0%     -59.6%     381.23 ± 17%  sched_debug.cpu.cpu_load[1].avg
    980.25 ±  1%     -17.5%     808.25 ±  5%  sched_debug.cpu.cpu_load[1].max
    900.79 ±  1%     -89.8%      91.67 ± 28%  sched_debug.cpu.cpu_load[1].min
     30.83 ± 30%    +904.7%     309.72 ±  7%  sched_debug.cpu.cpu_load[1].stddev
    941.95 ±  0%     -61.6%     361.83 ± 18%  sched_debug.cpu.cpu_load[2].avg
    974.08 ±  1%     -18.0%     798.29 ±  5%  sched_debug.cpu.cpu_load[2].max
    899.00 ±  1%     -90.8%      82.42 ± 24%  sched_debug.cpu.cpu_load[2].min
     29.59 ± 30%    +946.9%     309.73 ±  6%  sched_debug.cpu.cpu_load[2].stddev
    935.02 ±  0%     -63.1%     344.78 ± 19%  sched_debug.cpu.cpu_load[3].avg
    965.83 ±  1%     -18.7%     785.50 ±  5%  sched_debug.cpu.cpu_load[3].max
    893.04 ±  1%     -91.8%      73.21 ± 26%  sched_debug.cpu.cpu_load[3].min
     28.77 ± 29%    +976.4%     309.70 ±  6%  sched_debug.cpu.cpu_load[3].stddev
    922.55 ±  0%     -64.2%     330.33 ± 19%  sched_debug.cpu.cpu_load[4].avg
    951.75 ±  1%     -20.0%     761.71 ±  6%  sched_debug.cpu.cpu_load[4].max
    880.96 ±  0%     -92.8%      63.08 ± 31%  sched_debug.cpu.cpu_load[4].min
     27.92 ± 26%    +988.8%     304.04 ±  6%  sched_debug.cpu.cpu_load[4].stddev
      1302 ±  1%     -15.7%       1097 ±  7%  sched_debug.cpu.curr->pid.avg
    798.04 ±  5%     -70.5%     235.67 ± 39%  sched_debug.cpu.curr->pid.min
    812.92 ±  1%     +22.9%     999.29 ±  4%  sched_debug.cpu.curr->pid.stddev
   1359793 ±  6%     -34.4%     892053 ± 17%  sched_debug.cpu.load.avg
   1830834 ±  4%     -28.5%    1308664 ± 19%  sched_debug.cpu.load.max
   1041241 ±  0%     -62.6%     389138 ± 37%  sched_debug.cpu.load.min
      0.00 ± 21%     +77.1%       0.00 ±  7%  sched_debug.cpu.next_balance.stddev
    150630 ±  0%     -24.2%     114118 ±  0%  sched_debug.cpu.nr_load_updates.avg
    151833 ±  0%     -21.6%     119037 ±  0%  sched_debug.cpu.nr_load_updates.max
    149812 ±  0%     -27.3%     108953 ±  1%  sched_debug.cpu.nr_load_updates.min
    745.51 ±  1%    +502.7%       4492 ± 14%  sched_debug.cpu.nr_load_updates.stddev
      2.60 ±  1%     -52.8%       1.23 ± 17%  sched_debug.cpu.nr_running.avg
      3.79 ±  3%     -39.6%       2.29 ± 39%  sched_debug.cpu.nr_running.max
      1.62 ±  8%     -74.4%       0.42 ± 34%  sched_debug.cpu.nr_running.min
   4242583 ±  1%     +20.1%    5094187 ±  0%  sched_debug.cpu.nr_switches.avg
   3880089 ±  1%     +24.6%    4835277 ±  1%  sched_debug.cpu.nr_switches.min
    487080 ± 18%     -61.6%     186796 ± 37%  sched_debug.cpu.nr_switches.stddev



 
                              netperf.Throughput_Mbps

  4000 ++-----------------------------------*--*-----*--*--*--*--*-*--*-----+
       |                     .*..  .*..*..*       *.                        |
  3500 *+.*..*     *.*..*..*.    *.                                         |
  3000 ++    :     :                                                        |
       |      :    O                                                        |
  2500 O+ O  O: O :  O  O  O  O  O  O             O     O  O  O  O O     O  O
       |      :   :                    O  O O  O                      O     |
  2000 ++     :   :                                                         |
       |       :  :                                                         |
  1500 ++      : :                                                          |
  1000 ++      : :                                                          |
       |       : :                                                          |
   500 ++       ::                                                          |
       |        :                                                           |
     0 ++-------*------------------------------------O----------------------+

	[*] bisect-good sample
	[O] bisect-bad  sample





Thanks,
Xiaolong

View attachment "config-4.7.0-rc6-01198-ga6c2f79" of type "text/plain" (151646 bytes)

View attachment "job.yaml" of type "text/plain" (4256 bytes)

View attachment "reproduce" of type "text/plain" (848 bytes)
