Message-ID: <20191127005459.GE20422@shao2-debian>
Date:   Wed, 27 Nov 2019 08:54:59 +0800
From:   kernel test robot <rong.a.chen@...el.com>
To:     Joseph Lo <josephl@...dia.com>
Cc:     Daniel Lezcano <daniel.lezcano@...aro.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Thierry Reding <treding@...dia.com>,
        Jon Hunter <jonathanh@...dia.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        lkp@...ts.01.org
Subject: [clocksource/drivers/tegra] b4822dc756: stress-ng.sockfd.ops_per_sec -14.2% regression

Greetings,

FYI, we noticed a -14.2% regression of stress-ng.sockfd.ops_per_sec due to commit:


commit: b4822dc7564f007e7a9b5188b791b7a923e34104 ("clocksource/drivers/tegra: Add Tegra210 timer support")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
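To inspect the suspect commit locally, one could view its diffstat in an
existing checkout of the tree above (a sketch; the hash is copied from
this report):

        git show --stat b4822dc7564f007e7a9b5188b791b7a923e34104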

in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with the following parameters (see the stress-ng sketch after the list):

	nr_threads: 100%
	disk: 1HDD
	testtime: 30s
	class: network
	cpufreq_governor: performance
	ucode: 0x500002c
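The sockfd stressor passes open file descriptors between processes over
UNIX domain sockets, so its throughput is sensitive to scheduling and
context-switch overhead. To approximate the workload without the LKP
harness, one could invoke stress-ng directly (a sketch; it assumes
stress-ng's stock CLI, where an instance count of 0 means one worker
per online CPU):

        stress-ng --sockfd 0 --timeout 30s --metrics-brief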




If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@...el.com>


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
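
To compare the two kernels by hand, one might build and boot each of the
commits listed below and re-run the same job (a sketch; the hashes come
from this report, the kernel config is the one attached to this mail,
and installing/booting the built kernel is left to the local setup):

        cd linux
        git checkout 87e0a455960a            # parent commit, the "good" sample
        cp /path/to/attached-config .config  # config attached to this mail
        make olddefconfig && make -j"$(nproc)"
        # install and boot this kernel, then run the job:
        (cd ../lkp-tests && bin/lkp run job.yaml)
        # repeat with the suspect commit b4822dc7564f and compare
        # stress-ng.sockfd.ops_per_sec between the two runs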

=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
  network/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-csl-2sp5/stress-ng/30s/0x500002c

commit: 
  87e0a45596 ("dt-bindings: timer: add Tegra210 timer")
  b4822dc756 ("clocksource/drivers/tegra: Add Tegra210 timer support")

87e0a455960a383a b4822dc7564f007e7a9b5188b79 
---------------- --------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
           :4           25%           1:4     kmsg.Negotiation_of_local_Allow_Short_Seqnos_failed_in_state_CHANGING_at_net/dccp/feat.c:#/dccp_feat_activate_values()
           :4           25%           1:4     dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
           :4           25%           1:4     dmesg.WARNING:stack_recursion
         %stddev     %change         %stddev
             \          |                \  
    751303           -14.2%     644647        stress-ng.sockfd.ops
     25042           -14.2%      21487        stress-ng.sockfd.ops_per_sec
 1.617e+08            -5.2%  1.533e+08 ±  2%  stress-ng.time.involuntary_context_switches
   1283886            -4.4%    1227018 ±  2%  vmstat.system.cs
     39600 ± 11%     +68.3%      66664 ± 28%  numa-vmstat.node0.nr_anon_pages
     56.75 ± 18%     +94.3%     110.25 ± 37%  numa-vmstat.node0.nr_anon_transparent_hugepages
      0.23 ±  2%      -0.0        0.21 ±  2%  perf-profile.children.cycles-pp.selinux_file_ioctl
      0.23 ±  2%      -0.0        0.22        perf-profile.children.cycles-pp.security_file_ioctl
   1288777            -4.4%    1231929 ±  2%  perf-stat.i.context-switches
   1284857            -4.4%    1228527 ±  2%  perf-stat.ps.context-switches
    376581 ±  5%     +45.0%     545935 ± 25%  meminfo.Active
    375201 ±  5%     +45.1%     544551 ± 25%  meminfo.Active(anon)
    226368 ±  7%     +71.0%     387062 ± 36%  meminfo.AnonHugePages
    116742 ± 17%     +94.1%     226573 ± 36%  numa-meminfo.node0.AnonHugePages
    158230 ± 11%     +68.5%     266666 ± 28%  numa-meminfo.node0.AnonPages
   1370369 ±  6%     +22.8%    1682189 ±  5%  numa-meminfo.node0.MemUsed
     93828 ±  5%     +44.9%     136003 ± 25%  proc-vmstat.nr_active_anon
    110.25 ±  7%     +71.2%     188.75 ± 36%  proc-vmstat.nr_anon_transparent_hugepages
     93828 ±  5%     +44.9%     136003 ± 25%  proc-vmstat.nr_zone_active_anon
    288.50 ± 33%    +341.8%       1274 ± 69%  proc-vmstat.thp_fault_alloc
    702.50 ±  4%     -15.9%     590.75 ±  3%  slabinfo.file_lock_cache.active_objs
    702.50 ±  4%     -15.9%     590.75 ±  3%  slabinfo.file_lock_cache.num_objs
      4859 ± 10%     -22.5%       3767 ± 10%  slabinfo.kmalloc-rcl-64.active_objs
      4859 ± 10%     -22.5%       3767 ± 10%  slabinfo.kmalloc-rcl-64.num_objs
    351.50 ±  5%     +26.9%     446.00 ±  9%  slabinfo.kmem_cache.active_objs
    351.50 ±  5%     +26.9%     446.00 ±  9%  slabinfo.kmem_cache.num_objs
    682.00 ±  4%     +21.1%     825.75 ±  7%  slabinfo.kmem_cache_node.active_objs
    728.00 ±  3%     +19.7%     871.75 ±  7%  slabinfo.kmem_cache_node.num_objs
     35442 ± 46%     -59.5%      14360 ± 32%  interrupts.CPU11.RES:Rescheduling_interrupts
     63378 ± 44%     -57.3%      27081 ± 37%  interrupts.CPU26.RES:Rescheduling_interrupts
     70941 ± 52%     -55.2%      31809 ± 60%  interrupts.CPU33.RES:Rescheduling_interrupts
     77310 ± 36%     -53.1%      36270 ± 61%  interrupts.CPU34.RES:Rescheduling_interrupts
     59102 ± 23%     -60.4%      23424 ± 40%  interrupts.CPU54.RES:Rescheduling_interrupts
     76941 ± 28%     -74.0%      19990 ± 31%  interrupts.CPU56.RES:Rescheduling_interrupts
     87428 ± 42%     -61.3%      33857 ± 68%  interrupts.CPU6.RES:Rescheduling_interrupts
     53133 ± 36%     -53.3%      24808 ± 31%  interrupts.CPU61.RES:Rescheduling_interrupts
     20767 ± 17%    +144.6%      50795 ± 16%  interrupts.CPU63.RES:Rescheduling_interrupts
     16906 ± 31%    +208.0%      52070 ± 44%  interrupts.CPU68.RES:Rescheduling_interrupts
     92040 ± 32%     -67.8%      29683 ± 34%  interrupts.CPU77.RES:Rescheduling_interrupts
     84549 ± 98%     -75.4%      20818 ± 67%  interrupts.CPU78.RES:Rescheduling_interrupts
      4939 ± 34%     +60.7%       7939        interrupts.CPU82.NMI:Non-maskable_interrupts
      4939 ± 34%     +60.7%       7939        interrupts.CPU82.PMI:Performance_monitoring_interrupts
     51353 ± 73%     -60.6%      20254 ± 60%  interrupts.CPU88.RES:Rescheduling_interrupts
   4661542 ±  9%     -22.0%    3635159 ± 12%  interrupts.RES:Rescheduling_interrupts
    149972 ±  3%     +29.7%     194472 ± 18%  softirqs.CPU14.RCU
     18486 ±  4%      +8.6%      20078 ±  9%  softirqs.CPU17.SCHED
   4450692 ±  4%      -8.3%    4081585 ±  4%  softirqs.CPU27.NET_RX
    149037 ±  6%     +40.8%     209788 ±  5%  softirqs.CPU27.RCU
   4598629 ±  6%     -17.5%    3794414 ±  8%  softirqs.CPU31.NET_RX
    180791 ± 22%     -18.4%     147584 ±  3%  softirqs.CPU32.RCU
    193339 ± 15%     -15.3%     163696 ± 19%  softirqs.CPU33.RCU
   3691957 ±  4%     -15.8%    3109708 ± 10%  softirqs.CPU48.NET_RX
     17979           +23.6%      22221 ± 21%  softirqs.CPU48.SCHED
    165183 ± 13%     +31.0%     216456 ± 13%  softirqs.CPU49.RCU
     18655            +7.5%      20058 ±  6%  softirqs.CPU5.SCHED
     18636 ±  6%      +9.7%      20448 ±  8%  softirqs.CPU51.SCHED
     18063           +20.3%      21731 ± 16%  softirqs.CPU57.SCHED
    180802 ± 18%     -12.4%     158328 ± 15%  softirqs.CPU58.RCU
    189662 ± 22%     -24.2%     143744 ±  4%  softirqs.CPU75.RCU
     23583 ±  9%     -18.6%      19208        softirqs.CPU77.SCHED
    153164 ±  6%      -8.3%     140393 ±  3%  softirqs.CPU92.RCU
    113634 ± 39%     -69.1%      35137 ± 45%  sched_debug.cfs_rq:/.MIN_vruntime.avg
    756027 ± 17%     -54.7%     342481 ± 45%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
     15952 ± 14%     -23.8%      12157        sched_debug.cfs_rq:/.load.avg
     41020 ± 24%     -50.2%      20427        sched_debug.cfs_rq:/.load.stddev
    225.50 ±  2%     -12.0%     198.54 ±  2%  sched_debug.cfs_rq:/.load_avg.max
    113634 ± 39%     -69.1%      35137 ± 45%  sched_debug.cfs_rq:/.max_vruntime.avg
    756027 ± 17%     -54.7%     342481 ± 45%  sched_debug.cfs_rq:/.max_vruntime.stddev
      0.11 ±  8%     -31.8%       0.08 ± 16%  sched_debug.cfs_rq:/.nr_running.stddev
     56.83 ±  8%     -46.5%      30.42 ± 18%  sched_debug.cfs_rq:/.runnable_load_avg.max
      7.31 ± 12%     -40.4%       4.35 ± 10%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
     14721 ± 15%     -26.0%      10900        sched_debug.cfs_rq:/.runnable_weight.avg
     41574 ± 24%     -49.7%      20899        sched_debug.cfs_rq:/.runnable_weight.stddev
    337450 ± 29%     -42.4%     194494 ± 34%  sched_debug.cfs_rq:/.spread0.avg
   -171171          +108.7%    -357315        sched_debug.cfs_rq:/.spread0.min
      1566 ±  4%     -16.1%       1314 ±  7%  sched_debug.cfs_rq:/.util_avg.max
      1287 ±  8%     -16.8%       1071 ± 11%  sched_debug.cfs_rq:/.util_est_enqueued.max
      4100 ± 42%    +197.5%      12196 ± 42%  sched_debug.cpu.avg_idle.min
      0.00 ± 37%     -31.1%       0.00 ± 10%  sched_debug.cpu.next_balance.stddev
    793.53 ±  8%     +44.6%       1147 ±  7%  sched_debug.cpu.nr_load_updates.stddev
      4.29 ± 11%     -30.1%       3.00 ± 15%  sched_debug.cpu.nr_running.max
      0.60 ±  7%     -21.7%       0.47 ±  9%  sched_debug.cpu.nr_running.stddev


                                                                                
                                stress-ng.sockfd.ops                            
                                                                                
  800000 +-+----------------------------------------------------------------+   
         |             .+   +.+.                                            |   
  750000 +-+.+.+.+.++.+         +.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.|   
         |                                                                  |   
  700000 +-+                                                                |   
         |            O     O O O O O O O OO O O O                          |   
  650000 +-+            O O                        O O O O O O O OO O O O   |   
         O O O O O O                                                        |   
  600000 +-+                                                                |   
         |                                                                  |   
  550000 +-+                                                                |   
         |                                                                  |   
  500000 +-+                                                                |   
         |          O                                                       |   
  450000 +-+----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                            stress-ng.sockfd.ops_per_sec                        
                                                                                
  28000 +-+-----------------------------------------------------------------+   
        |                +.                                                 |   
  26000 +-+            .+  +.+.                                             |   
        |.+.+.+.+.+.+.+        +.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.|   
  24000 +-+                                                                 |   
        |             O    O O   O         O O O                            |   
  22000 +-+             OO     O   O O O O       O O O O O  O   O   O O     |   
        O O O O O O                                        O  O   O     O   |   
  20000 +-+                                                                 |   
        |                                                                   |   
  18000 +-+                                                                 |   
        |                                                                   |   
  16000 +-+         O                                                       |   
        |                                                                   |   
  14000 +-+-----------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


View attachment "config-5.0.0-rc2-00023-gb4822dc7564f0" of type "text/plain" (187906 bytes)

View attachment "job-script" of type "text/plain" (7840 bytes)

View attachment "job.yaml" of type "text/plain" (5524 bytes)

View attachment "reproduce" of type "text/plain" (388 bytes)
