Message-ID: <20180914090118.GG7632@shao2-debian>
Date:   Fri, 14 Sep 2018 17:01:18 +0800
From:   kernel test robot <rong.a.chen@...el.com>
To:     Juergen Gross <jgross@...e.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Stephen Rothwell <sfr@...b.auug.org.au>, lkp@...org
Subject: [LKP] [x86/xen] f030aade91: will-it-scale.per_process_ops 5.6% improvement

Greetings,

FYI, we noticed a 5.6% improvement in will-it-scale.per_process_ops due to commit:


commit: f030aade9165080f3539fb86fc2ce9ffc391813c ("x86/xen: Move pv specific parts of arch/x86/xen/mmu.c to mmu_pv.c")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master

in testcase: will-it-scale
on test machine: 8-thread Ivy Bridge with 16GB of memory
with the following parameters:

	nr_task: 100%
	mode: process
	test: poll2
	ucode: 0x20
	cpufreq_governor: performance

test-description: Will It Scale takes a testcase and runs it from 1 up to n parallel copies to see whether the testcase scales. It builds both process-based and thread-based variants of the test in order to expose any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale



Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-7/performance/x86_64-rhel-7.2/process/100%/debian-x86_64-2018-04-03.cgz/lkp-ivb-d01/poll2/will-it-scale/0x20

commit: 
  28c11b0f79 ("x86/xen: Move pv irq related functions under CONFIG_XEN_PV umbrella")
  f030aade91 ("x86/xen: Move pv specific parts of arch/x86/xen/mmu.c to mmu_pv.c")

28c11b0f798c6727 f030aade9165080f3539fb86fc 
---------------- -------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
           :4           25%           1:4     dmesg.RIP:__do_page_fault
          1:4          -25%            :4     dmesg.RIP:__pte_alloc
           :4           25%           1:4     dmesg.RIP:skb_copy_datagram_iter
          1:4          -25%            :4     kmsg.c91>]usb_hcd_irq
          1:4          -25%            :4     kmsg.d7641b>]usb_hcd_irq
           :4           25%           1:4     kmsg.dea1619>]usb_hcd_irq
           :4           25%           1:4     kmsg.ef1c654>]usb_hcd_irq
          1:4          -25%            :4     kmsg.effc28>]usb_hcd_irq
           :4           25%           1:4     kmsg.f222649>]usb_hcd_irq
           :4           25%           1:4     kmsg.f7cf435>]usb_hcd_irq
          0:4            0%           0:4     perf-profile.children.cycles-pp.poll_select_set_timeout
          0:4            0%           0:4     perf-profile.self.cycles-pp.poll_select_set_timeout
         %stddev     %change         %stddev
             \          |                \  
    401137            +5.6%     423557        will-it-scale.per_process_ops
   3209103            +5.6%    3388457        will-it-scale.workload
      5.70            -1.7%       5.61        boot-time.dhcp
      0.00 ± 17%      -0.0        0.00 ± 38%  mpstat.cpu.soft%
      3839 ± 90%    +304.9%      15546 ± 55%  cpuidle.C3.usage
      7549 ±  3%    +124.5%      16951 ± 51%  cpuidle.C6.usage
      0.82            -7.4%       0.76 ±  2%  perf-stat.cpi
      8419            +7.3%       9036 ±  4%  perf-stat.instructions-per-iTLB-miss
      1.21            +8.0%       1.31 ±  2%  perf-stat.ipc
     42479 ±  6%     +10.9%      47103 ±  5%  sched_debug.cfs_rq:/.load.stddev
    731344 ± 12%     -32.9%     490912 ± 10%  sched_debug.cpu.avg_idle.avg
     42994 ±  6%      +9.8%      47202 ±  5%  sched_debug.cpu.load.stddev
      3830 ± 91%    +305.6%      15535 ± 55%  turbostat.C3
      7544 ±  3%    +124.6%      16944 ± 51%  turbostat.C6
      0.08 ±101%    +500.0%       0.51 ± 61%  turbostat.CPU%c3
     46.98            +1.6%      47.76        turbostat.CorWatt
      0.02 ±100%   +1375.0%       0.29 ± 80%  turbostat.Pkg%pc3
     51.87            +1.5%      52.65        turbostat.PkgWatt
     46.78            -4.4       42.40        perf-profile.calltrace.cycles-pp.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
     86.64            -0.9       85.77        perf-profile.calltrace.cycles-pp.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
     89.84            -0.7       89.16        perf-profile.calltrace.cycles-pp.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
     91.35            -0.6       90.75        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
     91.68            -0.6       91.12        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
      0.70 ±  5%      +0.1        0.77 ±  4%  perf-profile.calltrace.cycles-pp.__might_fault._copy_from_user.do_sys_poll.__x64_sys_poll.do_syscall_64
      3.27 ±  2%      +0.2        3.43 ±  3%  perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string._copy_from_user.do_sys_poll.__x64_sys_poll.do_syscall_64
      3.66            +0.2        3.87        perf-profile.calltrace.cycles-pp.__entry_SYSCALL_64_trampoline
      3.67            +0.3        3.94        perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
      4.25 ±  2%      +0.3        4.52 ±  2%  perf-profile.calltrace.cycles-pp._copy_from_user.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
     44.61            -4.6       39.99        perf-profile.children.cycles-pp.__fget_light
     89.08            -0.7       88.36        perf-profile.children.cycles-pp.do_sys_poll
     89.92            -0.7       89.23        perf-profile.children.cycles-pp.__x64_sys_poll
     91.42            -0.6       90.82        perf-profile.children.cycles-pp.do_syscall_64
     91.76            -0.5       91.21        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
      3.29 ±  2%      +0.2        3.45 ±  3%  perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
      3.75            +0.2        3.95        perf-profile.children.cycles-pp.__entry_SYSCALL_64_trampoline
      4.32 ±  2%      +0.3        4.59        perf-profile.children.cycles-pp._copy_from_user
      4.18            +0.3        4.48        perf-profile.children.cycles-pp.syscall_return_via_sysret
      4.54 ±  2%      +0.4        4.94        perf-profile.children.cycles-pp.__fdget
     42.30            -4.9       37.43        perf-profile.self.cycles-pp.__fget_light
      0.42 ±  4%      +0.0        0.46 ±  4%  perf-profile.self.cycles-pp.___might_sleep
      0.35            +0.1        0.42 ±  3%  perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
      3.24 ±  2%      +0.2        3.40 ±  3%  perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
      3.75            +0.2        3.95        perf-profile.self.cycles-pp.__entry_SYSCALL_64_trampoline
      2.24            +0.3        2.50 ±  3%  perf-profile.self.cycles-pp.__fdget
      4.17 ±  2%      +0.3        4.47        perf-profile.self.cycles-pp.syscall_return_via_sysret
     36.22            +3.5       39.71        perf-profile.self.cycles-pp.do_sys_poll


                                                                                
                            will-it-scale.per_process_ops                       
                                                                                
  440000 +-+----------------------------------------------------------------+   
         |                                                                  |   
  430000 +-+  O  O O                                         O    O      O  |   
  420000 O-+O         O O  O         O  O  O O    O  O O  O    O    O  O    O   
         |                    O O  O            O                           |   
  410000 +-+                                                                |   
         |                   .+.                +.+..  +                    |   
  400000 +-+ .+.. .+.. .+..+.   +..             :     +                     |   
         |..+    +    +            +.+     +   :     +                      |   
  390000 +-+                          :   ::   :                            |   
  380000 +-+                          :   : : :                             |   
         |                             : :  : :                             |   
  370000 +-+                           : :   :                              |   
         |                              +    +                              |   
  360000 +-+----------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong, Chen

View attachment "config-4.19.0-rc2-00006-gf030aad" of type "text/plain" (167672 bytes)

View attachment "job-script" of type "text/plain" (6990 bytes)

View attachment "job.yaml" of type "text/plain" (4609 bytes)

View attachment "reproduce" of type "text/plain" (308 bytes)
