Date:	Wed, 30 Mar 2016 13:46:31 +0800
From:	kernel test robot <xiaolong.ye@...el.com>
To:	Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc:	LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [mm] 39a1aa8e19: will-it-scale.per_process_ops +5.2%
 improvement

FYI, we noticed a +5.2% improvement in will-it-scale.per_process_ops due to your commit.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 39a1aa8e194ab67983de3b9d0b204ccee12e689a ("mm: deduplicate memory overcommitment code")


=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/ivb42/malloc1/will-it-scale

commit: 
  ea606cf5d8df370e7932460dfd960b21f20e7c6d
  39a1aa8e194ab67983de3b9d0b204ccee12e689a

ea606cf5d8df370e 39a1aa8e194ab67983de3b9d0b 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    101461 ±  0%      +5.2%     106703 ±  0%  will-it-scale.per_process_ops
      0.10 ±  0%     +31.8%       0.13 ±  0%  will-it-scale.scalability
      6966 ±  8%      -9.9%       6278 ± 10%  meminfo.AnonHugePages
  62767556 ± 10%     -24.3%   47486848 ±  7%  cpuidle.C3-IVT.time
    232686 ±  5%     -18.4%     189919 ±  9%  cpuidle.C3-IVT.usage
   6823970 ±  3%      -8.9%    6214309 ±  4%  cpuidle.C6-IVT.usage
     66703 ±  0%     +12.2%      74872 ±  2%  numa-vmstat.node0.numa_other
  37441585 ±  0%     +26.4%   47319887 ±  0%  numa-vmstat.node1.numa_hit
  37417000 ±  0%     +26.4%   47303041 ±  0%  numa-vmstat.node1.numa_local
     24584 ±  0%     -31.5%      16845 ± 11%  numa-vmstat.node1.numa_other
      6.15 ± 31%     +36.2%       8.37 ± 15%  sched_debug.cpu.cpu_load[4].stddev
     20260 ±  6%     +13.4%      22965 ±  5%  sched_debug.cpu.nr_switches.min
      4450 ±  9%     +16.5%       5183 ±  8%  sched_debug.cpu.ttwu_local.max
    920.59 ± 10%     +15.0%       1058 ±  7%  sched_debug.cpu.ttwu_local.stddev
  2.59e+08 ±  0%     +13.9%   2.95e+08 ±  0%  numa-numastat.node0.local_node
  2.59e+08 ±  0%     +13.9%   2.95e+08 ±  0%  numa-numastat.node0.numa_hit
     10.25 ±119%  +75378.0%       7736 ± 19%  numa-numastat.node0.other_node
 1.097e+08 ±  0%     +28.0%  1.404e+08 ±  0%  numa-numastat.node1.local_node
 1.097e+08 ±  0%     +28.0%  1.404e+08 ±  0%  numa-numastat.node1.numa_hit
      9285 ±  0%     -83.1%       1568 ± 98%  numa-numastat.node1.other_node
 3.687e+08 ±  0%     +18.1%  4.354e+08 ±  0%  proc-vmstat.numa_hit
 3.687e+08 ±  0%     +18.1%  4.354e+08 ±  0%  proc-vmstat.numa_local
  52281716 ±  0%     +11.1%   58060959 ±  1%  proc-vmstat.pgalloc_dma32
 3.943e+08 ±  0%     +16.7%  4.603e+08 ±  0%  proc-vmstat.pgalloc_normal
 1.854e+08 ±  0%     +18.0%  2.187e+08 ±  0%  proc-vmstat.pgfault
 4.465e+08 ±  0%     +16.1%  5.183e+08 ±  0%  proc-vmstat.pgfree
      0.00 ± -1%      +Inf%       2.36 ± 12%  perf-profile.cycles-pp.__split_vma.isra.36.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
      2.38 ±  8%    -100.0%       0.00 ± -1%  perf-profile.cycles-pp.__split_vma.isra.37.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
      4.37 ± 31%     +32.9%       5.80 ± 15%  perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
      4.37 ± 31%     +32.9%       5.80 ± 15%  perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init.start_kernel
      4.08 ± 34%     +39.8%       5.70 ± 15%  perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
      0.89 ±  6%     +16.3%       1.04 ±  5%  perf-profile.cycles-pp.perf_event_aux.part.46.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.start_kernel.x86_64_start_reservations.x86_64_start_kernel
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.x86_64_start_kernel
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.x86_64_start_reservations.x86_64_start_kernel


ivb42: Ivytown Ivy Bridge-EP
Memory: 64G



                              will-it-scale.scalability

   0.14 ++-----------OOOO-O-------------------------------------------------+
  0.135 ++               O      OOO                                         |
        |          OO      O O                                              |
   0.13 OOOOOO OOOO         O OO   OOOOOOOO                                 |
  0.125 ++    O                                                             |
        |                                                                   |
   0.12 ++                                                                  |
  0.115 ++                                                                  |
   0.11 ++                                                                  |
        |                                       *                           |
  0.105 ++        *                        ***** :                          |
    0.1 ********** **     *****************      *****************          |
        |          * *   *                                        ** *  *****
  0.095 ++            ***                                           * **    |
   0.09 ++------------------------------------------------------------------+


	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong Ye

View attachment "job.yaml" of type "text/plain" (3464 bytes)

View attachment "reproduce" of type "text/plain" (4566 bytes)
