Date:	Thu, 26 Feb 2015 11:27:20 +0800
From:	Huang Ying <ying.huang@...ux.intel.com>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [LKP] [mm] 4d942466994: +4.8% will-it-scale.per_process_ops

FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 4d9424669946532be754a6e116618dcb58430cb4 ("mm: convert p[te|md]_mknonnuma and remaining page table manipulations")
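
For context, this commit is part of the series that removes the dedicated
_PAGE_NUMA helpers and expresses NUMA hinting faults through the ordinary
protection machinery.  As an illustrative (not literal) sketch of the kind
of conversion the commit performs in the hinting-fault path:

	/* Hedged sketch only -- not the actual diff from 4d942466994. */

	/* Before: clearing the NUMA marker used a dedicated helper: */
	pte = pte_mknonnuma(pte);
	set_pte_at(mm, addr, ptep, pte);
	update_mmu_cache(vma, addr, ptep);

	/* After: the pte is made present again with the generic helpers,
	 * restoring the vma's normal protections: */
	pte = pte_modify(pte, vma->vm_page_prot);
	pte = pte_mkyoung(pte);
	set_pte_at(mm, addr, ptep, pte);
	update_mmu_cache(vma, addr, ptep);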


testbox/testcase/testparams: lkp-g5/will-it-scale/performance-readseek3
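
In the table below, the left column is the comparison base (commit
842915f56667f9ee) and the right column is the commit named in the subject;
each row gives the mean, the relative standard deviation, and the percentage
change for one monitored metric.  For readers unfamiliar with the benchmark,
will-it-scale drives a tight syscall loop in N processes or threads and
reports throughput; per_process_ops is the processes-mode figure.  A hedged
sketch of what such a testcase loop looks like (assumed shape; the actual
readseek3 source in the will-it-scale suite may differ, e.g. in buffer size
or per-task file handling):

	/*
	 * Hedged sketch of a will-it-scale style testcase; the harness runs
	 * this loop in each task and periodically samples *iterations to
	 * compute operations per second.
	 */
	#include <assert.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define BUFLEN 4096

	void testcase(unsigned long long *iterations)
	{
		char buf[BUFLEN];
		char tmpfile[] = "/tmp/willitscale.XXXXXX";
		int fd = mkstemp(tmpfile);

		assert(fd >= 0);
		unlink(tmpfile);
		assert(write(fd, buf, BUFLEN) == BUFLEN); /* seed the file */

		while (1) {
			assert(lseek(fd, 0, SEEK_SET) == 0);
			assert(read(fd, buf, BUFLEN) == BUFLEN);
			(*iterations)++; /* one op = one seek + read */
		}
	}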

842915f56667f9ee  4d9424669946532be754a6e116  
----------------  --------------------------  
         %stddev     %change         %stddev
             \          |                \  
   1526912 ±  1%      +4.8%    1599490 ±  0%  will-it-scale.per_process_ops
     15364 ±  4%     +10.1%      16920 ±  5%  will-it-scale.time.involuntary_context_switches
    131290 ±  0%      +4.8%     137653 ±  0%  will-it-scale.time.minor_page_faults
       318 ±  1%      -1.6%        313 ±  0%  will-it-scale.time.elapsed_time.max
       318 ±  1%      -1.6%        313 ±  0%  will-it-scale.time.elapsed_time
        54 ± 25%     -83.8%          8 ± 48%  sched_debug.cfs_rq[53]:/.tg_load_contrib
        11 ± 39%    +316.9%         47 ± 22%  sched_debug.cfs_rq[18]:/.tg_load_contrib
         8 ± 24%     -79.0%          1 ± 47%  sched_debug.cfs_rq[11]:/.nr_spread_over
        46 ±  7%    +271.8%        172 ± 45%  numa-vmstat.node2.nr_inactive_anon
       186 ±  7%    +270.8%        691 ± 44%  numa-meminfo.node2.Inactive(anon)
       642 ± 15%    +116.3%       1388 ± 41%  sched_debug.cpu#2.ttwu_count
        37 ± 17%    +170.3%        100 ± 15%  numa-vmstat.node6.nr_page_table_pages
        19 ± 41%     +98.7%         37 ± 38%  sched_debug.cfs_rq[62]:/.blocked_load_avg
       347 ± 19%    +103.9%        708 ± 49%  sched_debug.cpu#6.sched_goidle
       215 ±  5%    +148.8%        535 ± 31%  sched_debug.cpu#101.sched_goidle
       519 ±  7%    +135.5%       1222 ± 30%  sched_debug.cpu#101.nr_switches
        23 ± 35%     +84.3%         43 ± 33%  sched_debug.cfs_rq[62]:/.tg_load_contrib
       833 ± 18%     +89.1%       1576 ± 47%  sched_debug.cpu#6.nr_switches
   1083043 ±  0%     -55.2%     485505 ±  1%  proc-vmstat.numa_pte_updates
       250 ± 28%    +120.1%        551 ± 41%  sched_debug.cpu#74.ttwu_local
       865 ± 18%     +61.1%       1394 ± 29%  sched_debug.cpu#32.ttwu_local
       515 ± 18%     +81.0%        932 ± 41%  sched_debug.cpu#2.sched_goidle
      1242 ± 22%     +76.4%       2191 ± 39%  sched_debug.cpu#2.nr_switches
        23 ± 17%     +63.9%         38 ± 19%  sched_debug.cfs_rq[96]:/.tg_load_contrib
       302 ±  0%     +88.9%        570 ± 31%  sched_debug.cpu#47.ttwu_local
      3126 ±  6%     +77.9%       5562 ± 36%  sched_debug.cpu#15.sched_count
      1717 ± 30%     +30.5%       2240 ± 21%  sched_debug.cpu#32.ttwu_count
       256 ± 14%     +87.1%        479 ± 22%  sched_debug.cpu#7.ttwu_count
      4200 ± 30%     -55.3%       1878 ± 30%  sched_debug.cpu#67.ttwu_count
      1036 ± 10%     -50.7%        511 ± 35%  sched_debug.cpu#79.ttwu_count
       527 ± 18%     +72.5%        909 ± 34%  sched_debug.cpu#15.ttwu_local
       359 ±  7%     +64.1%        589 ± 17%  sched_debug.cpu#50.ttwu_count
      1692 ± 16%     +34.2%       2271 ± 21%  sched_debug.cpu#32.sched_goidle
       653 ± 14%     -51.7%        315 ± 37%  sched_debug.cpu#79.ttwu_local
       137 ±  3%     +99.1%        272 ± 38%  sched_debug.cpu#40.ttwu_local
      6398 ± 24%     +55.9%       9973 ± 16%  numa-meminfo.node3.SReclaimable
      1599 ± 24%     +55.9%       2493 ± 16%  numa-vmstat.node3.nr_slab_reclaimable
      3669 ± 18%     +33.6%       4900 ± 16%  sched_debug.cpu#32.nr_switches
      3681 ± 18%     +33.4%       4910 ± 16%  sched_debug.cpu#32.sched_count
       546 ±  9%     +73.0%        945 ± 24%  sched_debug.cpu#40.nr_switches
       965 ± 26%     -38.8%        590 ±  4%  sched_debug.cpu#64.sched_goidle
       569 ±  8%     +69.2%        963 ± 23%  sched_debug.cpu#40.sched_count
       225 ±  7%     +79.0%        402 ± 25%  sched_debug.cpu#40.sched_goidle
       199 ±  4%     +53.6%        306 ± 17%  sched_debug.cpu#103.sched_goidle
       112 ±  3%     +44.9%        162 ± 20%  sched_debug.cpu#102.ttwu_local
       198 ± 15%     +71.8%        340 ± 39%  sched_debug.cpu#40.ttwu_count
       475 ±  2%     +51.8%        721 ± 17%  sched_debug.cpu#103.nr_switches
       162 ±  9%     +86.6%        302 ± 25%  sched_debug.cpu#7.ttwu_local
       203 ±  1%     +54.7%        314 ± 15%  sched_debug.cpu#102.sched_goidle
       499 ±  7%     +68.9%        843 ± 35%  sched_debug.cpu#47.ttwu_count
       478 ±  2%     +54.4%        739 ± 16%  sched_debug.cpu#102.nr_switches
       767 ± 16%     +66.6%       1278 ± 25%  sched_debug.cpu#7.nr_switches
      4369 ± 15%     -41.3%       2564 ±  4%  sched_debug.cpu#65.ttwu_count
       169 ± 14%     +32.2%        224 ± 21%  sched_debug.cpu#105.ttwu_local
       559 ±  6%     +40.3%        784 ± 17%  sched_debug.cpu#47.sched_goidle
       770 ±  7%     -28.2%        552 ± 21%  sched_debug.cpu#126.ttwu_local
      1195 ± 21%     +35.0%       1614 ± 16%  cpuidle.C1E-NHM.usage
      1450 ± 17%     +38.8%       2013 ±  8%  sched_debug.cpu#90.ttwu_local
       929 ±  7%     +41.3%       1312 ±  9%  sched_debug.cpu#97.ttwu_count
       506 ±  6%     +27.6%        646 ± 12%  sched_debug.cpu#48.sched_goidle
       914 ±  8%     +54.0%       1407 ± 26%  sched_debug.cpu#50.nr_switches
       401 ±  6%     +52.9%        613 ± 25%  sched_debug.cpu#50.sched_goidle
      1004 ±  9%     +18.8%       1193 ± 12%  slabinfo.xfs_buf.num_objs
      1004 ±  9%     +18.8%       1193 ± 12%  slabinfo.xfs_buf.active_objs
      1891 ±  3%     +26.8%       2397 ±  9%  sched_debug.cpu#120.nr_switches
      1380 ±  6%     -26.6%       1012 ± 10%  sched_debug.cpu#126.ttwu_count
     15695 ± 13%     -21.4%      12338 ±  6%  sched_debug.cfs_rq[113]:/.exec_clock
      1310 ±  5%     +40.7%       1844 ± 19%  sched_debug.cpu#47.nr_switches
      1330 ±  5%     +40.0%       1863 ± 19%  sched_debug.cpu#47.sched_count
      1286 ±  8%     +44.7%       1861 ± 35%  sched_debug.cpu#15.ttwu_count
        21 ± 10%     +17.7%         25 ± 12%  sched_debug.cpu#96.cpu_load[3]
     23482 ± 12%     +29.4%      30397 ±  4%  numa-meminfo.node3.Slab
       566 ± 21%     -30.7%        392 ±  8%  sched_debug.cpu#115.sched_goidle
        22 ± 11%     +17.5%         26 ± 13%  sched_debug.cpu#96.cpu_load[2]
         9 ±  5%     +20.5%         11 ± 11%  sched_debug.cpu#8.cpu_load[1]
         9 ±  5%     +20.5%         11 ± 11%  sched_debug.cfs_rq[8]:/.load
         9 ±  5%     +20.5%         11 ± 11%  sched_debug.cpu#5.load
         9 ±  5%     +20.5%         11 ± 11%  sched_debug.cpu#8.load
      3581 ±  5%     +22.1%       4372 ±  6%  sched_debug.cpu#26.curr->pid
       215 ±  6%     +37.9%        297 ± 14%  sched_debug.cpu#50.ttwu_local
      1331 ± 20%     -31.6%        911 ±  9%  sched_debug.cpu#115.nr_switches
       824 ±  3%     +21.2%        999 ± 14%  sched_debug.cpu#120.sched_goidle
      1339 ± 20%     -31.3%        921 ±  9%  sched_debug.cpu#115.sched_count
        21 ± 10%     +17.9%         24 ± 12%  sched_debug.cpu#96.cpu_load[4]
     42610 ± 11%     -19.9%      34115 ±  4%  sched_debug.cfs_rq[17]:/.exec_clock
      2693 ±  7%     +27.2%       3425 ± 10%  sched_debug.cpu#95.curr->pid
      2899 ± 18%     +26.8%       3677 ± 11%  sched_debug.cpu#90.ttwu_count
         9 ± 17%     -35.3%          6 ± 13%  sched_debug.cpu#20.load
         8 ±  5%     -19.2%          7 ± 10%  sched_debug.cfs_rq[45]:/.load
         9 ± 17%     -35.3%          6 ± 13%  sched_debug.cfs_rq[20]:/.load
         8 ±  5%     -19.2%          7 ± 10%  sched_debug.cpu#45.load
      5131 ± 11%     -20.9%       4061 ±  7%  sched_debug.cpu#59.curr->pid
    999035 ±  0%     -21.4%     785703 ±  1%  sched_debug.cpu#103.avg_idle
      1000 ± 14%     -33.2%        668 ± 32%  numa-meminfo.node7.PageTables
      1146 ± 11%     -20.9%        907 ± 11%  sched_debug.cpu#117.nr_switches
      1157 ± 11%     -20.9%        916 ± 10%  sched_debug.cpu#117.sched_count
    102779 ±  4%     +24.6%     128062 ±  8%  numa-numastat.node0.numa_hit
       506 ± 11%     -20.4%        403 ± 11%  sched_debug.cpu#117.sched_goidle
      1819 ± 10%     +12.1%       2039 ±  9%  sched_debug.cpu#94.ttwu_local
       222 ± 21%     +35.5%        300 ± 30%  sched_debug.cpu#49.ttwu_local
   1000000 ±  0%     -20.2%     797734 ±  2%  sched_debug.cpu#102.avg_idle
       464 ±  5%     +22.0%        567 ± 19%  sched_debug.cpu#123.ttwu_local
       892 ± 17%     -24.9%        669 ± 14%  sched_debug.cpu#59.sched_goidle
       863 ±  2%     +16.5%       1005 ±  6%  sched_debug.cpu#125.ttwu_count
       855 ±  6%     +20.6%       1031 ± 15%  sched_debug.cpu#124.ttwu_count
       790 ±  8%     +20.5%        952 ±  6%  slabinfo.blkdev_ioc.active_objs
       790 ±  8%     +20.5%        952 ±  6%  slabinfo.blkdev_ioc.num_objs
   1000000 ±  0%     -20.5%     795037 ±  1%  sched_debug.cpu#101.avg_idle
       521 ± 25%     -32.9%        349 ±  6%  sched_debug.cpu#116.ttwu_count
   1000000 ±  0%     -18.9%     811355 ±  2%  sched_debug.cpu#37.avg_idle
     99927 ±  5%     +25.0%     124916 ±  7%  numa-numastat.node0.local_node
    998080 ±  1%     -18.7%     811378 ±  3%  sched_debug.cpu#40.avg_idle
         9 ±  5%     +15.2%         10 ±  7%  sched_debug.cpu#5.cpu_load[0]
         9 ±  5%     +15.2%         10 ±  7%  sched_debug.cpu#5.cpu_load[3]
    998768 ±  0%     -18.6%     813469 ±  2%  sched_debug.cpu#39.avg_idle
       513 ±  5%     -25.7%        381 ± 16%  sched_debug.cpu#115.ttwu_count
      4271 ±  8%     +19.5%       5105 ±  8%  numa-vmstat.node3.nr_slab_unreclaimable
     17083 ±  8%     +19.6%      20423 ±  8%  numa-meminfo.node3.SUnreclaim
    991551 ±  1%     -17.3%     819931 ±  3%  sched_debug.cpu#38.avg_idle
      4521 ±  7%     -18.5%       3684 ± 10%  sched_debug.cpu#106.curr->pid
    998706 ±  0%     -17.7%     821652 ±  3%  sched_debug.cpu#36.avg_idle
    996941 ±  0%     -16.8%     829747 ±  2%  sched_debug.cpu#33.avg_idle
       339 ± 10%     -22.4%        263 ± 13%  sched_debug.cpu#115.ttwu_local
    999527 ±  0%     -18.7%     812806 ±  4%  sched_debug.cpu#99.avg_idle
    996295 ±  0%     -16.5%     831915 ±  2%  sched_debug.cpu#35.avg_idle
    997889 ±  0%     -19.6%     802292 ±  3%  sched_debug.cpu#100.avg_idle
      3462 ±  2%     -20.4%       2757 ± 11%  sched_debug.cpu#127.curr->pid
      3207 ± 10%     +20.2%       3855 ±  4%  sched_debug.cpu#74.curr->pid
    470490 ±  4%     -10.6%     420446 ±  5%  numa-numastat.node7.local_node
     21800 ± 49%     -40.6%      12939 ±  3%  numa-meminfo.node4.Active
    474714 ±  4%     -10.6%     424629 ±  5%  numa-numastat.node7.numa_hit
    998931 ±  0%     -16.7%     832405 ±  2%  sched_debug.cpu#34.avg_idle
   1000000 ±  0%     -15.2%     848462 ±  0%  sched_debug.cpu#96.avg_idle
       177 ±  9%     +19.4%        211 ±  5%  sched_debug.cfs_rq[95]:/.tg_runnable_contrib
   1018480 ±  2%     -15.3%     862826 ±  1%  sched_debug.cpu#41.avg_idle
     10768 ±  5%     -20.5%       8559 ±  8%  sched_debug.cfs_rq[127]:/.avg->runnable_avg_sum
       234 ±  6%     -20.7%        185 ±  8%  sched_debug.cfs_rq[127]:/.tg_runnable_contrib
      8154 ±  9%     +18.9%       9698 ±  5%  sched_debug.cfs_rq[95]:/.avg->runnable_avg_sum
    999017 ±  0%     -13.5%     864522 ±  1%  sched_debug.cpu#45.avg_idle
    999665 ±  0%     -14.0%     859582 ±  1%  sched_debug.cpu#44.avg_idle
    996108 ±  0%     -15.6%     840514 ±  4%  sched_debug.cpu#98.avg_idle
    994758 ±  0%     -15.1%     844877 ±  3%  sched_debug.cpu#48.avg_idle
       467 ±  3%     -15.1%        397 ±  9%  sched_debug.cpu#116.sched_goidle
    997346 ±  0%     -13.7%     861125 ±  1%  sched_debug.cpu#42.avg_idle
   1000000 ±  0%     -14.2%     857602 ±  2%  sched_debug.cpu#47.avg_idle
   1000000 ±  0%     -13.8%     862083 ±  1%  sched_debug.cpu#46.avg_idle
   1000000 ±  0%     -13.5%     864694 ±  1%  sched_debug.cpu#43.avg_idle
      2923 ±  0%     +17.9%       3447 ±  4%  sched_debug.cpu#82.curr->pid
       839 ±  0%     +19.0%        998 ± 17%  sched_debug.cpu#124.sched_goidle
       169 ±  5%     +10.2%        186 ±  7%  sched_debug.cfs_rq[90]:/.tg_runnable_contrib
      3870 ± 14%     +17.3%       4538 ±  5%  sched_debug.cpu#27.curr->pid
      7791 ±  5%     +10.0%       8568 ±  7%  sched_debug.cfs_rq[90]:/.avg->runnable_avg_sum
       232 ±  5%     +14.1%        265 ±  4%  sched_debug.cfs_rq[74]:/.tg_runnable_contrib
      2530 ±  3%     +13.7%       2877 ±  6%  numa-vmstat.node5.nr_active_file
     10124 ±  3%     +13.7%      11510 ±  6%  numa-meminfo.node5.Active(file)
      3971 ±  5%     -17.6%       3271 ± 16%  sched_debug.cpu#114.curr->pid
     10699 ±  5%     +14.2%      12218 ±  4%  sched_debug.cfs_rq[74]:/.avg->runnable_avg_sum
    998715 ±  0%     -12.4%     874974 ±  1%  sched_debug.cpu#53.avg_idle
    998753 ±  0%     -12.5%     873481 ±  0%  sched_debug.cpu#55.avg_idle
    995163 ±  0%     -11.8%     878062 ±  1%  sched_debug.cpu#50.avg_idle
    997692 ±  0%     -12.6%     872339 ±  1%  sched_debug.cpu#56.avg_idle
      1072 ±  4%     -15.6%        904 ± 10%  sched_debug.cpu#116.nr_switches
    996316 ±  0%     -12.0%     876849 ±  1%  sched_debug.cpu#52.avg_idle
    997489 ±  0%     -11.9%     878444 ±  1%  sched_debug.cpu#54.avg_idle
         9 ±  5%     -14.3%          8 ±  0%  sched_debug.cpu#9.cpu_load[3]
         9 ±  5%     -14.3%          8 ±  0%  sched_debug.cpu#9.cpu_load[2]
    996313 ±  0%     -12.1%     875903 ±  2%  sched_debug.cpu#49.avg_idle
     15364 ±  4%     +10.1%      16920 ±  5%  time.involuntary_context_switches
      4429 ±  7%     -16.8%       3685 ± 10%  sched_debug.cpu#107.curr->pid
    997203 ±  0%     -11.2%     885706 ±  1%  sched_debug.cpu#64.avg_idle
       737 ±  3%     +16.7%        860 ±  5%  time.voluntary_context_switches
       283 ±  9%     +15.9%        328 ±  3%  sched_debug.cfs_rq[26]:/.tg_runnable_contrib
      2554 ±  1%      +7.6%       2748 ±  5%  numa-vmstat.node3.nr_active_file
     10220 ±  1%      +7.6%      10994 ±  5%  numa-meminfo.node3.Active(file)
     13013 ±  9%     +15.9%      15078 ±  3%  sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
      3896 ±  6%      +6.6%       4155 ±  4%  sched_debug.cpu#29.curr->pid
    999677 ±  0%     -10.9%     891066 ±  0%  sched_debug.cpu#57.avg_idle
      2054 ±  2%     +11.6%       2292 ±  7%  sched_debug.cpu#125.nr_switches
    160853 ±  4%     +14.8%     184682 ±  6%  numa-numastat.node1.local_node
    996830 ±  0%     -11.8%     878821 ±  2%  sched_debug.cpu#97.avg_idle
    999553 ±  0%     -10.4%     895133 ±  0%  sched_debug.cpu#61.avg_idle
    999693 ±  0%     -10.9%     890366 ±  1%  sched_debug.cpu#58.avg_idle
      4686 ±  6%     -12.8%       4085 ±  6%  sched_debug.cpu#58.curr->pid
    997785 ±  0%     -10.9%     889088 ±  1%  sched_debug.cpu#60.avg_idle
    999123 ±  0%     -10.5%     893883 ±  0%  sched_debug.cpu#62.avg_idle
    989119 ±  0%     -10.7%     883255 ±  1%  sched_debug.cpu#51.avg_idle
     12980 ±  8%      +7.5%      13948 ±  4%  sched_debug.cfs_rq[29]:/.avg->runnable_avg_sum
    165070 ±  4%     +14.4%     188856 ±  6%  numa-numastat.node1.numa_hit
    999487 ±  0%     -10.5%     894943 ±  0%  sched_debug.cpu#63.avg_idle
        10 ±  0%     -15.0%          8 ±  5%  sched_debug.cpu#9.cpu_load[1]
       270 ±  8%     -13.4%        234 ±  7%  sched_debug.cpu#117.ttwu_local
    999055 ±  0%      -9.4%     904983 ±  0%  sched_debug.cpu#109.avg_idle
   1000000 ±  0%      -9.2%     907540 ±  0%  sched_debug.cpu#104.avg_idle
   1000000 ±  0%      -9.8%     902218 ±  0%  sched_debug.cpu#111.avg_idle

lkp-g5: Westmere-EX
Memory: 2048G

To reproduce:

	apt-get install ruby
	git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
	cd lkp-tests
	bin/setup-local job.yaml # the job file attached in this email
	bin/run-local   job.yaml
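
(The job.yaml referenced above is attached at the end of this message,
together with a "reproduce" file that presumably records the exact command
sequence the harness ran on lkp-g5.)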


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang


View attachment "job.yaml" of type "text/plain" (1545 bytes)

View attachment "reproduce" of type "text/plain" (9552 bytes)

_______________________________________________
LKP mailing list
LKP@...ux.intel.com
