Message-ID: <20160804085409.GI19697@yexl-desktop>
Date: Thu, 4 Aug 2016 16:54:09 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [shmem] 071904e8df: meminfo.AnonHugePages +553.5% increase

FYI, we noticed a +553.5% increase in meminfo.AnonHugePages due to commit:
commit 071904e8dfed9525f9da86523caf78b6da5f9e7e ("shmem: get_unmapped_area align huge page")
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
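
For context, the commit teaches shmem's get_unmapped_area hook to hand out 2 MiB-aligned addresses so mappings of tmpfs/shmem objects can be backed by PMD-sized huge pages. Below is a minimal userspace sketch of the same alignment trick, not code from the commit; the memfd name, sizes, and error handling are illustrative, and memfd_create() needs glibc 2.27+:

/*
 * Userspace sketch of the alignment idea: reserve extra address
 * space, round the start up to the next 2 MiB boundary, then map
 * the shmem file there so the kernel can back it with huge pages
 * (when shmem THP is enabled).  Build: gcc -o align align.c
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define HPAGE_SIZE (2UL << 20)   /* assumes x86_64 2 MiB PMD pages */
#define LEN        (16UL << 20)  /* 16 MiB demo file, not lkp's 16G */

int main(void)
{
	int fd = memfd_create("align-demo", 0);   /* anonymous shmem file */
	if (fd < 0 || ftruncate(fd, LEN) < 0) {
		perror("memfd");
		return 1;
	}

	/* Reserve LEN plus one huge page of slack... */
	void *raw = mmap(NULL, LEN + HPAGE_SIZE, PROT_NONE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap reserve");
		return 1;
	}

	/* ...and round the start up to a 2 MiB boundary inside it. */
	uintptr_t aligned = ((uintptr_t)raw + HPAGE_SIZE - 1)
			    & ~(HPAGE_SIZE - 1);

	void *p = mmap((void *)aligned, LEN, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_FIXED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap shmem");
		return 1;
	}

	memset(p, 0, LEN);   /* fault the pages in */
	printf("mapping at %p, 2 MiB aligned: %s\n", p,
	       ((uintptr_t)p & (HPAGE_SIZE - 1)) ? "no" : "yes");
	return 0;
}

The kernel-side change does the equivalent inside shmem's get_unmapped_area, so every mmap of a large enough shmem object gets an aligned address without such userspace tricks.
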
in testcase: vm-scalability
on test machine: 128 threads, 4-socket Haswell-EP with 512G memory
with the following parameters:

	path_params: 300s-16G-shm-pread-rand-mt-performance
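
The testcase name reads as a multithreaded random-pread workload against shared memory. A rough analogue follows, just to picture the access pattern; the thread count, sizes, and shm name are illustrative assumptions, not lkp's actual parameters (the real job is defined by the attached job.yaml):

/*
 * Rough analogue of shm-pread-rand-mt: several threads issue
 * random 4 KiB preads against one shared-memory file.
 * Build: gcc -o shm-pread shm-pread.c -lpthread -lrt
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

#define SIZE    (256UL << 20)   /* 256 MiB demo, not the 16G job */
#define THREADS 4               /* the real box runs 128 threads */
#define READS   100000L

static int fd;

static void *reader(void *arg)
{
	unsigned int seed = (unsigned int)(uintptr_t)arg + 1;
	const long npages = SIZE / 4096;
	char buf[4096];

	for (long i = 0; i < READS; i++) {
		/* pick a random page-aligned offset and read it */
		off_t off = (off_t)(rand_r(&seed) % npages) * 4096;
		if (pread(fd, buf, sizeof(buf), off) < 0)
			perror("pread");
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[THREADS];

	fd = shm_open("/lkp-demo", O_CREAT | O_RDWR, 0600);
	if (fd < 0 || ftruncate(fd, SIZE) < 0) {
		perror("shm_open");
		return 1;
	}

	for (uintptr_t t = 0; t < THREADS; t++)
		pthread_create(&tid[t], NULL, reader, (void *)t);
	for (int t = 0; t < THREADS; t++)
		pthread_join(tid[t], NULL);

	shm_unlink("/lkp-demo");
	return 0;
}
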
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-6/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/16G/lkp-hsw-4ep1/shm-pread-rand-mt/vm-scalability
commit:
6a028aa3ca ("shmem: prepare huge= mount option and sysfs knob")
071904e8df ("shmem: get_unmapped_area align huge page")
6a028aa3ca32379e 071904e8dfed9525f9da86523c
---------------- --------------------------
                 %stddev      %change         %stddev
                     \            |               \
20428 ± 9% +553.5% 133500 ± 7% meminfo.AnonHugePages
42717 ± 4% +261.3% 154340 ± 6% meminfo.AnonPages
2535349 ± 24% +54.8% 3925808 ± 19% vm-scalability.time.voluntary_context_switches
1212317 ± 14% +31.0% 1588359 ± 11% softirqs.SCHED
11490 ± 13% +35.0% 15517 ± 14% vmstat.system.cs
6813495 ± 17% +38.2% 9413129 ± 15% perf-stat.context-switches
266843 ± 19% +49.2% 398085 ± 18% perf-stat.cpu-migrations
0.62 ± 35% +77.6% 1.09 ± 18% perf-profile.cycles-pp.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.61 ± 35% +77.7% 1.08 ± 19% perf-profile.cycles-pp.shmem_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.51 ± 51% +92.0% 0.98 ± 19% perf-profile.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_mm_fault.__do_page_fault
36974 ± 18% +38.5% 51200 ± 16% sched_debug.cpu.nr_switches.avg
38184 ± 18% +37.4% 52448 ± 16% sched_debug.cpu.sched_count.avg
18934 ± 19% +40.6% 26614 ± 16% sched_debug.cpu.ttwu_count.avg
2788 ± 34% +225.2% 9067 ± 15% numa-vmstat.node0.nr_anon_pages
2570 ± 37% +313.9% 10639 ± 12% numa-vmstat.node1.nr_anon_pages
2719 ± 40% +237.0% 9162 ± 8% numa-vmstat.node2.nr_anon_pages
2589 ± 55% +252.8% 9135 ± 18% numa-vmstat.node3.nr_anon_pages
75541854 ± 25% +70.2% 1.286e+08 ± 18% cpuidle.C1-HSW.time
1892699 ± 24% +57.3% 2976810 ± 18% cpuidle.C1-HSW.usage
19337873 ± 18% +51.3% 29267473 ± 15% cpuidle.C1E-HSW.time
145279 ± 18% +62.8% 236549 ± 15% cpuidle.C1E-HSW.usage
31668 ± 25% +68.3% 53295 ± 20% cpuidle.POLL.usage
10677 ± 4% +259.7% 38406 ± 6% proc-vmstat.nr_anon_pages
77.40 ±214% +69851.2% 54142 ± 2% proc-vmstat.numa_huge_pte_updates
3725 ± 39% +19571.4% 732918 ± 22% proc-vmstat.numa_pages_migrated
106473 ± 79% +25945.5% 27731613 ± 2% proc-vmstat.numa_pte_updates
3725 ± 39% +19571.4% 732918 ± 22% proc-vmstat.pgmigrate_success
13.10 ± 25% +11712.2% 1547 ± 20% proc-vmstat.thp_deferred_split_page
0.00 ± -1% +Inf% 33156 ± 81% latency_stats.avg.wait_on_page_bit.do_huge_pmd_numa_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3686 ± 14% +678.3% 28688 ± 29% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
951.90 ± 11% +1921.0% 19238 ± 31% latency_stats.max.call_rwsem_down_write_failed_killable.SyS_mprotect.entry_SYSCALL_64_fastpath
1135 ± 16% +2155.5% 25613 ± 50% latency_stats.max.call_rwsem_down_write_failed_killable.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 35138 ± 80% latency_stats.max.wait_on_page_bit.do_huge_pmd_numa_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
18.20 ±300% +1.4e+05% 25519 ± 21% latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.do_huge_pmd_numa_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
39813 ± 20% -85.5% 5783 ± 31% latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 37259 ± 82% latency_stats.sum.wait_on_page_bit.do_huge_pmd_numa_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
5098 ± 62% +523.6% 31795 ± 16% numa-meminfo.node0.AnonHugePages
11172 ± 34% +228.1% 36658 ± 15% numa-meminfo.node0.AnonPages
4400 ± 80% +741.6% 37033 ± 11% numa-meminfo.node1.AnonHugePages
10286 ± 37% +317.6% 42959 ± 12% numa-meminfo.node1.AnonPages
5716 ± 74% +462.2% 32141 ± 9% numa-meminfo.node2.AnonHugePages
10885 ± 40% +240.3% 37038 ± 8% numa-meminfo.node2.AnonPages
5200 ± 97% +511.2% 31788 ± 18% numa-meminfo.node3.AnonHugePages
10360 ± 55% +256.6% 36942 ± 18% numa-meminfo.node3.AnonPages
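
For scale: meminfo values are in kB, and a PMD-sized THP on x86_64 is 2 MiB (2048 kB), so the AnonHugePages jump above is roughly 10 huge pages (20428 kB) before the patch versus roughly 65 (133500 kB) after. That is small in absolute terms against the 16G working set, but a large relative change, which is what the robot flags.
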
meminfo.AnonPages
180000 ++------------------------O----O-----------------------------------+
| O O OOO O O O OO O O |
160000 O+O OO OOOO OOOO OOOO OO O O O OO |
140000 +O O O O O |
| O O |
120000 ++ |
100000 ++ |
| |
80000 ++ |
60000 ++ |
| * * * |
40000 ++ *************** ************** ** ****** *********** ******
20000 **** : : *** **** |
| : |
0 ++-----------------*-----------------------------------------------+
meminfo.AnonHugePages
160000 ++------------------------O----O-----------------------------------+
| O O O OO O |
140000 O+OO O OO O O O OOOO O O O OO |
120000 ++ O O O O O O OO OO O O O |
|O O O O O |
100000 ++ |
| |
80000 ++ |
| |
60000 ++ |
40000 ++ |
| |
20000 ++ ***********************
******************* ************************* |
0 ++-----------------*-----------------------------------------------+
proc-vmstat.nr_anon_pages
45000 ++------------------------O----O------------------------------------+
| O O O OO OO O OO O O |
40000 O+O OO OOO OOOO OOO O OO O OO O OO |
35000 +O O O O O |
| O O |
30000 ++ |
25000 ++ |
| |
20000 ++ |
15000 ++ |
| * * * |
10000 ++ *************** ************** ** ***** * *********** ******
5000 **** : : **** **** |
| : |
0 ++-----------------*------------------------------------------------+
proc-vmstat.numa_pte_updates
3e+07 ++-----O----O----O-O----------------------------------------------+
OOOOOOO OOOO OOOOOO OOOOO OOOOOOOO OOOOO |
2.5e+07 ++ O O O O |
| |
| |
2e+07 ++ |
| |
1.5e+07 ++ |
| |
1e+07 ++ |
| |
| |
5e+06 ++ |
| |
0 *******************************************************************
proc-vmstat.numa_huge_pte_updates
60000 ++------------------------------------------------------------------+
OOOOO OOOOOOOOOOOOO O O OOOO OOOOOOO O OOO |
50000 ++ O O O O O O OO O |
| |
| |
40000 ++ |
| |
30000 ++ |
| |
20000 ++ |
| |
| |
10000 ++ |
| |
0 *********************************************************************
proc-vmstat.numa_pages_migrated
1.2e+06 ++----------------------------------------------------------------+
| O O |
1e+06 ++ O O O O |
|O O O O O OO |
| OO O O |
800000 O+O OO O O O OO O |
| O O O O O OO OO O |
600000 ++ O O O |
| O O O O |
400000 ++ |
| O |
| |
200000 ++ |
| * |
0 *******************************************************************
proc-vmstat.pgmigrate_success
1.2e+06 ++----------------------------------------------------------------+
| O O |
1e+06 ++ O O O O |
|O O O O O OO |
| OO O O |
800000 O+O OO O O O OO O |
| O O O O O OO OO O |
600000 ++ O O O |
| O O O O |
400000 ++ |
| O |
| |
200000 ++ |
| * |
0 *******************************************************************
proc-vmstat.thp_deferred_split_page
2500 ++-------------------------------------------------------------------+
| O O |
| O O O O |
2000 +O O O O O OO |
| OO O O O O |
O O OO OO OO O |
1500 ++ O O O O OO O OO |
| OO O |
1000 ++ O O O |
| O |
| |
500 ++ |
| |
| * |
0 ******-***************************************************************
numa-meminfo.node0.AnonHugePages
45000 ++------------------------------------------------------------------+
| O |
40000 ++ OO O |
35000 ++ O O O O O |
O OO O O O O OO OO O O |
30000 ++ OOOOOO OOO O OO O O |
25000 +O O O O O OO |
| O |
20000 ++ |
15000 ++ * |
| : |
10000 ++ : ** *
5000 +* * * * * * * * * : : : : * |
|:* ::**** * :: * *** * :* * ::* :: :*:: ***: :****** *** ** *|
0 *+-*-*---**-****-****-*---*--*-*--*--*-**--***---*-*------*---------+
numa-meminfo.node2.AnonHugePages
50000 ++------------------------------------------------------------------+
45000 ++ O O |
O O O O |
40000 ++OO O O OO O O |
35000 +O O O O O OO O O |
| O OOOO O OO O O O OOO O |
30000 ++ O O O O O OO |
25000 ++ |
20000 ++ |
| |
15000 ++ *|
10000 ++ * * * * * * :|
| ** * ** * * : * * * :*** :** :: : ::* * *
5000 ++*** **** * **** :::* :****** * *** : ** : * * ** :** * * |
0 **-------*---------*-*--**--------**----**---*-------------**-------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong
Attachments:
	config-4.7.0-rc7-00246-g071904e (text/plain, 151329 bytes)
	job.yaml (text/plain, 3893 bytes)
	reproduce (text/plain, 930 bytes)