Message-ID: <53ABABDE.1010704@intel.com>
Date: Thu, 26 Jun 2014 13:13:02 +0800
From: Jet Chen <jet.chen@...el.com>
To: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
CC: Andrew Morton <akpm@...ux-foundation.org>, LKP <lkp@...org>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: [mempolicy] 5507231dd04: -18.2% vm-scalability.migrate_mbps
Hi Naoya,
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git am437x-starterkit
commit 5507231dd04d3d68796bafe83e6a20c985a0ef68 ("mempolicy: apply page table walker on queue_pages_range()")
test case: ivb44/vm-scalability/300s-migrate
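
For context, the path this test exercises is the one visible in the profile below:
sys_migrate_pages -> do_migrate_pages -> migrate_to_node -> queue_pages_range.
A minimal userspace sketch of that kind of call follows (illustrative only, not
the actual vm-scalability case; the node numbers and the raw-syscall usage are
assumptions):

/*
 * Illustrative sketch only: ask the kernel to migrate the calling
 * process's pages from NUMA node 0 to node 1 via migrate_pages(2),
 * which enters the sys_migrate_pages path shown in the profile.
 * Node numbers and the raw syscall are assumptions, not taken from
 * the vm-scalability test case.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
        unsigned long old_nodes = 1UL << 0;     /* source mask: node 0 */
        unsigned long new_nodes = 1UL << 1;     /* target mask: node 1 */
        /* maxnode: how many bits of each mask the kernel should read */
        long left = syscall(SYS_migrate_pages, getpid(),
                            8 * sizeof(unsigned long),
                            &old_nodes, &new_nodes);

        if (left < 0)
                perror("migrate_pages");
        else
                printf("pages that could not be moved: %ld\n", left);
        return 0;
}

On success migrate_pages(2) returns the number of pages that could not be
moved, so 0 means everything in the source node was migrated.
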
8c81f3eeb336567 5507231dd04d3d68796bafe83
--------------- -------------------------
347258 ~ 0% -18.2% 284195 ~ 0% TOTAL vm-scalability.migrate_mbps
0.00 +Inf% 0.94 ~ 7% TOTAL perf-profile.cpu-cycles._raw_spin_lock.__walk_page_range.walk_page_range.queue_pages_range.migrate_to_node
11.49 ~ 1% -100.0% 0.00 ~ 0% TOTAL perf-profile.cpu-cycles.vm_normal_page.queue_pages_range.migrate_to_node.do_migrate_pages.SYSC_migrate_pages
69.40 ~ 0% -100.0% 0.00 ~ 0% TOTAL perf-profile.cpu-cycles.queue_pages_range.migrate_to_node.do_migrate_pages.SYSC_migrate_pages.sys_migrate_pages
3.68 ~ 3% -100.0% 0.00 ~ 0% TOTAL perf-profile.cpu-cycles.vm_normal_page.migrate_to_node.do_migrate_pages.SYSC_migrate_pages.sys_migrate_pages
0.00 +Inf% 4.51 ~ 2% TOTAL perf-profile.cpu-cycles.vm_normal_page.__walk_page_range.walk_page_range.queue_pages_range.migrate_to_node
0.00 +Inf% 8.36 ~ 1% TOTAL perf-profile.cpu-cycles.__walk_page_range.walk_page_range.queue_pages_range.migrate_to_node.do_migrate_pages
1.17 ~ 4% -100.0% 0.00 ~ 0% TOTAL perf-profile.cpu-cycles._raw_spin_lock.queue_pages_range.migrate_to_node.do_migrate_pages.SYSC_migrate_pages
0.00 +Inf% 9.30 ~ 2% TOTAL perf-profile.cpu-cycles.vm_normal_page.queue_pages_pte.__walk_page_range.walk_page_range.queue_pages_range
0.00 +Inf% 63.92 ~ 1% TOTAL perf-profile.cpu-cycles.queue_pages_pte.__walk_page_range.walk_page_range.queue_pages_range.migrate_to_node
61 ~32% +363.8% 286 ~10% TOTAL numa-vmstat.node0.nr_unevictable
257 ~30% +345.5% 1147 ~10% TOTAL numa-meminfo.node0.Unevictable
1133 ~ 8% +129.0% 2596 ~ 0% TOTAL meminfo.Unevictable
282 ~ 8% +129.1% 647 ~ 0% TOTAL proc-vmstat.nr_unevictable
93913 ~ 7% -49.8% 47172 ~ 3% TOTAL softirqs.RCU
113808 ~ 1% -45.4% 62087 ~ 0% TOTAL softirqs.SCHED
362197 ~ 0% -32.9% 243163 ~ 0% TOTAL cpuidle.C6-IVT.usage
1.49 ~ 3% -19.6% 1.20 ~ 4% TOTAL perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
743815 ~ 2% -20.3% 592628 ~ 6% TOTAL proc-vmstat.pgmigrate_fail
310 ~ 6% +16.6% 362 ~ 8% TOTAL numa-vmstat.node1.nr_unevictable
1243 ~ 6% +16.5% 1448 ~ 8% TOTAL numa-meminfo.node1.Unevictable
1230 ~ 6% +16.6% 1434 ~ 8% TOTAL numa-meminfo.node1.Mlocked
307 ~ 6% +16.7% 358 ~ 8% TOTAL numa-vmstat.node1.nr_mlock
3943910 ~ 0% -12.3% 3459206 ~ 0% TOTAL proc-vmstat.pgfault
4402 ~ 3% -13.4% 3812 ~ 5% TOTAL numa-meminfo.node1.KernelStack
15303 ~ 7% -17.5% 12621 ~ 9% TOTAL slabinfo.kmalloc-192.num_objs
15301 ~ 7% -17.5% 12621 ~ 9% TOTAL slabinfo.kmalloc-192.active_objs
30438 ~ 0% +91.0% 58142 ~ 0% TOTAL time.involuntary_context_switches
162 ~ 3% +81.9% 296 ~ 0% TOTAL time.system_time
53 ~ 3% +81.1% 96 ~ 0% TOTAL time.percent_of_cpu_this_job_got
2586283 ~ 0% -18.5% 2107842 ~ 0% TOTAL time.minor_page_faults
48619 ~ 0% -18.1% 39800 ~ 0% TOTAL time.voluntary_context_switches
2037 ~ 0% -17.7% 1677 ~ 0% TOTAL vmstat.system.in
2206 ~ 0% -4.7% 2101 ~ 0% TOTAL vmstat.system.cs
~ 1% -3.6% ~ 1% TOTAL turbostat.Cor_W
~ 1% -2.2% ~ 1% TOTAL turbostat.Pkg_W
2.17 ~ 0% -1.4% 2.14 ~ 0% TOTAL turbostat.%c0
Legend:
~XX% - stddev percent
[+-]XX% - change percent
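
The profile shift above lines up with the commit title: the open-coded scan in
queue_pages_range() is replaced by the generic page table walker, so cycles
previously attributed to queue_pages_range()/vm_normal_page() now show up under
walk_page_range()/__walk_page_range() and a per-pte callback (queue_pages_pte),
with the page-table spinlock appearing under the walker. A minimal sketch of
that pattern follows (illustrative only, not the actual patch; it uses the
mm_walk interface as it looks in this kernel series, with a made-up callback
and counter, and assumes the caller holds mmap_sem):

/*
 * Illustrative sketch of the page-table-walker pattern, not the actual
 * mempolicy patch.  Uses the mm_walk interface of this kernel (a
 * .pte_entry callback plus walk->private); the names below are made up.
 * Caller is assumed to hold mm->mmap_sem at least for read.
 */
#include <linux/mm.h>

struct qp_count {                       /* hypothetical per-walk cookie */
        unsigned long nr_present;
};

static int qp_pte_entry(pte_t *pte, unsigned long addr,
                        unsigned long next, struct mm_walk *walk)
{
        struct qp_count *qc = walk->private;

        /*
         * Invoked once per pte in the range.  In the profile above, the
         * per-pte vm_normal_page() work and the page-table spinlock now
         * show up under this walker path instead of queue_pages_range().
         */
        if (pte_present(*pte))
                qc->nr_present++;
        return 0;                       /* non-zero aborts the walk */
}

static unsigned long count_present_ptes(struct mm_struct *mm,
                                        unsigned long start,
                                        unsigned long end)
{
        struct qp_count qc = { 0 };
        struct mm_walk walk = {
                .pte_entry = qp_pte_entry,
                .mm        = mm,
                .private   = &qc,
        };

        walk_page_range(start, end, &walk);
        return qc.nr_present;
}

Compared with an open-coded pte loop, each pte now goes through an indirect
callback plus the walker's bookkeeping, which is consistent with the higher
system time and the ~18% migrate_mbps drop reported above.
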
time.system_time
300 O+O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O-------------------------+
| |
280 ++ |
260 ++ |
| |
240 ++ |
| |
220 ++ |
| |
200 ++ |
180 ++ |
| .*. .* .*. |
160 *+*.. .*. .* *..*.*.*.. .*.*..*.*.*. + .*..*.*. .*.*.*. *.*..*.|
| * *. * * *. *
140 ++--------------------------------------------------------------------+
time.percent_of_cpu_this_job_got
100 ++--------------------------------------------------------------------+
95 O+O O O O O O O O O O O O O O O O O O |
| O |
90 ++ |
85 ++ |
| |
80 ++ |
75 ++ |
70 ++ |
| |
65 ++ |
60 ++ |
| .*. |
55 *+*.. .*. .*.*.*..*.*.*.. .*.*..*.*.*..*. .*..*.*. .*. .*. *.*..*.|
50 ++---*---*----------------*---------------*--------*----*-------------*
time.minor_page_faults
2.7e+06 ++----------------------------------------------------------------+
| .*. .* * |
2.6e+06 *+*.*.. .*. * *.*. + + + .*..*. .*.*.*..*.*. .*.*..*.*.|
| * *.*.*.. + * * * * *
2.5e+06 ++ * |
| |
2.4e+06 ++ |
| |
2.3e+06 ++ |
| |
2.2e+06 ++ |
| O |
2.1e+06 ++ O O O O O O |
O O O O O O O O O O O O |
2e+06 ++-----O----------------------------------------------------------+
time.voluntary_context_switches
50000 ++------------------------------------------------------------------+
*. .*. .*. .*..*.*. .*.. .*.*..*. .*..*.*.*. .*. .*. .*.|
48000 ++*. * *..*.*.* * * *.* *. * *. *
| |
| |
46000 ++ |
| |
44000 ++ |
| |
42000 ++ |
| |
| |
40000 ++ O O O O O |
| O O O O O O O O O |
38000 O+O----O-O------O---O-----------------------------------------------+
time.involuntary_context_switches
60000 ++------------------------------------------------------------------+
O O O O O O O O O O O O O O |
55000 ++ O O O O O O |
| |
| |
50000 ++ |
| |
45000 ++ |
| |
40000 ++ |
| |
| |
35000 ++ |
| .*..*.*. |
30000 *+*--*-*-*-*--*-*-*--------*-*--*-*-*--*-*-*-*--*-*-*-*--*-*-*-*--*-*
vm-scalability.migrate_mbps
360000 ++-----------------------------------------------------------------+
| .*.. .*.*..*. .* *. .*. .*.. |
350000 *+* *.*.*.*..*.* * + .. * *.*..*.*.* *.*.*.*..*.*.|
340000 ++ * *
| |
330000 ++ |
320000 ++ |
| |
310000 ++ |
300000 ++ |
| |
290000 ++ |
280000 ++ O O O O O O O |
O O O O O O O O O O O O |
270000 ++-----O-----------------------------------------------------------+
vmstat.system.in
2100 ++-------------------------------------------------------------------+
| .*..*.*.*.. .*.*..*.*
2000 ++ * *.* |
| + |
1900 *+*..*. .*.. .*. .*. .*..*.*.*..*.*.*.*..* |
| * *.* *. * |
1800 ++ |
| |
1700 ++ O O O O O |
| O O O O O O O O |
1600 ++ |
| |
1500 O+ O O |
| O O O O |
1400 ++-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Jet