Message-ID: <87io81bm8z.fsf@yhuang-dev.intel.com>
Date: Thu, 27 Aug 2015 08:36:44 +0800
From: kernel test robot <ying.huang@...el.com>
To: Andrea Arcangeli <aarcange@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: [lkp] [mm] 1732e8d84b: -6.4% will-it-scale.per_process_ops
FYI, we noticed the following changes on:
git://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git master
commit 1732e8d84b5fa950d6533284dcc63dfee93e39f1 ("mm: gup: make get_user_pages_fast and __get_user_pages_fast latency conscious")
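
For context on the commit title: "latency conscious" here means bounding how
long the gup fast path can run without a preemption point. The snippet below
is not Andrea's patch, only a minimal userspace sketch of the general
chunk-and-reschedule pattern, with sched_yield() standing in for the kernel's
cond_resched(); CHUNK_PAGES, touch_page() and process_range() are made-up
illustrative names. The might_sleep() annotation that typically accompanies
such a change is what surfaces as ___might_sleep in the perf profile below.

    /* Minimal sketch, not the actual patch: process a range in bounded
     * bursts with an explicit preemption point in between, so that no
     * single burst can monopolize the CPU.  All names are illustrative. */
    #include <sched.h>
    #include <stddef.h>

    #define CHUNK_PAGES 16              /* pages per non-preemptible burst */
    #define PAGE_SZ     4096

    static void touch_page(volatile const char *p)
    {
            (void)*p;                   /* stand-in for the per-page work */
    }

    void process_range(const char *start, size_t nr_pages)
    {
            size_t done = 0;

            while (done < nr_pages) {
                    size_t chunk = nr_pages - done;

                    if (chunk > CHUNK_PAGES)
                            chunk = CHUNK_PAGES;
                    for (size_t i = 0; i < chunk; i++)
                            touch_page(start + (done + i) * PAGE_SZ);
                    done += chunk;
                    sched_yield();      /* preemption point bounds latency */
            }
    }

The flip side of bounding latency is extra per-call work in a hot path, which
is where the regression measured below comes from.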
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
  lkp-sb03/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/futex1

commit:
  7201f254ee6b51b1549ecec1cffe4d59b3e53935
  1732e8d84b5fa950d6533284dcc63dfee93e39f1

7201f254ee6b51b1 1732e8d84b5fa950d6533284dc
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
   5363330 ±  0%      -6.4%    5021398 ±  0%  will-it-scale.per_process_ops
   1810294 ±  0%      -5.0%    1720041 ±  0%  will-it-scale.per_thread_ops
    209850 ±  5%     +13.0%     237196 ±  4%  cpuidle.C1-SNB.usage
     53.75 ± 62%    -100.9%      -0.50 ±-1300%  numa-numastat.node0.other_node
      0.00 ± -1%      +Inf%       1.11 ±  3%  perf-profile.cpu-cycles.___might_sleep.get_user_pages_fast.get_futex_key.futex_wake.do_futex
      2791 ±  2%     +15.8%       3231 ±  1%  vmstat.system.cs
    290504 ±  2%     +16.8%     339313 ±  1%  latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
   4203979 ±  0%      +5.3%    4424982 ±  2%  latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
      2941 ± 15%     -23.0%       2266 ± 11%  numa-vmstat.node0.nr_anon_pages
      3377 ± 13%     +19.7%       4040 ±  6%  numa-vmstat.node1.nr_anon_pages
      2090 ± 27%     +49.7%       3129 ±  0%  numa-vmstat.node1.nr_mapped
     11769 ± 15%     -22.9%       9070 ± 11%  numa-meminfo.node0.AnonPages
     13509 ± 13%     +19.7%      16167 ±  6%  numa-meminfo.node1.AnonPages
      2457 ±  1%      +8.9%       2675 ±  3%  numa-meminfo.node1.KernelStack
      8363 ± 27%     +49.7%      12519 ±  0%  numa-meminfo.node1.Mapped
     11075 ± 24%     +95.4%      21637 ± 17%  sched_debug.cfs_rq[24]:/.avg->runnable_avg_sum
     37436 ± 10%     +29.7%      48571 ± 11%  sched_debug.cfs_rq[24]:/.exec_clock
    854911 ± 11%     +25.9%    1076265 ±  9%  sched_debug.cfs_rq[24]:/.min_vruntime
    241.25 ± 24%     +95.4%     471.50 ± 16%  sched_debug.cfs_rq[24]:/.tg_runnable_contrib
     12705 ± 27%     -32.3%       8597 ±  2%  sched_debug.cfs_rq[25]:/.avg->runnable_avg_sum
     40588 ± 13%     -13.7%      35043 ±  0%  sched_debug.cfs_rq[25]:/.exec_clock
    276.00 ± 27%     -32.3%     186.75 ±  2%  sched_debug.cfs_rq[25]:/.tg_runnable_contrib
    631.75 ±101%     -71.4%     180.75 ±164%  sched_debug.cfs_rq[3]:/.blocked_load_avg
    689.75 ± 96%     -68.1%     220.00 ±134%  sched_debug.cfs_rq[3]:/.tg_load_contrib
      4.50 ± 57%     -94.4%       0.25 ±173%  sched_debug.cfs_rq[6]:/.nr_spread_over
     23334 ± 10%     -41.3%      13699 ± 18%  sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
    508.50 ± 10%     -41.6%     296.75 ± 18%  sched_debug.cfs_rq[8]:/.tg_runnable_contrib
     36.75 ±  1%     +15.6%      42.50 ± 11%  sched_debug.cpu#1.cpu_load[0]
      7238 ± 31%    +167.1%      19330 ± 16%  sched_debug.cpu#1.nr_switches
      8723 ± 23%    +147.9%      21622 ± 10%  sched_debug.cpu#1.sched_count
      3283 ± 33%    +179.5%       9179 ± 18%  sched_debug.cpu#1.sched_goidle
     31624 ± 30%     -44.4%      17585 ± 12%  sched_debug.cpu#11.sched_count
      1170 ± 32%     +84.8%       2163 ± 27%  sched_debug.cpu#18.sched_goidle
      8659 ± 35%    +158.8%      22411 ± 18%  sched_debug.cpu#2.nr_switches
     10200 ± 44%    +200.8%      30686 ± 38%  sched_debug.cpu#2.sched_count
      4063 ± 38%    +167.8%      10881 ± 18%  sched_debug.cpu#2.sched_goidle
      2.00 ±154%     +37.5%       2.75 ± 39%  sched_debug.cpu#21.nr_uninterruptible
     57326 ±  6%     +16.9%      67039 ±  7%  sched_debug.cpu#24.nr_load_updates
     62286 ±  4%      -8.8%      56812 ±  4%  sched_debug.cpu#25.nr_load_updates
      5467 ± 46%    +315.6%      22723 ± 49%  sched_debug.cpu#26.nr_switches
      5588 ± 45%    +312.3%      23036 ± 48%  sched_debug.cpu#26.sched_count
      2274 ± 57%    +357.8%      10412 ± 57%  sched_debug.cpu#26.sched_goidle
      2440 ± 29%     -58.6%       1011 ± 20%  sched_debug.cpu#27.ttwu_local
     39.25 ±  1%      +7.0%      42.00 ±  2%  sched_debug.cpu#3.cpu_load[3]
      4907 ± 50%    +215.8%      15497 ± 49%  sched_debug.cpu#5.nr_switches
      2213 ± 53%    +239.4%       7512 ± 51%  sched_debug.cpu#5.sched_goidle
    370.00 ± 13%     +97.0%     728.75 ± 23%  sched_debug.cpu#6.ttwu_local
    882811 ±  2%     +12.7%     995138 ±  0%  sched_debug.cpu#9.avg_idle
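
Two of the deltas above fit together: the new ___might_sleep cycles under
get_user_pages_fast.get_futex_key.futex_wake and the ~6% drop in per-process
ops. The futex1 testcase hammers exactly that path. A rough userspace
reduction of its hot loop (an assumption about the benchmark, not its actual
source) looks like this:

    /* Rough reduction of a futex1-style hot loop (an assumption, not the
     * actual will-it-scale source).  FUTEX_WAKE without FUTEX_PRIVATE_FLAG
     * is treated as shared, so every call resolves the futex key through
     * get_user_pages_fast even though there is never a waiter to wake,
     * making per-call overhead on that path show up directly in ops. */
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            unsigned int futex_word = 0;
            volatile unsigned long long iterations = 0;

            for (;;) {                  /* the harness samples the counter */
                    syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1,
                            NULL, NULL, 0);
                    iterations++;
            }
    }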
lkp-sb03: Sandy Bridge-EP
Memory: 64G
will-it-scale.per_process_ops

  [plot: bisect-good (*) samples steady near 5.36e+06 ops;
   bisect-bad (O) samples near 5.02e+06 ops]
will-it-scale.per_thread_ops

  [plot: bisect-good (*) samples near 1.81e+06 ops;
   bisect-bad (O) samples near 1.72e+06 ops]
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang