Message-ID: <1422949504.13951.5.camel@linux.intel.com>
Date: Tue, 03 Feb 2015 15:45:04 +0800
From: Huang Ying <ying.huang@...ux.intel.com>
To: Will Deacon <will.deacon@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [LKP] [mm] 721c21c17ab: +11.7% will-it-scale.per_thread_ops
FYI, we noticed the changes below after applying
commit 721c21c17ab958abf19a8fc611c3bd4743680e38 ("mm: mmu_gather: use tlb->end != 0 only for TLB invalidation").
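As the subject says, that commit keys only the TLB-invalidation step of the mmu_gather flush on tlb->end, while the batched page freeing always runs. Below is a small standalone model of that idea, just for illustration; the struct and function names are made up and this is not the kernel source.

        /* Toy model of "use tlb->end != 0 only for TLB invalidation"; not kernel code. */
        #include <stdio.h>

        struct mmu_gather_model {               /* hypothetical, simplified */
                unsigned long start, end;       /* gathered virtual range, 0 if empty */
                int batched_pages;              /* pages queued for freeing */
        };

        static void flush_tlb_range_model(struct mmu_gather_model *tlb)
        {
                if (!tlb->end)                  /* nothing gathered -> skip TLB invalidation */
                        return;
                printf("invalidate TLB for [%lx, %lx)\n", tlb->start, tlb->end);
                tlb->start = tlb->end = 0;
        }

        static void free_batched_pages_model(struct mmu_gather_model *tlb)
        {
                printf("free %d batched pages\n", tlb->batched_pages);
                tlb->batched_pages = 0;
        }

        static void flush_mmu_model(struct mmu_gather_model *tlb)
        {
                flush_tlb_range_model(tlb);     /* may be a no-op when end == 0 */
                free_batched_pages_model(tlb);  /* freeing is not gated on end */
        }

        int main(void)
        {
                struct mmu_gather_model tlb = { .start = 0, .end = 0, .batched_pages = 3 };
                flush_mmu_model(&tlb);          /* frees the 3 pages even with end == 0 */
                return 0;
        }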
testbox/testcase/testparams: nhm4/will-it-scale/performance-readseek1
       v3.19-rc4        721c21c17ab958abf19a8fc611
  ----------------      --------------------------
         %stddev     %change         %stddev
             \          |                \
      0.56 ±  1%      +5.2%       0.59 ±  1%  will-it-scale.scalability
   1807741 ±  0%      +2.3%    1848641 ±  0%  will-it-scale.per_thread_ops
       740 ± 30%     +40.9%       1043 ± 26%  sched_debug.cpu#4.ttwu_local
      1335 ± 20%     +23.7%       1651 ± 17%  sched_debug.cpu#4.ttwu_count
       506 ±  9%     +40.8%        712 ±  1%  cpuidle.C1-NHM.usage
       120 ±  9%     +33.1%        160 ± 11%  sched_debug.cpu#7.load
       120 ±  9%     +26.2%        151 ± 10%  sched_debug.cfs_rq[7]:/.load
        90 ±  5%     -16.2%         75 ± 16%  sched_debug.cpu#6.cpu_load[4]
        96 ±  7%     +16.7%        112 ± 10%  sched_debug.cfs_rq[2]:/.runnable_load_avg
testbox/testcase/testparams: nhm4/will-it-scale/performance-pread2
       v3.19-rc4        721c21c17ab958abf19a8fc611
  ----------------      --------------------------
         %stddev     %change         %stddev
             \          |                \
    900692 ±  1%     +11.7%    1005724 ±  0%  will-it-scale.per_thread_ops
  28033529 ±  0%      -1.2%   27698665 ±  0%  will-it-scale.time.voluntary_context_switches
       671 ± 22%     +40.4%        942 ± 27%  sched_debug.cfs_rq[7]:/.blocked_load_avg
       802 ± 19%     +30.9%       1049 ± 25%  sched_debug.cfs_rq[7]:/.tg_load_contrib
     44840 ±  6%     +15.6%      51846 ±  6%  meminfo.DirectMap4k
     18284 ±  1%      -7.4%      16926 ±  2%  vmstat.system.in
    378463 ±  0%      -1.2%     373746 ±  0%  vmstat.system.cs
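For context on what per_thread_ops counts in the pread2 case above (the +11.7% mover): each will-it-scale thread essentially re-reads the same file region in a tight loop and counts completed reads. A minimal standalone sketch under that assumption follows; the file path, buffer size, and iteration count are made up for illustration and are not taken from the will-it-scale source.

        /* Assumed shape of a will-it-scale style per-thread pread loop; illustrative only. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        #define BUFLEN 4096                     /* hypothetical read size */

        int main(void)
        {
                char buf[BUFLEN];
                unsigned long ops = 0;
                int fd = open("/tmp/willitscale.dat", O_RDONLY);  /* hypothetical test file */

                if (fd < 0) {
                        perror("open");
                        return 1;
                }

                for (int i = 0; i < 1000000; i++) {
                        if (pread(fd, buf, sizeof(buf), 0) < 0) { /* same offset each time */
                                perror("pread");
                                break;
                        }
                        ops++;                                    /* what per_thread_ops counts */
                }

                printf("ops: %lu\n", ops);
                close(fd);
                return 0;
        }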
testbox/testcase/testparams: nhm4/will-it-scale/performance-readseek3
       v3.19-rc4        721c21c17ab958abf19a8fc611
  ----------------      --------------------------
         %stddev     %change         %stddev
             \          |                \
      0.55 ±  0%      +9.9%       0.60 ±  5%  will-it-scale.scalability
   1791707 ±  0%      +2.9%    1843202 ±  0%  will-it-scale.per_thread_ops
       187 ± 41%    +167.3%        501 ± 23%  sched_debug.cfs_rq[0]:/.blocked_load_avg
       281 ± 29%    +121.3%        622 ± 18%  sched_debug.cfs_rq[0]:/.tg_load_contrib
       110 ±  9%     +25.5%        138 ± 13%  sched_debug.cfs_rq[5]:/.load
       110 ±  9%     +25.9%        138 ± 13%  sched_debug.cpu#5.load
       178 ±  6%     -19.5%        144 ± 16%  sched_debug.cpu#4.cpu_load[1]
        94 ±  6%     +12.9%        107 ±  8%  sched_debug.cfs_rq[3]:/.runnable_load_avg
      1.78 ±  7%     +17.4%       2.09 ±  0%  perf-profile.cpu-cycles.put_page.shmem_file_read_iter.new_sync_read.__vfs_read.vfs_read
       187 ±  9%     -19.1%        152 ± 16%  sched_debug.cpu#4.cpu_load[2]
       757 ±  5%     +10.6%        838 ±  2%  slabinfo.kmalloc-2048.active_objs
      3064 ±  7%      +7.8%       3302 ±  6%  sched_debug.cpu#1.curr->pid
      5.23 ±  2%      +8.8%       5.69 ±  4%  perf-profile.cpu-cycles.security_file_permission.rw_verify_area.vfs_read.sys_read.system_call_fastpath
      3.23 ±  4%      +8.0%       3.48 ±  5%  perf-profile.cpu-cycles.copy_page_to_iter_iovec.copy_page_to_iter.shmem_file_read_iter.new_sync_read.__vfs_read
      4216 ±  7%      +7.5%       4531 ±  5%  slabinfo.kmalloc-192.active_objs
nhm4: Nehalem
Memory: 4G

lkp-sbx04: Sandy Bridge-EX
Memory: 64G
will-it-scale.per_thread_ops (ASCII plot, summarized):
  [O] bisect-bad samples  cluster around 1.00e+06 - 1.03e+06 ops
  [*] bisect-good samples cluster around 8.8e+05 - 9.1e+05 ops
To reproduce:

        apt-get install ruby ruby-oj
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/setup-local job.yaml   # the job file is attached to this email
        bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Huang, Ying