Message-ID: <1423987608.5538.18.camel@intel.com>
Date: Sun, 15 Feb 2015 16:06:48 +0800
From: Huang Ying <ying.huang@...el.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [LKP] [x86_64, entry] 2a23c6b8a9c: +3.5%
aim9.creat-clo.ops_per_sec, -64.5% aim9.time.user_time
FYI, we noticed the following changes on
commit 2a23c6b8a9c42620182a2d2cfc7c16f6ff8c42b4 ("x86_64, entry: Use sysret to return to userspace when possible")
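For reference, the patch under test makes the kernel's return-to-userspace path prefer SYSRET over the slower IRET whenever the saved register state allows it. Below is a minimal C-style sketch of that idea, assuming the usual x86_64 user segment selectors; it is illustrative only, not the actual entry_64.S assembly (the real code performs additional checks, e.g. that the return RIP is canonical):

    /*
     * Illustrative sketch only -- not the actual entry_64.S change.
     * SYSRET is faster than IRET but can only restore a restricted
     * register state, so the fast path applies when the saved user
     * frame already matches what SYSRET would produce anyway.
     */
    struct saved_frame {
            unsigned long ip, cs, flags, sp, ss;
            unsigned long cx, r11;
    };

    #define USER_CS       0x33UL      /* __USER_CS selector on x86_64 */
    #define USER_SS       0x2bUL      /* __USER_DS selector on x86_64 */
    #define EFLAGS_RF_TF  0x10100UL   /* RF|TF: SYSRET cannot restore these */

    static int can_return_with_sysret(const struct saved_frame *f)
    {
            return f->cs  == USER_CS  &&         /* returning to user code segment  */
                   f->ss  == USER_SS  &&         /* ... with the standard user SS   */
                   f->cx  == f->ip    &&         /* SYSRET reloads RIP from RCX     */
                   f->r11 == f->flags &&         /* SYSRET reloads RFLAGS from R11  */
                   !(f->flags & EFLAGS_RF_TF);   /* no trap/resume flags pending    */
    }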
testbox/testcase/testparams: lkp-wsx02/aim9/performance-300s-creat-clo
b926e6f61a26036e 2a23c6b8a9c42620182a2d2cfc
---------------- --------------------------
  value ±%stddev      %change      value ±%stddev    metric
23.75 ± 1% -64.5% 8.44 ± 1% aim9.time.user_time
276 ± 0% +5.5% 291 ± 0% aim9.time.system_time
533594 ± 1% +3.5% 552311 ± 0% aim9.creat-clo.ops_per_sec
1 ± 47% -100.0% 0 ± 0% numa-numastat.node2.other_node
6024 ± 40% -72.3% 1668 ± 48% sched_debug.cpu#10.ttwu_count
6408 ± 45% -71.8% 1806 ± 25% sched_debug.cpu#70.sched_goidle
12980 ± 44% -70.5% 3833 ± 23% sched_debug.cpu#70.nr_switches
6420 ± 47% -76.7% 1494 ± 36% sched_debug.cpu#66.ttwu_count
1328 ± 40% -76.0% 319 ± 16% sched_debug.cfs_rq[18]:/.exec_clock
2329 ± 42% -78.5% 501 ± 37% sched_debug.cpu#10.ttwu_local
815 ± 48% -53.3% 380 ± 43% sched_debug.cfs_rq[30]:/.exec_clock
5427 ± 40% -75.0% 1355 ± 28% sched_debug.cpu#18.ttwu_count
953 ± 45% -62.8% 355 ± 26% sched_debug.cfs_rq[70]:/.exec_clock
1 ± 34% +160.0% 3 ± 33% sched_debug.cpu#56.nr_uninterruptible
63 ± 37% -62.4% 24 ± 23% sched_debug.cfs_rq[3]:/.blocked_load_avg
4838 ± 27% -63.1% 1787 ± 31% sched_debug.cpu#10.sched_goidle
5901 ± 44% -67.6% 1914 ± 25% sched_debug.cpu#66.sched_goidle
4884 ± 28% -64.5% 1733 ± 26% sched_debug.cpu#18.sched_goidle
12006 ± 43% -66.0% 4077 ± 23% sched_debug.cpu#66.nr_switches
770 ± 44% -48.4% 397 ± 42% sched_debug.cfs_rq[6]:/.exec_clock
9861 ± 26% -61.0% 3847 ± 30% sched_debug.cpu#10.nr_switches
9983 ± 28% -62.7% 3723 ± 24% sched_debug.cpu#18.nr_switches
23.75 ± 1% -64.5% 8.44 ± 1% time.user_time
50 ± 45% +143.1% 122 ± 19% sched_debug.cfs_rq[50]:/.blocked_load_avg
55 ± 46% +126.1% 125 ± 19% sched_debug.cfs_rq[50]:/.tg_load_contrib
4591 ± 47% -62.5% 1723 ± 13% sched_debug.cpu#30.sched_goidle
9347 ± 46% -60.5% 3687 ± 12% sched_debug.cpu#30.nr_switches
70 ± 47% +73.9% 121 ± 31% sched_debug.cfs_rq[45]:/.tg_load_contrib
12075 ± 23% -36.3% 7687 ± 4% sched_debug.cpu#70.nr_load_updates
1757 ± 21% -36.0% 1124 ± 27% sched_debug.cfs_rq[63]:/.min_vruntime
16039 ± 36% -39.9% 9638 ± 3% sched_debug.cpu#6.nr_load_updates
56756 ± 4% -35.5% 36623 ± 3% softirqs.RCU
11883 ± 24% -33.7% 7873 ± 4% sched_debug.cpu#66.nr_load_updates
15180 ± 31% -38.8% 9297 ± 3% sched_debug.cpu#14.nr_load_updates
4133 ± 47% -54.2% 1893 ± 31% sched_debug.cpu#2.sched_goidle
8430 ± 46% -49.4% 4265 ± 36% sched_debug.cpu#2.nr_switches
36424 ± 2% +44.3% 52546 ± 3% slabinfo.kmalloc-256.active_objs
36804 ± 2% +43.8% 52915 ± 3% slabinfo.kmalloc-256.num_objs
1149 ± 2% +43.8% 1653 ± 3% slabinfo.kmalloc-256.num_slabs
1149 ± 2% +43.8% 1653 ± 3% slabinfo.kmalloc-256.active_slabs
12722 ± 8% -27.6% 9209 ± 2% sched_debug.cpu#18.nr_load_updates
750 ± 43% +65.8% 1244 ± 16% sched_debug.cpu#38.nr_switches
758 ± 42% +65.0% 1251 ± 15% sched_debug.cpu#38.sched_count
287 ± 40% +70.1% 488 ± 21% sched_debug.cpu#38.sched_goidle
13470 ± 26% -30.7% 9336 ± 6% sched_debug.cpu#22.nr_load_updates
0.00 ± 26% +45.8% 0.00 ± 11% sched_debug.rt_rq[20]:/.rt_time
11704 ± 16% -24.1% 8881 ± 1% sched_debug.cpu#30.nr_load_updates
161 ± 47% +60.4% 258 ± 10% sched_debug.cpu#54.ttwu_local
952 ± 1% +11.9% 1065 ± 5% slabinfo.Acpi-State.num_slabs
952 ± 1% +11.9% 1065 ± 5% slabinfo.Acpi-State.active_slabs
48596 ± 1% +11.9% 54376 ± 5% slabinfo.Acpi-State.num_objs
48595 ± 1% +11.6% 54246 ± 5% slabinfo.Acpi-State.active_objs
475081 ± 4% -10.5% 425176 ± 3% cpuidle.C6-NHM.usage
861 ± 10% +18.9% 1024 ± 2% numa-meminfo.node2.PageTables
2694 ± 8% +13.6% 3059 ± 8% numa-vmstat.node3.nr_slab_reclaimable
10779 ± 8% +13.6% 12240 ± 8% numa-meminfo.node3.SReclaimable
4610 ± 3% -6.6% 4305 ± 5% sched_debug.cfs_rq[64]:/.tg_load_avg
4627 ± 4% -6.6% 4323 ± 4% sched_debug.cfs_rq[63]:/.tg_load_avg
4518 ± 3% -6.1% 4241 ± 5% sched_debug.cfs_rq[70]:/.tg_load_avg
1677 ± 4% -11.9% 1478 ± 2% vmstat.system.cs
1509 ± 1% +2.5% 1546 ± 1% vmstat.system.in
testbox/testcase/testparams: wsm/will-it-scale/performance-unlink2
b926e6f61a26036e 2a23c6b8a9c42620182a2d2cfc
---------------- --------------------------
  value ±%stddev      %change      value ±%stddev    metric
36.57 ± 0% -39.1% 22.28 ± 1% will-it-scale.time.user_time
192292 ± 1% +3.1% 198236 ± 1% will-it-scale.per_thread_ops
990 ± 0% +1.4% 1004 ± 0% will-it-scale.time.system_time
205532 ± 0% +2.0% 209720 ± 0% will-it-scale.per_process_ops
0.48 ± 0% -1.5% 0.47 ± 0% will-it-scale.scalability
36.57 ± 0% -39.1% 22.28 ± 1% time.user_time
554 ± 37% -36.0% 354 ± 43% sched_debug.cfs_rq[4]:/.tg_load_contrib
583050 ± 15% -20.4% 463937 ± 21% sched_debug.cfs_rq[1]:/.min_vruntime
70523 ± 16% -19.8% 56555 ± 20% sched_debug.cfs_rq[1]:/.exec_clock
63 ± 19% +48.2% 93 ± 8% sched_debug.cfs_rq[5]:/.runnable_load_avg
80 ± 11% -18.0% 66 ± 13% sched_debug.cpu#1.cpu_load[4]
55 ± 4% +26.6% 70 ± 14% sched_debug.cpu#3.cpu_load[4]
82 ± 10% -17.4% 67 ± 14% sched_debug.cpu#1.cpu_load[3]
60 ± 2% +22.1% 73 ± 7% sched_debug.cpu#3.cpu_load[3]
83 ± 10% -16.3% 69 ± 14% sched_debug.cpu#1.cpu_load[2]
90559 ± 8% -17.7% 74537 ± 14% sched_debug.cpu#1.nr_load_updates
67 ± 2% +18.3% 79 ± 8% sched_debug.cpu#5.cpu_load[4]
1.29 ± 7% -6.8% 1.20 ± 7% perf-profile.cpu-cycles.security_inode_init_security.shmem_mknod.shmem_create.vfs_create.do_last
65 ± 2% +15.8% 75 ± 5% sched_debug.cpu#3.cpu_load[2]
68 ± 11% +20.2% 81 ± 4% sched_debug.cpu#7.cpu_load[4]
71 ± 8% +19.0% 84 ± 1% sched_debug.cpu#7.cpu_load[3]
2526 ± 5% -7.0% 2349 ± 5% sched_debug.cpu#8.curr->pid
25711 ± 6% +10.6% 28438 ± 5% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
29189 ± 9% +12.3% 32783 ± 4% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
636 ± 10% +12.3% 714 ± 4% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
75 ± 6% +15.3% 86 ± 2% sched_debug.cpu#7.cpu_load[2]
lkp-wsx02: Westmere-EX
Memory: 128G
wsm: Westmere
Memory: 6G
time.user_time
28 ++---------------------------------------------------------------------+
26 ++ .* |
| * * *. * :*.* .***.* .***. |
24 +*.** + *.* *.***.***.* * .***.**.* .* : * * ** **.* *.**
22 *+ * ** ** * * |
| |
20 ++ |
18 ++ |
16 ++ |
| |
14 ++ |
12 ++ |
| |
10 ++ O OO OOO OOO OOO OO OOO OOO OO OO OOO O |
8 OO-OO---------------------------------O------OO-OO-OOO-O---------------+
aim9.time.user_time
28 ++---------------------------------------------------------------------+
26 ++ .* |
| * * *. * :*.* .***.* .***. |
24 +*.** + *.* *.***.***.* * .***.**.* .* : * * ** **.* *.**
22 *+ * ** ** * * |
| |
20 ++ |
18 ++ |
16 ++ |
| |
14 ++ |
12 ++ |
| |
10 ++ O OO OOO OOO OOO OO OOO OOO OO OO OOO O |
8 OO-OO---------------------------------O------OO-OO-OOO-O---------------+
aim9.time.system_time
294 ++--------------------------------------------------------------------+
292 OO O |
| OOO OOO OOO OO OOO OOO OOO OOO OOO O O OOO OO O |
290 ++ OO OO |
288 ++ |
| |
286 ++ |
284 ++ |
282 ++ |
| |
280 ++ |
278 *+ * *. * |
|*.** + **.**.** .* .***.***.** ** + *. *.* **.***.*|
276 ++ ***.* * ** * *.** **.** :*.* *
274 ++-----------------------------------------*--------------*-----------+
slabinfo.kmalloc-256.active_objs
60000 ++------------------------------------------------------------------+
| |
55000 ++ O O |
O OO OO O O O O O O O OO OO OO |
|O O O O OOO OO O O O O O O O |
50000 ++ O O O O O |
| |
45000 ++ |
| |
40000 ++ |
| * * .* *. * *. |
| **.* *. * ***.***.* **. *.* ***. :* :* :: * *.* :*. * **
35000 **.* ** * :+ * ** * + * *.* * * * * |
| * * |
30000 ++------------------------------------------------------------------+
slabinfo.kmalloc-256.num_objs
60000 ++------------------------------------------------------------------+
| |
55000 ++ O O O |
O OO OO O OO O OO O O O O OO OO OO |
|O O O O O O O O O O O O O O |
50000 ++ O O O |
| |
45000 ++ |
| |
40000 ++ |
| .* .* ** ** * .* *. * *. |
* **.* *.** .*** ** ***. *.* + *. : *.* * :: ***.* :*.** **
35000 +*.* ** * ** ** * * * |
| |
30000 ++------------------------------------------------------------------+
slabinfo.kmalloc-256.active_slabs
1800 ++-------------------------------------------------------------------+
| O O O |
1700 O+ OO OO O O O O O O OOO O O |
1600 +O O OOO O O O O OO O O O |
| O O O O O O O O |
1500 ++ |
| |
1400 ++ |
| |
1300 ++ |
1200 ++ *. * * |
| * * *. **. * * **.* : * + **. **. :*. *.*|
1100 *+ :*.** .** + * * * *.** .* *.* : : ** * ***.* ** *
|*.* * * * * * |
1000 ++-------------------------------------------------------------------+
slabinfo.kmalloc-256.num_slabs
1800 ++-------------------------------------------------------------------+
| O O O |
1700 O+ OO OO O O O O O O OOO O O |
1600 +O O OOO O O O O OO O O O |
| O O O O O O O O |
1500 ++ |
| |
1400 ++ |
| |
1300 ++ |
1200 ++ *. * * |
| * * *. **. * * **.* : * + **. **. :*. *.*|
1100 *+ :*.** .** + * * * *.** .* *.* : : ** * ***.* ** *
|*.* * * * * * |
1000 ++-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file is attached to this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Huang, Ying