Message-ID: <53D865D3.7030308@intel.com>
Date: Wed, 30 Jul 2014 11:26:11 +0800
From: Aaron Lu <aaron.lu@...el.com>
To: Daniel Borkmann <dborkman@...hat.com>
CC: "David S. Miller" <davem@...emloft.net>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [LKP] [net] 8f61059a96c: +55.7% netperf.Throughput_Mbps

FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master

commit 8f61059a96c2a29c1cc5a39dfe23d06ef5b4b065 ("net: sctp: improve timer slack calculation for transport HBs")
test case: lkp-wsx02/netperf/300s-200%-10K-SCTP_STREAM_MANY
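
For context, the commit reworks how the slack (jitter) for the transport's
heartbeat (HB) timer is computed. Below is a minimal before/after sketch of
that calculation; it is inferred from the commit title and the net/sctp code
of that period, so the exact form and helper names (sctp_jitter,
prandom_u32_max) are an illustration, not the verbatim patch:

  /* Before (sketch): slack from an open-coded PRNG with static,
   * CPU-shared state, giving RTO +/- 50%:
   *
   *   TMO = RTO + (RAND() % RTO) - (RTO / 2)
   *
   * After (sketch): the same interval drawn from the kernel's per-CPU
   * PRNG, with no shared state and no divide-by-zero guard needed:
   *
   *   TMO = (RTO / 2) + prandom_u32_max(RTO)
   */
  unsigned long sctp_transport_timeout(struct sctp_transport *trans)
  {
          /* RTO +/- 50% slack: a value in [RTO/2, 3*RTO/2) */
          unsigned long timeout = (trans->rto >> 1) +
                                  prandom_u32_max(trans->rto);

          /* Add the HB interval when heartbeats are enabled. */
          if (trans->param_flags & SPP_HB_ENABLE)
                  timeout += trans->hbinterval;

          return timeout + jiffies;
  }

If the old helper indeed kept its PRNG state in a single static variable,
that cache line would bounce between all CPUs on every HB timer reset under
this 200%-load many-stream workload, which fits the profile below:
sctp_transport_timeout falls from 10.60% to 0.03% of cycles, and the
mod_timer/lock_timer_base path all but disappears.
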
eb1ac820c61d0d8  8f61059a96c2a29c1cc5a39df
---------------  -------------------------
      1349 ~ 2%     +55.7%       2101 ~ 1%  TOTAL netperf.Throughput_Mbps
    170805 ~12%    +398.9%     852164 ~14%  TOTAL cpuidle.C1-NHM.usage
      4.18 ~ 5%    -100.0%       0.00 ~ 0%  TOTAL perf-profile.cpu-cycles._raw_spin_lock_irqsave.mod_timer.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork
      4.42 ~ 2%     -99.5%       0.02 ~18%  TOTAL perf-profile.cpu-cycles.lock_timer_base.isra.34.mod_timer.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork
     10.60 ~ 2%     -99.7%       0.03 ~14%  TOTAL perf-profile.cpu-cycles.sctp_transport_timeout.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter
      1.77 ~ 4%     -78.8%       0.38 ~ 4%  TOTAL perf-profile.cpu-cycles.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter.sctp_do_sm.sctp_assoc_bh_rcv
      3.42 ~11%    +196.0%      10.11 ~ 5%  TOTAL perf-profile.cpu-cycles._raw_spin_lock.free_one_page.__free_pages_ok.__free_pages.__free_kmem_pages
      3.52 ~10%    +174.1%       9.64 ~ 6%  TOTAL perf-profile.cpu-cycles._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.alloc_kmem_pages_node.kmalloc_large_node
  14540720 ~17%    +162.1%   38114453 ~15%  TOTAL cpuidle.C1-NHM.time
  18604327 ~ 7%     -43.0%   10606401 ~12%  TOTAL cpuidle.C1E-NHM.time
   5.4e+09 ~ 2%     +55.7%  8.406e+09 ~ 1%  TOTAL proc-vmstat.pgfree
  5.29e+09 ~ 2%     +55.4%   8.22e+09 ~ 1%  TOTAL proc-vmstat.pgalloc_normal
      0.17 ~10%     +52.9%       0.27 ~21%  TOTAL turbostat.%c1
      1294 ~ 4%     -17.2%       1071 ~11%  TOTAL numa-vmstat.node0.nr_page_table_pages
      5176 ~ 4%     -17.2%       4288 ~11%  TOTAL numa-meminfo.node0.PageTables
      1.58 ~ 3%     +20.3%       1.90 ~ 0%  TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_kmem_pages_node.kmalloc_large_node.__kmalloc_node_track_caller
      2.60 ~ 1%     +16.3%       3.02 ~ 1%  TOTAL perf-profile.cpu-cycles.memcpy.sctp_packet_transmit_chunk.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter
      6.49 ~ 0%     +12.8%       7.32 ~ 2%  TOTAL perf-profile.cpu-cycles.copy_user_generic_string.skb_copy_datagram_iovec.sctp_recvmsg.sock_common_recvmsg.sock_recvmsg
     11700 ~ 4%     +18.7%      13891 ~ 7%  TOTAL proc-vmstat.pgmigrate_success
     11700 ~ 4%     +18.7%      13891 ~ 7%  TOTAL proc-vmstat.numa_pages_migrated
    139702 ~ 1%     +16.3%     162521 ~ 5%  TOTAL proc-vmstat.numa_hint_faults
       977 ~ 2%     +16.0%       1133 ~ 8%  TOTAL proc-vmstat.pgmigrate_fail
      6.81 ~ 0%     +12.2%       7.64 ~ 2%  TOTAL perf-profile.cpu-cycles.copy_user_generic_string.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg
      4.98 ~ 0%     +11.8%       5.57 ~ 2%  TOTAL perf-profile.cpu-cycles.memcpy.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter.sctp_do_sm
    160357 ~ 1%     +14.2%     183117 ~ 4%  TOTAL proc-vmstat.numa_pte_updates
      1653 ~ 3%      -8.5%       1512 ~ 4%  TOTAL numa-vmstat.node0.nr_alloc_batch
    310335 ~ 2%     +55.9%     483905 ~ 1%  TOTAL vmstat.system.cs

Legend:
        ~XX%    - stddev percent
        [+-]XX% - change percent

netperf.Throughput_Mbps
2300 ++-------------------------------------------------------------------+
2200 O+OO O O |
| O O OO OO O OO O O O O O O |
2100 ++ O O O O O O O |
2000 ++ O |
| O |
1900 ++ |
1800 ++ |
1700 ++ |
| |
1600 ++ |
1500 ++ |
| *. |
1400 *+**.*.**.*. *.**.*.**.*.**.*.**.*. *.**. .**.*. : *.**.*. .*.**.|
1300 ++----------*----------------------*-----*------*---------**-**------*

vmstat.system.cs
650000 ++-----------------------------------------------------------------+
| |
600000 ++ O |
| O O |
550000 O+ O O |
| |
500000 ++ OO OO O OO OO O O O O O |
| O O O O O O O O |
450000 ++ O |
| |
400000 ++ |
| |
350000 ++ |
*. *.* .*.* *. .* .*.**.**.*. *.* *.*. .**.*.**. *.* |
300000 ++*---*----*-*--*--*-**-----------*---*-*----**---------**-*-*---*-*

[*] bisect-good sample
[O] bisect-bad sample

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Aaron

Attachment: "reproduce" (text/plain, 14081 bytes)