Date:	Wed, 18 Jun 2014 13:06:32 +0800
From:	Jet Chen <jet.chen@...el.com>
To:	Andy Lutomirski <luto@...capital.net>
CC:	Fengguang Wu <fengguang.wu@...el.com>,
	Dave Hansen <dave.hansen@...el.com>,
	LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [x86_64,vsyscall] 21d4ab4881a: -11.1% will-it-scale.per_process_ops

Hi Andy,

We noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git x86/vsyscall
commit 21d4ab4881ad9b257bec75d04480105dad4336e1 ("x86_64,vsyscall: Move all of the gate_area code to vsyscall_64.c")

test case: lkp-wsx01/will-it-scale/signal1

a7781f1035319a7  21d4ab4881ad9b257bec75d04
---------------  -------------------------
     259032 ~ 1%     -11.1%     230288 ~ 0%  TOTAL will-it-scale.per_process_ops
       0.04 ~ 9%   +4276.2%       1.84 ~ 1%  TOTAL perf-profile.cpu-cycles.map_id_up.do_tkill.sys_tgkill.system_call_fastpath.raise
       2.36 ~ 0%     -63.8%       0.85 ~ 2%  TOTAL perf-profile.cpu-cycles._atomic_dec_and_lock.free_uid.__sigqueue_free.__dequeue_signal.dequeue_signal
       2.25 ~14%     -55.2%       1.01 ~ 1%  TOTAL perf-profile.cpu-cycles.raise
      42.41 ~ 0%     +34.5%      57.04 ~ 0%  TOTAL perf-profile.cpu-cycles.__sigqueue_alloc.__send_signal.send_signal.do_send_sig_info.do_send_specific
      40.70 ~ 0%     -23.9%      30.96 ~ 0%  TOTAL perf-profile.cpu-cycles.__sigqueue_free.part.11.__dequeue_signal.dequeue_signal.get_signal_to_deliver.do_signal
        252 ~11%     -18.8%        204 ~ 9%  TOTAL numa-vmstat.node1.nr_page_table_pages
       1012 ~11%     -18.3%        827 ~ 9%  TOTAL numa-meminfo.node1.PageTables
        520 ~ 7%     -17.1%        431 ~ 5%  TOTAL cpuidle.C1-NHM.usage

Legend:
	~XX%    - stddev percent
	[+-]XX% - change percent


                             will-it-scale.per_process_ops

   270000 ++-----------------------------------------------------------------+
          |                                                                  |
   260000 *+.*..*..     .*..  .*..  .*..*...  .*..   *..  .*..   *..     .*..*
          |        *..*.    *.    *.        *.     ..   *.     ..   *..*.    |
   250000 ++                                      *           *              |
          |                                                                  |
   240000 ++                                                                 |
          |                                                                  |
   230000 ++ O  O  O     O  O                           O     O  O  O  O  O  |
          O                       O     O                                    |
   220000 ++                   O            O     O                          |
          |                          O         O     O     O                 |
   210000 ++          O                                                      |
          |                                                                  |
   200000 ++-----------------------------------------------------------------+


	[*] bisect-good sample
	[O] bisect-bad  sample


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Jet


View attachment "reproduce" of type "text/plain" (5956 bytes)
