Date:	Wed,  9 Mar 2016 17:55:34 -0800
From:	Davidlohr Bueso <dbueso@...e.de>
To:	mingo@...nel.org
Cc:	peterz@...radead.org, dave@...olabs.net,
	linux-kernel@...r.kernel.org
Subject: [PATCH -tip 0/2] kernel/smp: Small csd_lock optimizations

From: Davidlohr Bueso <dave@...olabs.net>

Hi,

Justifications are in each patch; there is a slight impact (patch 2)
on some TLB-flushing-intensive benchmarks (albeit ones using IPI
batching nowadays). Specifically, for the pft (page fault test)
benchmark on a 12-core box:

pft faults
                              4.4                         4.4
                          vanilla                         smp
Hmean    faults/cpu-1   801432.1608 (  0.00%)  795719.8859 ( -0.71%)
Hmean    faults/cpu-3   702578.6659 (  0.00%)  752796.6960 (  7.15%)
Hmean    faults/cpu-5   606080.3473 (  0.00%)  595890.0451 ( -1.68%)
Hmean    faults/cpu-7   460369.0724 (  0.00%)  485283.6343 (  5.41%)
Hmean    faults/cpu-12  294445.4701 (  0.00%)  298300.6011 (  1.31%)
Hmean    faults/cpu-18  213156.0860 (  0.00%)  213584.2741 (  0.20%)
Hmean    faults/cpu-24  153104.2995 (  0.00%)  153198.8473 (  0.06%)
Hmean    faults/sec-1   796329.3184 (  0.00%)  614222.4594 (-22.87%)
Hmean    faults/sec-3  1947806.7372 (  0.00%) 2169267.1582 ( 11.37%)
Hmean    faults/sec-5  2611152.0422 (  0.00%) 2544652.6871 ( -2.55%)
Hmean    faults/sec-7  2493705.4668 (  0.00%) 2674847.5270 (  7.26%)
Hmean    faults/sec-12 2583139.7724 (  0.00%) 2614404.6002 (  1.21%)
Hmean    faults/sec-18 2661410.8170 (  0.00%) 2683427.0703 (  0.83%)
Hmean    faults/sec-24 2670463.4814 (  0.00%) 2666221.6332 ( -0.16%)
Stddev   faults/cpu-1    27537.6676 (  0.00%)   25753.4945 (  6.48%)
Stddev   faults/cpu-3    62616.8041 (  0.00%)   44728.0990 ( 28.57%)
Stddev   faults/cpu-5    70976.9184 (  0.00%)   74720.5716 ( -5.27%)
Stddev   faults/cpu-7    47426.5952 (  0.00%)   32758.2705 ( 30.93%)
Stddev   faults/cpu-12    6951.8792 (  0.00%)    9097.0782 (-30.86%)
Stddev   faults/cpu-18    4293.1696 (  0.00%)    5826.9446 (-35.73%)
Stddev   faults/cpu-24    3195.0939 (  0.00%)    3373.7230 ( -5.59%)
Stddev   faults/sec-1    27315.3093 (  0.00%)  148601.7795 (-444.02%)
Stddev   faults/sec-3   271560.5941 (  0.00%)  193681.0177 ( 28.68%)
Stddev   faults/sec-5   429633.7378 (  0.00%)  458426.3306 ( -6.70%)
Stddev   faults/sec-7   338229.0746 (  0.00%)  226146.3450 ( 33.14%)
Stddev   faults/sec-12   57766.4604 (  0.00%)   82734.3638 (-43.22%)
Stddev   faults/sec-18  118572.1909 (  0.00%)  134966.7210 (-13.83%)
Stddev   faults/sec-24   57452.7350 (  0.00%)   57542.7755 ( -0.16%)

                 4.4         4.4
             vanilla         smp
User           11.91       11.85
System        197.11      194.69
Elapsed        44.24       40.26

While the single-thread result is an anomaly, overall we don't seem
to do any harm (within the noise range). It could go either way, but
overall the patches at least make some sense afaict.

Thanks!

Davidlohr Bueso (2):
  kernel/smp: Explicitly inline csd_lock helpers
  kernel/smp: Make csd_lock_wait be smp_cond_acquire

 kernel/smp.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
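
For context, here is the gist of the two changes, sketched from memory
against the 4.4-era kernel/smp.c (the exact diffs are in the individual
patches). smp_cond_acquire() is the -tip primitive that spins with
cpu_relax() until the condition holds and then issues an smp_rmb(),
upgrading the control dependency into acquire semantics:

/* Sketch only -- see the individual patches for the real diffs. */

/* Patch 1: tag the tiny csd_lock helpers __always_inline so the
 * compiler cannot leave them out of line on the IPI fast path: */
static __always_inline void csd_unlock(struct call_single_data *csd)
{
	WARN_ON(!(csd->flags & CSD_FLAG_LOCK));

	/* ensure we're all done before releasing data: */
	smp_store_release(&csd->flags, 0);
}

/* Patch 2: csd_lock_wait() before, an open-coded acquire spin ... */
static void csd_lock_wait(struct call_single_data *csd)
{
	while (smp_load_acquire(&csd->flags) & CSD_FLAG_LOCK)
		cpu_relax();
}

/* ... and after, letting smp_cond_acquire() provide the ordering: */
static __always_inline void csd_lock_wait(struct call_single_data *csd)
{
	smp_cond_acquire(!(csd->flags & CSD_FLAG_LOCK));
}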

--
2.1.4
