Date:	Tue, 25 Feb 2014 11:26:53 -0800
From:	Jason Low <jason.low2@...com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, Waiman Long <waiman.long@...com>,
	mingo@...nel.org, paulmck@...ux.vnet.ibm.com,
	torvalds@...ux-foundation.org, tglx@...utronix.de, riel@...hat.com,
	davidlohr@...com, hpa@...or.com, andi@...stfloor.org, aswin@...com,
	scott.norton@...com, chegu_vinod@...com
Subject: Re: [PATCH 0/8] locking/core patches

On Mon, 2014-02-10 at 15:02 -0800, Andrew Morton wrote:
> On Mon, 10 Feb 2014 20:58:20 +0100 Peter Zijlstra <peterz@...radead.org> wrote:
> 
> > Hi all,
> > 
> > I would propose merging the following patches...
> > 
> > The first set is mostly from Jason and tweaks the mutex adaptive
> > spinning, AIM7 throughput numbers:
> > 
> > PRE:  100   2000.04  21564.90 2721.29 311.99     3.12       0.01     0.00     99
> > POST: 100   2000.04  42603.85 5142.80 311.99     3.12       0.00     0.00     99
> 
> What do these columns represent?  I'm guessing the large improvement
> was in context switches?

Hello,

I also re-tested mutex patches 1-6 on my 2-socket and 8-socket machines
with the high_systime and fserver AIM7 workloads (run on disk). These
workloads generate contention on the
&EXT4_SB(inode->i_sb)->s_orphan_lock mutex. Below are the % improvements
in throughput with the patches applied, on a recent tip kernel. The main
benefits were on the larger box and at higher user counts.
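
For reference, the contended pattern is a single per-superblock mutex
serializing every orphan-list update. A heavily abbreviated sketch in
the shape of the 3.x-era fs/ext4/namei.c code (details elided; not the
exact source):

	int ext4_orphan_add(handle_t *handle, struct inode *inode)
	{
		struct super_block *sb = inode->i_sb;

		mutex_lock(&EXT4_SB(sb)->s_orphan_lock);
		/* ... journal the on-disk orphan chain update ... */
		list_add(&EXT4_I(inode)->i_orphan, &EXT4_SB(sb)->s_orphan);
		mutex_unlock(&EXT4_SB(sb)->s_orphan_lock);
		return 0;
	}

Every path that needs orphan protection (unlink of open files,
truncate, etc.) funnels through this one mutex, which is what makes
these workloads a good stress test for the mutex changes.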

Note: the -0.7% drop in fserver throughput at 10-90 users on the
2-socket machine was mainly due to "[PATCH 6/8] mutex: Extra reschedule
point". Without patch 6, there was almost no difference in throughput
between the baseline kernel and the kernel with patches 1-5.
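
To make that concrete: patches 1-5 tune the adaptive (optimistic) spin
in the mutex slowpath, where a waiter spins as long as the lock owner
is running on a CPU, and patch 6 adds a reschedule point once spinning
fails. A heavily simplified sketch of that shape (helper names here are
placeholders, not the actual kernel symbols):

	/* adaptive/optimistic spin, abbreviated from kernel/locking/mutex.c */
	while (owner_is_running(lock)) {
		if (try_acquire(lock))
			return;		/* took the lock without sleeping */
		if (need_resched())
			break;		/* stop spinning, fall to slowpath */
		cpu_relax();
	}

	/*
	 * Patch 6/8 "mutex: Extra reschedule point": if we stopped
	 * spinning because of need_resched(), reschedule now rather
	 * than right after acquiring the mutex.
	 */
	if (need_resched())
		schedule_preempt_disabled();

That extra schedule is what shows up as the small fserver regression at
low user counts noted above.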


8-socket machine:

--------------------------
         fserver
--------------------------
users     | % improvement
          | in throughput
          | with patches
--------------------------
1000-2000 |  +29.2%
--------------------------
100-900   |  +10.0%
--------------------------
10-90     |   +0.4%


--------------------------
       high_systime
--------------------------
users     | % improvement
          | in throughput
          | with patches
--------------------------
1000-2000 |  +34.9%
--------------------------
100-900   |  +49.2%
--------------------------
10-90     |   +3.1%



2-socket machine:

--------------------------
         fserver   
--------------------------
users     | % improvement
          | in throughput
          | with patches
--------------------------
1000-2000 |   +1.8%
--------------------------
100-900   |   +0.0%
--------------------------
10-90     |   -0.7%


--------------------------
       high_systime
--------------------------
users     | % improvement
          | in throughput
          | with patches
--------------------------
1000-2000 |   +0.8%
--------------------------
100-900   |   +0.4%
--------------------------
10-90     |   +0.0%



