Date:   Mon, 10 Sep 2018 10:39:25 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Rik van Riel <riel@...riel.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 2/6] mm/migrate: Use trylock while resetting rate limit


* Srikar Dronamraju <srikar@...ux.vnet.ibm.com> wrote:

> Since this spinlock only serializes migration rate limiting,
> convert it to a trylock. If another task races ahead of this task,
> this task can simply move on.
> 
> While at it, also correct two abnormalities:
> - Avoid the rate-limit window being stretched every time it is reset.
> - Use READ_ONCE()/WRITE_ONCE() when accessing the next window.
> 
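For reference, the pattern being proposed (a trylock-guarded rate-limit window that is advanced from the previous window edge and accessed with READ_ONCE()/WRITE_ONCE()) looks roughly like the self-contained userspace sketch below. The names (rl_lock, rl_next_window, rl_init, WINDOW_MS) are illustrative placeholders rather than the mm/migrate.c identifiers, and C11 relaxed atomics stand in for the kernel's READ_ONCE()/WRITE_ONCE():

#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

#define WINDOW_MS 100	/* hypothetical rate-limit interval */

static pthread_mutex_t rl_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic long rl_next_window;	/* end of current window, in ms */

static long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

/* Call once at startup to seed the first window. */
static void rl_init(void)
{
	atomic_store_explicit(&rl_next_window, now_ms() + WINDOW_MS,
			      memory_order_relaxed);
}

/*
 * Reset the window if it has expired.  If another task already holds
 * the lock, it is performing the same reset, so losing the race is
 * harmless and we can simply move on -- the point of the trylock.
 */
static void maybe_reset_window(void)
{
	long next = atomic_load_explicit(&rl_next_window,
					 memory_order_relaxed);

	if (now_ms() <= next)
		return;
	if (pthread_mutex_trylock(&rl_lock) != 0)
		return;		/* raced with another task: move on */

	/*
	 * Advance from the previous window edge rather than from "now",
	 * so a late reset does not stretch every subsequent interval.
	 */
	while (now_ms() > next)
		next += WINDOW_MS;
	atomic_store_explicit(&rl_next_window, next, memory_order_relaxed);

	pthread_mutex_unlock(&rl_lock);
}

If two tasks both see an expired window, the loser of the trylock skips the reset entirely, and even a late second reset recomputes the same window edge, so the race is benign.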
> specjbb2005 / bops/JVM / higher bops are better
> on 2 Socket/2 Node Intel
> JVMS  Prev    Current  %Change
> 4     206350  200892   -2.64502
> 1     319963  325766   1.81365
> 
> 
> on 2 Socket/2 Node Power9 (PowerNV)
> JVMS  Prev    Current  %Change
> 4     186539  190261   1.99529
> 1     220344  195305   -11.3636
> 
> 
> on 4 Socket/4 Node Power7
> JVMS  Prev    Current  %Change
> 8     56836   57651.1  1.43413
> 1     112970  111351   -1.43312
> 
> 
> dbench / transactions / higher numbers are better
> on 2 Socket/2 Node Intel
> Kernel   count  Min      Max      Avg      Variance  %Change
> Prev     5      13136.1  13170.2  13150.2  14.7482
> Current  5      12254.7  12331.9  12297.8  28.1846   -6.48203
> 
> 
> on 2 Socket/4 Node Power8 (PowerNV)
> Kernel   count  Min      Max      Avg      Variance  %Change
> Prev     5      4319.79  4998.19  4836.53  261.109
> Current  5      4997.83  5030.14  5015.54  12.947    3.70121
> 
> 
> on 2 Socket/2 Node Power9 (PowerNV)
> Kernel   count  Min      Max      Avg      Variance  %Change
> Prev     5      9325.56  9402.7   9362.49  25.9638
> Current  5      9331.84  9375.11  9352.04  16.0703   -0.111616
> 
> 
> on 4 Socket/4 Node Power7
> Kernel   count  Min      Max      Avg      Variance  %Change
> Prev     5      132.581  191.072  170.554  21.6444
> Current  5      147.55   181.605  168.963  11.3513   -0.932842

Firstly, *please* always characterize benchmark runs. What did you find? How should we 
interpret the results? Are there any tradeoffs?

*Don't* just dump them on us.

Because in this particular case the results are not obvious, at all:

> specjbb2005 / bops/JVM / higher bops are better
> on 2 Socket/2 Node Intel
> JVMS  Prev    Current  %Change
> 4     206350  200892   -2.64502
> 1     319963  325766   1.81365
> 
> 
> on 2 Socket/2 Node Power9 (PowerNV)
> JVMS  Prev    Current  %Change
> 4     186539  190261   1.99529
> 1     220344  195305   -11.3636
> 
> 
> on 4 Socket/4 Node Power7
> JVMS  Prev    Current  %Change
> 8     56836   57651.1  1.43413
> 1     112970  111351   -1.43312

Why is this better? The largest drop is 11%, which seems significant.

Thanks,

	Ingo
