Date:   Thu, 16 Mar 2017 14:34:22 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Mel Gorman <mgorman@...hsingularity.net>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        David Rientjes <rientjes@...gle.com>, kernel-team@...com
Subject: Re: [PATCH v3 0/8] try to reduce fragmenting fallbacks

On Wed, Mar 08, 2017 at 08:17:39PM +0100, Vlastimil Babka wrote:
> On 8.3.2017 17:46, Johannes Weiner wrote:
> > Is there any other data you would like me to gather?
> 
> If you can enable the extfrag tracepoint, it would be nice to have graphs of
> how unmovable allocations fall back to movable pageblocks, etc.

Okay, here we go. I recorded 24 hours' worth of extfrag tracepoint data,
filtered to fallbacks from unmovable requests to movable blocks. I've
uploaded the plot here:

http://cmpxchg.org/antifrag/fallbackrate.png

but this already speaks for itself:

11G     alloc-mtfallback.trace
3.3G    alloc-mtfallback-patched.trace

;)
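
(In case anyone wants to reproduce this: something along these lines should
capture the same data with plain ftrace. The migratetype values in the filter
are an assumption based on 4.10's enum ordering, so double-check
include/linux/mmzone.h, and the output file name is just a placeholder.)

  cd /sys/kernel/debug/tracing
  # enable only the external-fragmentation fallback tracepoint
  echo 1 > events/kmem/mm_page_alloc_extfrag/enable
  # keep only unmovable requests falling back into movable pageblocks
  # (assumes MIGRATE_UNMOVABLE == 0 and MIGRATE_MOVABLE == 1)
  echo 'alloc_migratetype == 0 && fallback_migratetype == 1' \
      > events/kmem/mm_page_alloc_extfrag/filter
  # stream events out for later plotting
  cat trace_pipe > alloc-mtfallback.trace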

> Possibly also /proc/pagetypeinfo for numbers of pageblock types.

After a week of uptime, the patched (b) kernel has more movable blocks
than vanilla 4.10-rc8 (a):

   Number of blocks type     Unmovable      Movable  Reclaimable   HighAtomic          CMA      Isolate

a: Node 1, zone   Normal         2017        29763          987            1            0            0
b: Node 1, zone   Normal         1264        30850          653            1            0            0

I sampled this somewhat sporadically over the week, and the numbers have
consistently looked like this.
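
(If it helps, making the sampling regular is trivial; a rough sketch, with the
log path just a placeholder:)

  # append a timestamped snapshot of the pageblock counts every hour
  while :; do
          date                   >> pagetypeinfo.log
          cat /proc/pagetypeinfo >> pagetypeinfo.log
          sleep 3600
  done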

The patched kernel also consistently beats vanilla in terms of peak
job throughput.

Overall very cool!
