Date:   Fri, 30 Mar 2018 09:42:17 +0800
From:   Aaron Lu <aaron.lu@...el.com>
To:     Daniel Jordan <daniel.m.jordan@...cle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Huang Ying <ying.huang@...el.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Kemi Wang <kemi.wang@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Michal Hocko <mhocko@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC PATCH v2 0/4] Eliminate zone->lock contention for
 will-it-scale/page_fault1 and parallel free

On Thu, Mar 29, 2018 at 03:19:46PM -0400, Daniel Jordan wrote:
> On 03/20/2018 04:54 AM, Aaron Lu wrote:
> > This series is meant to improve zone->lock scalability for order 0 pages.
> > With will-it-scale/page_fault1 workload, on a 2 sockets Intel Skylake
> > server with 112 CPUs, the CPUs spend 80% of their time spinning on zone->lock.
> > Perf profile shows the most time consuming part under zone->lock is the
> > cache miss on "struct page", so here I'm trying to avoid those cache
> > misses.
> 
> I ran page_fault1 comparing 4.16-rc5 to your recent work: these four patches
> plus the three others from your github branch zone_lock_rfc_v2. Out of
> curiosity I also threw in another 4.16-rc5 with the pcp batch size adjusted
> so high (10922 pages) that we always stay in the pcp lists and out of buddy
> completely.  I used your patch[*] in this last kernel.
> 
> This was on a 2-socket, 20-core broadwell server.
> 
> There were some small regressions a bit outside the noise at low process
> counts (2-5) but I'm not sure they're repeatable.  Anyway, it does improve
> the microbenchmark across the board.

Thanks for the result.

The limited improvement is expected, since the lock contention only shifts
elsewhere rather than going away entirely. So what would be interesting to
see is how it performs with:

v4.16-rc5 + my_zone_lock_patchset + your_lru_lock_patchset
