Message-ID: <1dfd4b33-6eff-160e-52fd-994d9bcbffed@oracle.com>
Date: Wed, 21 Mar 2018 13:44:25 -0400
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: Aaron Lu <aaron.lu@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Huang Ying <ying.huang@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Kemi Wang <kemi.wang@...el.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Andi Kleen <ak@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC PATCH v2 0/4] Eliminate zone->lock contention for
will-it-scale/page_fault1 and parallel free
On 03/20/2018 04:54 AM, Aaron Lu wrote:
...snip...
> reduced zone->lock contention on the free path from 35% to 1.1%. It
> also shows good results on the parallel free(*) workload, reducing
> zone->lock contention from 90% to almost zero (lru_lock contention
> increased from almost 0 to 90%, though).
Hi Aaron, I'm looking through your series now. Just wanted to mention that I'm seeing the same interaction between zone->lock and lru_lock in my own testing. IOW, it's not enough to fix just one or the other: both need attention to get good performance on a big system, at least in this microbenchmark we've both been using.
There's anti-scaling at high core counts, where overall system page faults per second actually decrease as more CPUs are added to the test. This still happens when only one of zone->lock or lru_lock has its contention completely removed, but the anti-scaling goes away once both locks are fixed.
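In case it helps frame the discussion, here's a minimal sketch of the kind of workload we're both hammering on. It's modeled loosely on will-it-scale/page_fault1; the real harness is more elaborate, and MAP_SIZE, the thread setup, and the iterations counter below are just illustrative. Each thread maps anonymous memory, touches every page to fault it in (taking zone->lock on allocation and lru_lock when the pages go onto the LRU), then unmaps (taking zone->lock again on the free path):

/* Build with: gcc -O2 -pthread fault_bench.c -o fault_bench */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE (128UL * 1024 * 1024)	/* illustrative per-thread size */

static volatile unsigned long iterations;

static void *worker(void *arg)
{
	long page_size = sysconf(_SC_PAGESIZE);

	for (;;) {
		char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}
		/* Write one byte per page: one minor fault per page. */
		for (unsigned long off = 0; off < MAP_SIZE; off += page_size)
			p[off] = 1;
		/* Unmap, freeing all pages back through zone->lock. */
		munmap(p, MAP_SIZE);
		__sync_fetch_and_add(&iterations, 1);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	int nthreads = argc > 1 ? atoi(argv[1]) : 1;
	pthread_t tid;

	for (int i = 0; i < nthreads; i++)
		pthread_create(&tid, NULL, worker, NULL);

	/* Crude throughput proxy: completed map/touch/unmap loops per second. */
	for (;;) {
		unsigned long before = iterations;
		sleep(1);
		printf("%lu iterations/sec\n", iterations - before);
	}
	return 0;
}

Running this with an increasing thread count is where the anti-scaling shows up: past some core count, iterations/sec (and faults/sec) drops unless both locks are addressed.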
Anyway, I'll post some actual data on this stuff soon.