Message-ID: <57fd532f-8fb7-33c4-914a-fb816db47ea9@intel.com>
Date:   Mon, 5 Feb 2018 14:17:18 -0800
From:   Dave Hansen <dave.hansen@...el.com>
To:     Aaron Lu <aaron.lu@...el.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Huang Ying <ying.huang@...el.com>,
        Kemi Wang <kemi.wang@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Michal Hocko <mhocko@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Daniel Jordan <daniel.m.jordan@...cle.com>
Subject: Re: [RFC PATCH 1/2] __free_one_page: skip merge for order-0 page
 unless compaction is in progress

On 02/04/2018 09:31 PM, Aaron Lu wrote:
> Running the will-it-scale/page_fault1 workload in process mode on a
> two-socket Intel Skylake server showed severe contention on
> zone->lock: as much as about 80% of CPU cycles (43% on the allocation
> path and 38% on the free path) were burnt spinning. According to
> perf, the most time-consuming part inside that lock on the free path
> is cache misses on page structures, mostly on the to-be-freed page's
> buddy due to merging.
> 
> One way to avoid this overhead is to skip merging entirely for
> order-0 pages and leave the creation of high-order pages to
> compaction.

I think the RFC here is: we *know* this hurts high-order allocations,
and Aaron demonstrated that it does make their latency worse.  But,
unexpectedly, it didn't totally crater them.

So, is the harm to large allocations worth the performance benefit
afforded to smaller ones by this patch?  How would we make a decision on
something like that?

If nothing else, this would make a nice companion topic to Daniel
Jordan's "lru_lock scalability" proposal for LSF/MM.
