Message-ID: <20180124181921.vnivr32q72ey7p5i@techsingularity.net>
Date:   Wed, 24 Jan 2018 18:19:21 +0000
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Dave Hansen <dave.hansen@...el.com>
Cc:     Aaron Lu <aaron.lu@...el.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Huang Ying <ying.huang@...el.com>,
        Kemi Wang <kemi.wang@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Michal Hocko <mhocko@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH 2/2] free_pcppages_bulk: prefetch buddy while not holding
 lock

On Wed, Jan 24, 2018 at 08:57:43AM -0800, Dave Hansen wrote:
> On 01/24/2018 08:43 AM, Mel Gorman wrote:
> > I'm less convinced by this for a microbenchmark. Prefetch has not been a
> > universal win in the past and we cannot be sure that it's a good idea on
> > all architectures or doesn't have other side-effects such as consuming
> > memory bandwidth for data we don't need or evicting cache hot data for
> > buddy information that is not used.
> 
> I had the same reaction.
> 
> But, I think this case is special.  We *always* do buddy merging (well,
> before the next patch in the series is applied) and check an order-0
> page's buddy to try to merge it when it goes into the main allocator.
> So, the cacheline will always come in.
> 
> IOW, I don't think this has the same downsides normally associated with
> prefetch() since the data is always used.

That doesn't side-step the fact that the calculations are done twice in
the free_pcppages_bulk path, and there is no guarantee that one prefetch
in the list of pages being freed will not evict an earlier prefetch due
to cache collisions. At least on the machine I'm writing this from, the
prefetches needed for a standard drain cover 1/16th of the L1D cache, so
some collisions/evictions are possible. We're doing definite work in one
path on the chance the data will still be cache-resident when it's
recalculated. I suspect that only a microbenchmark doing very large
numbers of frees (or a large munmap or exit) will notice, and the costs
of a large munmap/exit are so high that the prefetch will yield
negligible savings.
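
The change under discussion is, roughly, a helper along the following
lines (a simplified sketch assuming the upstream helpers page_to_pfn(),
__find_buddy_pfn() and prefetch(); the helper name and exact form are
illustrative, not necessarily the posted patch). It would be called from
free_pcppages_bulk() while pages are pulled off the per-cpu lists, before
the zone lock is taken; __free_one_page() later repeats the same buddy
calculation under the lock, which is the duplicated work referred to
above:

/*
 * Sketch only; assumes kernel context (mm/internal.h for
 * __find_buddy_pfn(), linux/prefetch.h for prefetch()).  Computes the
 * order-0 buddy of 'page' and prefetches its struct page so it is
 * hopefully cache-resident when __free_one_page() recomputes the buddy
 * and merges under the zone lock.
 */
static inline void prefetch_buddy(struct page *page)
{
	unsigned long pfn = page_to_pfn(page);
	unsigned long buddy_pfn = __find_buddy_pfn(pfn, 0);
	struct page *buddy = page + (buddy_pfn - pfn);

	prefetch(buddy);
}

Whether the prefetched line is still resident by the time of that second
calculation depends on how many pages are drained per batch relative to
L1D capacity, which is the collision/eviction concern raised above.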

-- 
Mel Gorman
SUSE Labs
