Message-ID: <alpine.DEB.2.21.1906131300220.27665@chino.kir.corp.google.com>
Date:   Thu, 13 Jun 2019 13:17:49 -0700 (PDT)
From:   David Rientjes <rientjes@...gle.com>
To:     Michal Hocko <mhocko@...nel.org>
cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Zi Yan <zi.yan@...rutgers.edu>,
        Stefan Priebe - Profihost AG <s.priebe@...fihost.ag>,
        "Kirill A. Shutemov" <kirill@...temov.name>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] Revert "mm, thp: restore node-local hugepage
 allocations"

On Fri, 7 Jun 2019, Michal Hocko wrote:

> > So my proposed change would be:
> >  - give the page allocator a consistent indicator that compaction failed
> >    because we are low on memory (make COMPACT_SKIPPED really mean this),
> >  - if we get this in the page allocator and we are allocating thp, fail,
> >    reclaim is unlikely to help here and is much more likely to be
> >    disruptive
> >      - we could retry compaction if we haven't scanned all memory and
> >        were contended,
> >  - if the hugepage allocation fails, have thp check watermarks for order-0 
> >    pages without any padding,
> >  - if watermarks succeed, fail the thp allocation: we can't allocate
> >    because of fragmentation and it's better to return node local memory,
> 
> Doesn't this lead to the same THP low success rate we have seen with one
> of the previous patches though?
> 

From my recollection, the only other patch that was tested involved 
__GFP_NORETRY and avoiding reclaim entirely for thp allocations when 
checking for high-order allocations.

This is in the page allocator:

                /*
                 * Checks for costly allocations with __GFP_NORETRY, which
                 * includes THP page fault allocations
                 */
                if (costly_order && (gfp_mask & __GFP_NORETRY)) {
                        ...
                        if (compact_result == COMPACT_DEFERRED)
                                goto nopage;

Yet there is no handling for COMPACT_SKIPPED (or for what my plan above 
defines COMPACT_SKIPPED to be).  I don't think anything was tried that 
tests why compaction failed, i.e. whether it was because the two scanners 
met, because hugepage-order memory was found to be available, because the 
zone lock was contended or we hit need_resched(), or because we're failing 
even order-0 watermarks.  I don't think the above plan has been tried; if 
someone has tried it, please let me know.
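
For illustration only (this is a sketch of the plan above, not a tested 
patch; the labels are those of the surrounding slowpath, and COMPACT_SKIPPED 
is assumed to be redefined to mean "not enough order-0 memory to compact"), 
handling it in that same branch would look something like:

        if (costly_order && (gfp_mask & __GFP_NORETRY)) {
                ...
                if (compact_result == COMPACT_DEFERRED)
                        goto nopage;

                /*
                 * Proposed: COMPACT_SKIPPED means compaction was skipped
                 * because the zone is low on order-0 memory.  Reclaim is
                 * unlikely to produce a hugepage and is much more likely
                 * to be disruptive, so fail the THP allocation instead.
                 */
                if (compact_result == COMPACT_SKIPPED)
                        goto nopage;

                /*
                 * (Retrying compaction when it bailed out early because
                 * the zone lock was contended or we hit need_resched(),
                 * without having scanned all memory, is omitted here.)
                 */
                ...
        }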

I haven't seen any objection to disabling reclaim entirely when order-0 
watermarks are failing in compaction.  We simply can't guarantee that 
reclaim is useful work with the current implementation of compaction.  I've 
enumerated several reasons why compaction can still fail even after 
successful reclaim.
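
The order-0 check itself is nothing more than the bare low watermark, 
without the extra gap that compaction normally requires on top of it.  A 
sketch, with a hypothetical helper name (zone_watermark_ok() and 
low_wmark_pages() are the existing kernel primitives):

        /*
         * Hypothetical helper: does this zone have enough order-0 memory
         * for compaction to be useful work, with no padding added on top
         * of the low watermark?  If not, compaction should report
         * COMPACT_SKIPPED so the allocator can fail the THP allocation
         * rather than reclaiming.
         */
        static bool zone_has_order0_memory(struct zone *zone, int classzone_idx)
        {
                return zone_watermark_ok(zone, 0, low_wmark_pages(zone),
                                         classzone_idx, 0);
        }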

The point is that removing __GFP_THISNODE is not a fix for this if the 
remote memory is fragmented as well: it assumes that hugepages are 
available remotely when they aren't available locally; otherwise we see 
swap storms both locally and remotely.  Relying on that is not in the best 
interest of any user of transparent hugepages.

> Let me remind you of the previous semantic I was proposing
> http://lkml.kernel.org/r/20181206091405.GD1286@dhcp22.suse.cz and that
> didn't get shot down. Linus had some follow up ideas on how exactly
> the fallback order should look like and that is fine. We should just
> measure differences between local node cheap base page vs. remote THP on
> _real_ workloads. Any microbenchmark which just measures a latency is
> inherently misleading.
> 

I think both seek to add the possibility of allocating hugepages remotely 
in certain circumstances, and that can be influenced by MADV_HUGEPAGE.  I 
don't think we need to try hugepage-specific mempolicies unless that is 
shown to be absolutely necessary, although a usecase for them could be made 
separately from this discussion.
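
For reference, MADV_HUGEPAGE is already a per-mapping hint, so no new 
mempolicy plumbing is needed for an application to express the preference; 
a minimal userspace example (names here are mine, not from this thread):

        #define _DEFAULT_SOURCE
        #include <stddef.h>
        #include <sys/mman.h>

        /* Map an anonymous region and hint that THP should back it. */
        static void *map_with_thp_hint(size_t len)
        {
                void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                        return NULL;
                /* Advisory only: the kernel may still use base pages. */
                madvise(p, len, MADV_HUGEPAGE);
                return p;
        }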

There's a benefit to faulting remote hugepages over remote pages for 
everybody involved.  My argument is that we can determine the need for 
that based on failed order-0 watermark checks in compaction: if the node 
would require reclaim to even fault a page, it is likely better over the 
long term to fault a remote hugepage.
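
Roughly, the fallback order at THP fault time would be the following (a 
sketch only; node_needs_reclaim() stands in for the failed order-0 
watermark check above, and the gfp details are hand-waved):

        struct page *thp_fault_alloc(gfp_t gfp, unsigned int order)
        {
                struct page *page;

                /* First preference: a hugepage on the local node, no reclaim. */
                page = alloc_pages(gfp | __GFP_THISNODE | __GFP_NORETRY, order);
                if (page)
                        return page;

                /*
                 * Local compaction failed.  If the local node can still
                 * satisfy order-0 allocations without reclaim, the failure
                 * was due to fragmentation: return NULL so the caller falls
                 * back to local base pages.
                 */
                if (!node_needs_reclaim(numa_node_id()))
                        return NULL;

                /*
                 * The local node is low on memory; a remote hugepage is
                 * likely better over the long term than a local swap storm.
                 */
                return alloc_pages(gfp, order);
        }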

I think this can be made to work and is not even difficult to do.  If 
anybody has any objection please let me know.
