Message-ID: <20191106073521.GC8314@dhcp22.suse.cz>
Date:   Wed, 6 Nov 2019 08:35:21 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     David Rientjes <rientjes@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Mel Gorman <mgorman@...e.de>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>
Subject: Re: [patch for-5.3 0/4] revert immediate fallback to remote hugepages

On Tue 05-11-19 17:01:00, David Rientjes wrote:
> On Tue, 5 Nov 2019, Michal Hocko wrote:
> 
> > > > Thanks, I'll queue this for some more testing.  At some point we should
> > > > decide on a suitable set of Fixes: tags and a backporting strategy, if any?
> > > > 
> > > 
> > > I'd strongly suggest that Andrea test this patch out on his workload on
> > > hosts where all nodes are low on memory, because based on my understanding
> > > of his reported issue this would result in swap storms reemerging, but
> > > worse this time because they wouldn't be constrained only locally.  (This
> > > patch causes us to no longer circumvent excessive reclaim when using
> > > MADV_HUGEPAGE.)
> > 
> > Could you be more specific on why this would be the case? My testing
> > doesn't show any such signs and I am effectively testing a memory-low
> > situation. The amount of reclaimed memory matches the amount of
> > requested memory.
> > 
> 
> The follow-up allocation in alloc_pages_vma() would no longer use 
> __GFP_NORETRY and there is no special handling to avoid swap storms in the 
> page allocator anymore as a result of this patch.

Yes, there is no __GFP_NORETRY in the fallback path, because how hard to
retry is controlled by alloc_hugepage_direct_gfpmask depending on the
defrag mode and the madvise mode.
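
For illustration, a simplified sketch of that selection (from memory, not
the exact current source, so details may differ):

	/* simplified: how the THP gfp mask depends on defrag mode and madvise */
	static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
	{
		const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);

		/* defrag=always: synchronous compaction; retry hard only if madvised */
		if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
			     &transparent_hugepage_flags))
			return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);

		/* defrag=defer: kick kcompactd and fail quickly */
		if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG,
			     &transparent_hugepage_flags))
			return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;

		/* defrag=defer+madvise: direct reclaim only for madvised vmas */
		if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG,
			     &transparent_hugepage_flags))
			return GFP_TRANSHUGE_LIGHT | (vma_madvised ?
					__GFP_DIRECT_RECLAIM : __GFP_KSWAPD_RECLAIM);

		/* defrag=madvise: direct reclaim only for madvised vmas */
		if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
			     &transparent_hugepage_flags))
			return GFP_TRANSHUGE_LIGHT |
			       (vma_madvised ? __GFP_DIRECT_RECLAIM : 0);

		/* defrag=never */
		return GFP_TRANSHUGE_LIGHT;
	}

So only the madvised (or defrag=always) cases get full direct
reclaim/compaction; everything else either fails fast or defers to
kcompactd.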

> I don't see any
> indication that this allocation would behave any differently from the code
> that Andrea experienced swap storms with, but now worse if remote memory
> is in the same state local memory is in when he's using __GFP_THISNODE.

The primary reason for the extensive swapping was exactly __GFP_THISNODE
in conjunction with unbounded direct reclaim, AFAIR.

The whole point of Vlastimil's patch is to have an optimistic local node
allocation first and a full-gfp-context allocation in the fallback path.
If our full gfp context doesn't really work well, then we can of course
revisit that, but that should happen at the alloc_hugepage_direct_gfpmask
level.
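
To make the intended split concrete, the THP path in alloc_pages_vma()
after the patch looks roughly like this (a simplified sketch, not the
exact diff):

	/* simplified sketch of the THP policy path in alloc_pages_vma() */
	if (hpage_node && (gfp & __GFP_DIRECT_RECLAIM)) {
		/*
		 * Optimistic first attempt: local node only and no hard
		 * retries, so a low/fragmented local node fails fast
		 * instead of triggering heavy reclaim there.
		 */
		page = __alloc_pages_node(hpage_node,
				gfp | __GFP_THISNODE | __GFP_NORETRY, order);

		/*
		 * Fallback with the full gfp context computed by
		 * alloc_hugepage_direct_gfpmask(): no __GFP_THISNODE, so
		 * remote nodes are allowed, and how hard we reclaim/compact
		 * follows the defrag/madvise policy rather than being
		 * unbounded.
		 */
		if (!page)
			page = __alloc_pages_node(hpage_node, gfp, order);

		goto out;
	}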
-- 
Michal Hocko
SUSE Labs
