Message-ID: <alpine.DEB.2.21.1911061330030.155572@chino.kir.corp.google.com>
Date: Wed, 6 Nov 2019 13:32:37 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Michal Hocko <mhocko@...nel.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [patch for-5.3 0/4] revert immediate fallback to remote
hugepages
On Wed, 6 Nov 2019, Michal Hocko wrote:
> > I don't see any
> > indication that this allocation would behave any different than the code
> > that Andrea experienced swap storms with, but now worse if remote memory
> > is in the same state local memory is when he's using __GFP_THISNODE.
>
> The primary reason for the extensive swapping was exactly the __GFP_THISNODE
> in conjunction with an unbounded direct reclaim AFAIR.
>
> The whole point of Vlastimil's patch is to have an optimistic local
> node allocation first and the full gfp context one in the fallback path.
> If our full gfp context doesn't really work well then we can revisit
> that of course but that should happen at alloc_hugepage_direct_gfpmask
> level.
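
(For reference, the two-pass scheme described above looks roughly like the
sketch below in the THP allocation path: an optimistic local-node attempt
with __GFP_THISNODE | __GFP_NORETRY, then the full gfp context computed by
alloc_hugepage_direct_gfpmask() on fallback.  This is a simplified
illustration with a made-up helper name, not the exact hunks from
Vlastimil's patch.)

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical helper, for illustration only. */
static struct page *thp_alloc_sketch(gfp_t gfp, unsigned int order,
				     int hpage_node)
{
	struct page *page;

	/*
	 * Optimistic first pass: local node only, and bail out early
	 * rather than reclaiming hard if compaction cannot help.
	 */
	page = __alloc_pages_node(hpage_node,
				  gfp | __GFP_THISNODE | __GFP_NORETRY,
				  order);
	if (page)
		return page;

	/*
	 * Fallback pass: the full gfp context (as computed by
	 * alloc_hugepage_direct_gfpmask()), which no longer pins the
	 * allocation to the local node and so may go remote.
	 */
	return __alloc_pages_node(hpage_node, gfp, order);
}
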
Since the patch reverts the precaution put into the page allocator to not
attempt reclaim if the allocation order is significantly large and the
return value from compaction indicates it is unlikely to succeed on its
own, I believe Vlastimil's patch will cause the same regression that
Andrea saw if the whole host is low on memory and/or significantly
fragmented.  So the suggestion was that he test this change to make sure
we aren't introducing a regression for his workload.
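
The precaution in question is, roughly, the costly-order check in
__alloc_pages_slowpath(); a simplified sketch (local variable names
approximate, not the verbatim mainline code):

	/*
	 * Sketch of the check the series reverts: for a costly-order
	 * allocation with __GFP_NORETRY (the THP case), fail the
	 * allocation instead of entering direct reclaim when the
	 * initial compaction attempt reports it was skipped or
	 * deferred, i.e. unlikely to succeed on its own.
	 */
	if (costly_order && (gfp_mask & __GFP_NORETRY)) {
		if (compact_result == COMPACT_SKIPPED ||
		    compact_result == COMPACT_DEFERRED)
			goto nopage;
	}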