Message-ID: <alpine.DEB.2.21.1910021556270.187014@chino.kir.corp.google.com>
Date: Wed, 2 Oct 2019 16:03:03 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Mike Kravetz <mike.kravetz@...cle.com>,
Michal Hocko <mhocko@...nel.org>
cc: Vlastimil Babka <vbabka@...e.cz>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: [rfc] mm, hugetlb: allow hugepage allocations to excessively
reclaim
Hugetlb allocations use __GFP_RETRY_MAYFAIL to aggressively attempt to get
hugepages that the user needs.  Commit b39d0ee2632d ("mm, page_alloc:
avoid expensive reclaim when compaction may not succeed") intends to
improve allocator behavior for thp allocations to prevent excessive
amounts of reclaim, especially when constrained to a single node.
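For context, hugetlb opts in to this retry behavior explicitly; a rough
sketch of the relevant gfp mask setup in mm/hugetlb.c (abbreviated from
alloc_buddy_huge_page(), with the per-node noretry bookkeeping trimmed
for brevity):

	static struct page *alloc_buddy_huge_page(struct hstate *h,
			gfp_t gfp_mask, int nid, nodemask_t *nmask,
			nodemask_t *node_alloc_noretry)
	{
		int order = huge_page_order(h);
		struct page *page;

		/* try hard: keep retrying reclaim/compaction */
		gfp_mask |= __GFP_COMP|__GFP_RETRY_MAYFAIL|__GFP_NOWARN;
		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();
		page = __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
		if (page)
			__count_vm_event(HTLB_BUDDY_PGALLOC);
		else
			__count_vm_event(HTLB_BUDDY_PGALLOC_FAIL);
		return page;
	}

__GFP_RETRY_MAYFAIL is thus the signal the hunk below keys off of to
preserve the old behavior for hugetlb.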
Since hugetlb allocations have explicitly preferred to loop and do reclaim
and compaction, exempt them from this new behavior at least for the time
being.  It has not been shown that hugetlb allocation success rates have
been impacted by commit b39d0ee2632d, but hugetlb allocations are
admittedly beyond the scope of what that patch is intended to address
(thp allocations).
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Signed-off-by: David Rientjes <rientjes@...gle.com>
---
Mike, you alluded to possibly wanting to opt hugetlbfs out of this for
the time being in https://marc.info/?l=linux-kernel&m=156771690024533 --
I am not sure whether you want to allow this excessive amount of reclaim
for hugetlb allocations or not, given the swap storms Andrea has shown
are possible (and nr_hugepages_mempolicy does exist), but hugetlbfs was
not part of the problem we are trying to address here, so no objection
to opting it out.
You might want to consider how expensive and disruptive to the system
hugetlb allocations can become when they do not yield additional
hugepages, but that can be done at any time later as a general
improvement rather than as part of a series aimed at thp.
mm/page_alloc.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4467,12 +4467,14 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (page)
goto got_pg;
- if (order >= pageblock_order && (gfp_mask & __GFP_IO)) {
+ if (order >= pageblock_order && (gfp_mask & __GFP_IO) &&
+ !(gfp_mask & __GFP_RETRY_MAYFAIL)) {
/*
* If allocating entire pageblock(s) and compaction
* failed because all zones are below low watermarks
* or is prohibited because it recently failed at this
- * order, fail immediately.
+ * order, fail immediately unless the allocator has
+ * requested compaction and reclaim retry.
*
* Reclaim is
* - potentially very expensive because zones are far