Message-Id: <20160721073614.24395-4-vbabka@suse.cz>
Date: Thu, 21 Jul 2016 09:36:09 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...nel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
David Rientjes <rientjes@...gle.com>,
Rik van Riel <riel@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH v5 3/8] mm, page_alloc: don't retry initial attempt in slowpath

After __alloc_pages_slowpath() sets up new alloc_flags and wakes up kswapd, it
first tries get_page_from_freelist() with the new alloc_flags, as it may
succeed e.g. thanks to using the min watermark instead of the low watermark.
It makes sense to do this attempt before adjusting the zonelist based on
alloc_flags/gfp_mask, as it's still a relatively fast path if we just wake up
kswapd and successfully allocate.
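
(A toy, userspace-compilable sketch of the watermark point above; the
constants and names here are invented, and the real check lives in
zone_watermark_ok() against per-zone counters.)

	#include <stdbool.h>
	#include <stdio.h>

	/* Invented numbers; real values come from per-zone counters. */
	#define WMARK_LOW	128
	#define WMARK_MIN	64

	/* The fastpath checks the low watermark; after waking kswapd the
	 * slowpath retries against the lower min watermark, so the same
	 * amount of free memory can now satisfy the request. */
	static bool watermark_ok(unsigned long free_pages, unsigned long mark)
	{
		return free_pages > mark;
	}

	int main(void)
	{
		unsigned long free_pages = 100;	/* between min and low */

		printf("low watermark ok: %d\n", watermark_ok(free_pages, WMARK_LOW));
		printf("min watermark ok: %d\n", watermark_ok(free_pages, WMARK_MIN));
		return 0;
	}
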
This patch therefore moves the initial attempt above the retry label and
reorganizes the part below the retry label a bit. We still have to attempt
get_page_from_freelist() on each retry, as some allocations cannot make that
attempt as part of direct reclaim or compaction, and yet are not allowed to
fail (even though they do a WARN_ON_ONCE() and thus should not exist). We can
reuse the call meant for the ALLOC_NO_WATERMARKS attempt and just set
alloc_flags to ALLOC_NO_WATERMARKS if the context allows it. As a side
effect, the attempts from direct reclaim/compaction will also no longer obey
watermarks once this is set, but there's little harm in that.
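
(Illustrative only, again compilable as plain userspace C; the ALLOC_* values
and the pfmemalloc_allowed() stand-in are invented, not the kernel's.)

	#include <stdbool.h>
	#include <stdio.h>

	/* Invented flag values, only for the sketch. */
	#define ALLOC_WMARK_MIN		0x01
	#define ALLOC_NO_WATERMARKS	0x04

	/* Stand-in for gfp_pfmemalloc_allowed(); pretend the context may
	 * ignore watermarks. */
	static bool pfmemalloc_allowed(void)
	{
		return true;
	}

	int main(void)
	{
		int alloc_flags = ALLOC_WMARK_MIN;

		/* Instead of a second, dedicated no-watermarks attempt, fold
		 * the decision into alloc_flags so the one attempt per retry
		 * (and any attempt made during reclaim/compaction) sees it. */
		if (pfmemalloc_allowed())
			alloc_flags = ALLOC_NO_WATERMARKS;

		printf("alloc_flags = %#x\n", alloc_flags);
		return 0;
	}
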
Kswapd wakeups are also done on each retry, to guard against potential races
where kswapd goes to sleep while a process (that may not be able to reclaim
by itself) is still looping.
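
(A toy model of why the wakeup must repeat, not kernel code; all names are
invented.)

	#include <stdbool.h>
	#include <stdio.h>

	static bool reclaimer_awake;	/* toy stand-in for kswapd's state */

	static void wake_reclaimer(void)
	{
		reclaimer_awake = true;
	}

	int main(void)
	{
		for (int attempt = 0; attempt < 3; attempt++) {
			/* Waking on every pass, not just once before the
			 * loop, means a racy sleep cannot strand us. */
			wake_reclaimer();
			printf("attempt %d: reclaimer awake = %d\n",
			       attempt, reclaimer_awake);
			reclaimer_awake = false;	/* simulate the race */
		}
		return 0;
	}
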
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
Acked-by: Michal Hocko <mhocko@...e.com>
Acked-by: Mel Gorman <mgorman@...hsingularity.net>
Acked-by: David Rientjes <rientjes@...gle.com>
---
 mm/page_alloc.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 25e19d7f3ced..ed31658214a8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3541,35 +3541,42 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 */
 	alloc_flags = gfp_to_alloc_flags(gfp_mask);
 
+	if (gfp_mask & __GFP_KSWAPD_RECLAIM)
+		wake_all_kswapds(order, ac);
+
+	/*
+	 * The adjusted alloc_flags might result in immediate success, so try
+	 * that first
+	 */
+	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
+	if (page)
+		goto got_pg;
+
+
 retry:
+	/* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
 	if (gfp_mask & __GFP_KSWAPD_RECLAIM)
 		wake_all_kswapds(order, ac);
 
+	if (gfp_pfmemalloc_allowed(gfp_mask))
+		alloc_flags = ALLOC_NO_WATERMARKS;
+
 	/*
 	 * Reset the zonelist iterators if memory policies can be ignored.
 	 * These allocations are high priority and system rather than user
 	 * orientated.
 	 */
-	if (!(alloc_flags & ALLOC_CPUSET) || gfp_pfmemalloc_allowed(gfp_mask)) {
+	if (!(alloc_flags & ALLOC_CPUSET) || (alloc_flags & ALLOC_NO_WATERMARKS)) {
 		ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
 		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
 					ac->high_zoneidx, ac->nodemask);
 	}
 
-	/* This is the last chance, in general, before the goto nopage. */
+	/* Attempt with potentially adjusted zonelist and alloc_flags */
 	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
 	if (page)
 		goto got_pg;
 
-	/* Allocate without watermarks if the context allows */
-	if (gfp_pfmemalloc_allowed(gfp_mask)) {
-
-		page = get_page_from_freelist(gfp_mask, order,
-						ALLOC_NO_WATERMARKS, ac);
-		if (page)
-			goto got_pg;
-	}
-
 	/* Caller is not willing to reclaim, we can't balance anything */
 	if (!can_direct_reclaim) {
 		/*
--
2.9.0