Message-Id: <20210119043920.155044-7-pasha.tatashin@soleen.com>
Date: Mon, 18 Jan 2021 23:39:12 -0500
From: Pavel Tatashin <pasha.tatashin@...een.com>
To: pasha.tatashin@...een.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, akpm@...ux-foundation.org, vbabka@...e.cz,
mhocko@...e.com, david@...hat.com, osalvador@...e.de,
dan.j.williams@...el.com, sashal@...nel.org,
tyhicks@...ux.microsoft.com, iamjoonsoo.kim@....com,
mike.kravetz@...cle.com, rostedt@...dmis.org, mingo@...hat.com,
jgg@...pe.ca, peterz@...radead.org, mgorman@...e.de,
willy@...radead.org, rientjes@...gle.com, jhubbard@...dia.com,
linux-doc@...r.kernel.org, ira.weiny@...el.com,
linux-kselftest@...r.kernel.org
Subject: [PATCH v5 06/14] mm: apply per-task gfp constraints in fast path
Function current_gfp_context() is called after the fast path. However, we
will soon add more constraints that also limit the allowed zones based on
context. Move this call into the fast path, so the correct constraints are
applied to all allocations.
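
For context, on this baseline current_gfp_context() (include/linux/sched/mm.h)
looks roughly like the following; the exact body may differ slightly between
releases:

    static inline gfp_t current_gfp_context(gfp_t flags)
    {
            unsigned int pflags = READ_ONCE(current->flags);

            if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS))) {
                    /* NOIO is the weaker context, so it takes precedence */
                    if (pflags & PF_MEMALLOC_NOIO)
                            flags &= ~(__GFP_IO | __GFP_FS);
                    else if (pflags & PF_MEMALLOC_NOFS)
                            flags &= ~__GFP_FS;
            }
            return flags;
    }

A caller marks such a scope with memalloc_noio_save()/memalloc_noio_restore()
(resp. the nofs variants), and every allocation inside the scope implicitly
drops __GFP_IO and/or __GFP_FS.
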
Also update .reclaim_idx based on the value returned by
current_gfp_context(), because that value will soon modify the allowed
zones.
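
To illustrate (a sketch of the baseline mm/vmscan.c, not part of this diff),
try_to_free_pages() initializes its scan_control roughly as

    struct scan_control sc = {
            .nr_to_reclaim = SWAP_CLUSTER_MAX,
            .gfp_mask = current_gfp_context(gfp_mask),
            .reclaim_idx = gfp_zone(gfp_mask),
            /* ... */
    };

so once the fast path filters gfp_mask up front, the mask reaching reclaim
is already constrained and .reclaim_idx is derived from the filtered value
as well.
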
Note:
With this patch we do one extra current->flags load in the fast path, but
we already load current->flags there:

  __alloc_pages_nodemask()
    prepare_alloc_pages()
      current_alloc_flags(gfp_mask, *alloc_flags);

Later, when the zone constraint logic is added to current_gfp_context(),
we will be able to remove the current->flags load from
current_alloc_flags() and therefore return the fast path to its current
performance level.
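
The duplicate load referenced above sits in current_alloc_flags()
(mm/page_alloc.c), which on this baseline looks roughly like:

    static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
                                                   unsigned int alloc_flags)
    {
    #ifdef CONFIG_CMA
            unsigned int pflags = current->flags;

            if (!(pflags & PF_MEMALLOC_NOCMA) &&
                gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
                    alloc_flags |= ALLOC_CMA;
    #endif
            return alloc_flags;
    }

Once the CMA/zone constraint moves into current_gfp_context(), this second
current->flags load becomes unnecessary.
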
Suggested-by: Michal Hocko <mhocko@...nel.org>
Signed-off-by: Pavel Tatashin <pasha.tatashin@...een.com>
Acked-by: Michal Hocko <mhocko@...e.com>
---
mm/page_alloc.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0114cdfe4aae..de9bcd08d002 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4979,6 +4979,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	}
 
 	gfp_mask &= gfp_allowed_mask;
+	/*
+	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
+	 * resp. GFP_NOIO which has to be inherited for all allocation requests
+	 * from a particular context which has been marked by
+	 * memalloc_no{fs,io}_{save,restore}.
+	 */
+	gfp_mask = current_gfp_context(gfp_mask);
 	alloc_mask = gfp_mask;
 	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
 		return NULL;
@@ -4994,13 +5001,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	if (likely(page))
 		goto out;
 
-	/*
-	 * Apply scoped allocation constraints. This is mainly about GFP_NOFS
-	 * resp. GFP_NOIO which has to be inherited for all allocation requests
-	 * from a particular context which has been marked by
-	 * memalloc_no{fs,io}_{save,restore}.
-	 */
-	alloc_mask = current_gfp_context(gfp_mask);
+	alloc_mask = gfp_mask;
 	ac.spread_dirty_pages = false;
 
 	/*
--
2.25.1