Message-Id: <1627970362-61305-4-git-send-email-feng.tang@intel.com>
Date: Tue, 3 Aug 2021 13:59:20 +0800
From: Feng Tang <feng.tang@...el.com>
To: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Dave Hansen <dave.hansen@...el.com>,
Ben Widawsky <ben.widawsky@...el.com>
Cc: linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
Andrea Arcangeli <aarcange@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Randy Dunlap <rdunlap@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Andi Kleen <ak@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>, ying.huang@...el.com,
Feng Tang <feng.tang@...el.com>
Subject: [PATCH v7 3/5] mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
From: Ben Widawsky <ben.widawsky@...el.com>
Implement the missing huge page allocation functionality while obeying
the preferred node semantics of MPOL_PREFERRED_MANY. As with general
page allocation, this uses a fallback mechanism that first tries the
set of preferred nodes and then falls back to all other nodes.
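As a rough illustration only, the two-pass pattern looks like the
sketch below (the helper name and the bare __alloc_pages() call are
stand-ins for illustration; the actual hugetlb paths go through
dequeue_huge_page_nodemask() and alloc_surplus_huge_page() as in the
diff):

	/*
	 * Illustrative-only sketch of the MPOL_PREFERRED_MANY fallback:
	 * first try just the preferred nodes with a lightweight, quiet
	 * gfp mask, then retry against all nodes with the original mask.
	 */
	static struct page *alloc_preferred_many(gfp_t gfp_mask, int nid,
						 nodemask_t *preferred)
	{
		gfp_t gfp = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;
		struct page *page;

		/* First pass: preferred nodes only, no direct reclaim */
		page = __alloc_pages(gfp, 0, nid, preferred);
		if (page)
			return page;

		/* Second pass: fall back to all nodes */
		return __alloc_pages(gfp_mask, 0, nid, NULL);
	}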
[akpm: fix compiling issue when merging with other hugetlb patch]
[Thanks to 0day bot for catching the missing #ifdef CONFIG_NUMA issue]
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
Suggested-by: Michal Hocko <mhocko@...e.com>
Signed-off-by: Ben Widawsky <ben.widawsky@...el.com>
Co-developed-by: Feng Tang <feng.tang@...el.com>
Signed-off-by: Feng Tang <feng.tang@...el.com>
---
mm/hugetlb.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 95714fb28150..9279f6d478d9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1166,7 +1166,20 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
gfp_mask = htlb_alloc_mask(h);
nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+ if (mpol->mode == MPOL_PREFERRED_MANY) {
+ page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+ if (page)
+ goto check_reserve;
+ /* Fallback to all nodes */
+ nodemask = NULL;
+ }
+#endif
page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+
+#ifdef CONFIG_NUMA
+check_reserve:
+#endif
if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
SetHPageRestoreReserve(page);
h->resv_huge_pages--;
@@ -2147,6 +2160,21 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
nodemask_t *nodemask;
nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
+#ifdef CONFIG_NUMA
+ if (mpol->mode == MPOL_PREFERRED_MANY) {
+ gfp_t gfp = gfp_mask | __GFP_NOWARN;
+
+ gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
+ page = alloc_surplus_huge_page(h, gfp, nid, nodemask, false);
+ if (page) {
+ mpol_cond_put(mpol);
+ return page;
+ }
+
+ /* Fallback to all nodes */
+ nodemask = NULL;
+ }
+#endif
page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask, false);
mpol_cond_put(mpol);
--
2.14.1