Message-ID: <20200403101843.406634-2-aslan@fb.com>
Date: Fri, 3 Apr 2020 03:18:43 -0700
From: Aslan Bakirov <aslan@...com>
To: <akpm@...ux-foundation.org>
CC: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<kernel-team@...com>, <riel@...riel.com>, <guro@...com>,
<mhocko@...nel.org>, <hannes@...xchg.org>,
Aslan Bakirov <aslan@...com>
Subject: [PATCH 2/2] mm: hugetlb: Use node interface of cma
With the introduction of the NUMA node interface for CMA, use that
interface to allocate memory on specific NUMA nodes if NUMA is configured.
This is more efficient and cleaner because, first, instead of iterating
over the memory ranges of each NUMA node, cma_declare_contiguous_nid()
does its own address finding when 0 is passed for both the base and the
limit, and second, it also handles the case where NUMA is not configured,
by passing NUMA_NO_NODE as the node argument.
In addition, the check whether the desired amount of memory is available
now happens in cma_declare_contiguous_nid(), because the base and limit
are determined there, since 0 (any) is passed for both base and limit
as arguments to the function.
Signed-off-by: Aslan Bakirov <aslan@...com>
---
mm/hugetlb.c | 40 +++++++++++-----------------------------
1 file changed, 11 insertions(+), 29 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b9f0c903c4cf..62989220c4ff 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5573,42 +5573,24 @@ void __init hugetlb_cma_reserve(int order)
reserved = 0;
for_each_node_state(nid, N_ONLINE) {
- unsigned long min_pfn = 0, max_pfn = 0;
int res;
-#ifdef CONFIG_NUMA
- unsigned long start_pfn, end_pfn;
- int i;
- for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
- if (!min_pfn)
- min_pfn = start_pfn;
- max_pfn = end_pfn;
- }
-#else
- min_pfn = min_low_pfn;
- max_pfn = max_low_pfn;
-#endif
size = min(per_node, hugetlb_cma_size - reserved);
size = round_up(size, PAGE_SIZE << order);
-
- if (size > ((max_pfn - min_pfn) << PAGE_SHIFT) / 2) {
- pr_warn("hugetlb_cma: cma_area is too big, please try less than %lu MiB\n",
- round_down(((max_pfn - min_pfn) << PAGE_SHIFT) *
- nr_online_nodes / 2 / SZ_1M,
- PAGE_SIZE << order));
- break;
- }
-
- res = cma_declare_contiguous(PFN_PHYS(min_pfn), size,
- PFN_PHYS(max_pfn),
+
+
+#ifndef CONFIG_NUMA
+ nid = NUMA_NO_NODE;
+#endif
+ res = cma_declare_contiguous_nid(0, size,
+ 0,
PAGE_SIZE << order,
0, false,
- "hugetlb", &hugetlb_cma[nid]);
+ "hugetlb", &hugetlb_cma[nid], nid);
+
if (res) {
- phys_addr_t begpa = PFN_PHYS(min_pfn);
- phys_addr_t endpa = PFN_PHYS(max_pfn);
- pr_warn("%s: reservation failed: err %d, node %d, [%pap, %pap)\n",
- __func__, res, nid, &begpa, &endpa);
+ pr_warn("%s: reservation failed: err %d, node %d\n",
+ __func__, res, nid);
break;
}
--
2.24.1