Message-ID: <20200327080610.GV27965@dhcp22.suse.cz>
Date: Fri, 27 Mar 2020 09:06:10 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Aslan Bakirov <aslan@...com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kernel-team@...com, riel@...riel.com,
guro@...com, hannes@...xchg.org
Subject: Re: [PATCH 2/2] mm: hugetlb: Use node interface of cma
On Thu 26-03-20 14:27:18, Aslan Bakirov wrote:
> With the introduction of the NUMA node interface for CMA, this patch uses that
> interface to allocate memory on NUMA nodes if NUMA is configured.
> This is more efficient and cleaner because, first, instead of iterating over
> the memory range of each NUMA node, cma_declare_contiguous_nid() does
> its own address finding when we pass 0 for both min_pfn and max_pfn,
> and second, it can also handle cases where NUMA is not configured
> by passing NUMA_NO_NODE as an argument.
>
> In addition, the check whether the desired size of memory is available
> now happens in cma_declare_contiguous_nid(), because base and
> limit are determined there, since 0 (any) is passed to the function
> as the argument for both base and limit.
This looks much better than the original patch. Can we simply squash
your and Roman's patches in the mmotm tree and post the result for review
in one piece? It would be slightly easier to review that way.
> Signed-off-by: Aslan Bakirov <aslan@...com>
Thanks!
> ---
> mm/hugetlb.c | 40 +++++++++++-----------------------------
> 1 file changed, 11 insertions(+), 29 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b9f0c903c4cf..62989220c4ff 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5573,42 +5573,24 @@ void __init hugetlb_cma_reserve(int order)
>
> reserved = 0;
> for_each_node_state(nid, N_ONLINE) {
> - unsigned long min_pfn = 0, max_pfn = 0;
> int res;
> -#ifdef CONFIG_NUMA
> - unsigned long start_pfn, end_pfn;
> - int i;
>
> - for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
> - if (!min_pfn)
> - min_pfn = start_pfn;
> - max_pfn = end_pfn;
> - }
> -#else
> - min_pfn = min_low_pfn;
> - max_pfn = max_low_pfn;
> -#endif
> size = min(per_node, hugetlb_cma_size - reserved);
> size = round_up(size, PAGE_SIZE << order);
> -
> - if (size > ((max_pfn - min_pfn) << PAGE_SHIFT) / 2) {
> - pr_warn("hugetlb_cma: cma_area is too big, please try less than %lu MiB\n",
> - round_down(((max_pfn - min_pfn) << PAGE_SHIFT) *
> - nr_online_nodes / 2 / SZ_1M,
> - PAGE_SIZE << order));
> - break;
> - }
> -
> - res = cma_declare_contiguous(PFN_PHYS(min_pfn), size,
> - PFN_PHYS(max_pfn),
> +
> +
> +#ifndef CONFIG_NUMA
> +	nid = NUMA_NO_NODE;
> +#endif
> + res = cma_declare_contiguous_nid(0, size,
> + 0,
> PAGE_SIZE << order,
> 0, false,
> - "hugetlb", &hugetlb_cma[nid]);
> + "hugetlb", &hugetlb_cma[nid], nid);
> +
> if (res) {
> - phys_addr_t begpa = PFN_PHYS(min_pfn);
> - phys_addr_t endpa = PFN_PHYS(max_pfn);
> - pr_warn("%s: reservation failed: err %d, node %d, [%pap, %pap)\n",
> - __func__, res, nid, &begpa, &endpa);
> + pr_warn("%s: reservation failed: err %d, node %d\n",
> + __func__, res, nid);
> break;
> }
>
> --
> 2.17.1
--
Michal Hocko
SUSE Labs