Message-ID: <20200403103651.GA22681@dhcp22.suse.cz>
Date: Fri, 3 Apr 2020 12:36:51 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Aslan Bakirov <aslan@...com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kernel-team@...com, riel@...riel.com,
guro@...com, hannes@...xchg.org
Subject: Re: [PATCH 2/2] mm: hugetlb: Use node interface of cma
On Fri 03-04-20 03:18:43, Aslan Bakirov wrote:
> With the introduction of the NUMA node interface for CMA, use that
> interface to allocate memory on NUMA nodes when NUMA is configured.
> This is more efficient and cleaner because, first, instead of iterating
> over the memory range of each NUMA node, cma_declare_contiguous_nid()
> does its own address finding when 0 is passed for both min_pfn and max_pfn,
> and second, it can also handle cases where NUMA is not configured
> by passing NUMA_NO_NODE as an argument.
>
> In addition, the check whether the desired amount of memory is available
> now happens in cma_declare_contiguous_nid(), because base and limit are
> determined there once 0 (any) is passed for both base and limit.
I have asked to merge this one with the original patch from Roman
several times but it seems this is not going to happen. But whatever.
You have likely missed my review feedback http://lkml.kernel.org/r/20200402172404.GV22681@dhcp22.suse.cz.
The ifdef CONFIG_NUMA for the nid definition is pointless.
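IOW something along these lines should be sufficient (a rough, not even
compile-tested sketch, assuming cma_declare_contiguous_nid() copes with a
regular node id on !CONFIG_NUMA as well - with a single node that is node 0
anyway, which is the only one for_each_node_state() will visit):

	for_each_node_state(nid, N_ONLINE) {
		int res;

		size = min(per_node, hugetlb_cma_size - reserved);
		size = round_up(size, PAGE_SIZE << order);

		/*
		 * Let cma_declare_contiguous_nid() pick a suitable range
		 * on this node; base == 0 and limit == 0 mean "anywhere".
		 */
		res = cma_declare_contiguous_nid(0, size, 0,
						 PAGE_SIZE << order,
						 0, false, "hugetlb",
						 &hugetlb_cma[nid], nid);
		if (res) {
			pr_warn("%s: reservation failed: err %d, node %d\n",
				__func__, res, nid);
			break;
		}
		...
	}

i.e. the loop body without any CONFIG_NUMA special casing at all.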
> Signed-off-by: Aslan Bakirov <aslan@...com>
After fixing that, feel free to add
Acked-by: Michal Hocko <mhocko@...e.com>
> ---
> mm/hugetlb.c | 40 +++++++++++-----------------------------
> 1 file changed, 11 insertions(+), 29 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b9f0c903c4cf..62989220c4ff 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5573,42 +5573,24 @@ void __init hugetlb_cma_reserve(int order)
>
> reserved = 0;
> for_each_node_state(nid, N_ONLINE) {
> - unsigned long min_pfn = 0, max_pfn = 0;
> int res;
> -#ifdef CONFIG_NUMA
> - unsigned long start_pfn, end_pfn;
> - int i;
>
> - for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
> - if (!min_pfn)
> - min_pfn = start_pfn;
> - max_pfn = end_pfn;
> - }
> -#else
> - min_pfn = min_low_pfn;
> - max_pfn = max_low_pfn;
> -#endif
> size = min(per_node, hugetlb_cma_size - reserved);
> size = round_up(size, PAGE_SIZE << order);
> -
> - if (size > ((max_pfn - min_pfn) << PAGE_SHIFT) / 2) {
> - pr_warn("hugetlb_cma: cma_area is too big, please try less than %lu MiB\n",
> - round_down(((max_pfn - min_pfn) << PAGE_SHIFT) *
> - nr_online_nodes / 2 / SZ_1M,
> - PAGE_SIZE << order));
> - break;
> - }
> -
> - res = cma_declare_contiguous(PFN_PHYS(min_pfn), size,
> - PFN_PHYS(max_pfn),
> +
> +
> +#ifndef CONFIG_NUMA
> + nid = NUMA_NO_NODE
> +#endif
> + res = cma_declare_contiguous_nid(0, size,
> + 0,
> PAGE_SIZE << order,
> 0, false,
> - "hugetlb", &hugetlb_cma[nid]);
> + "hugetlb", &hugetlb_cma[nid], nid);
> +
> if (res) {
> - phys_addr_t begpa = PFN_PHYS(min_pfn);
> - phys_addr_t endpa = PFN_PHYS(max_pfn);
> - pr_warn("%s: reservation failed: err %d, node %d, [%pap, %pap)\n",
> - __func__, res, nid, &begpa, &endpa);
> + pr_warn("%s: reservation failed: err %d, node %d\n",
> + __func__, res, nid);
> break;
> }
>
> --
> 2.24.1
--
Michal Hocko
SUSE Labs