Message-ID: <65164fdb-38b0-e239-84cc-ab22b92214c5@suse.cz>
Date: Thu, 2 Apr 2020 17:48:00 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Aslan Bakirov <aslan@...com>, akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kernel-team@...com, riel@...riel.com, guro@...com,
mhocko@...nel.org, hannes@...xchg.org
Subject: Re: [PATCH 1/2] mm: cma: NUMA node interface
On 3/26/20 10:27 PM, Aslan Bakirov wrote:
> I've noticed that there is no interface exposed by CMA which would let me
> declare contiguous memory on a particular NUMA node.
>
> This patchset adds the ability to try to allocate contiguous memory on
> specific node.
I would say more explicitly that 'try' here means it will fall back to other
nodes if the specific one doesn't work. At least AFAICS that's what it does by
calling memblock_alloc_range_nid() with exact_nid=false.
> Implement a new method for declaring contiguous memory on a particular node
> and keep cma_declare_contiguous() as a wrapper.
Should there also be support for using this node specification in the cma=X
boot param?
> Signed-off-by: Aslan Bakirov <aslan@...com>
...
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -220,7 +220,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> }
>
> /**
> - * cma_declare_contiguous() - reserve custom contiguous area
> + * cma_declare_contiguous_nid() - reserve custom contiguous area
> * @base: Base address of the reserved area optional, use 0 for any
> * @size: Size of the reserved area (in bytes),
> * @limit: End address of the reserved memory (optional, 0 for any).
> @@ -229,6 +229,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> * @fixed: hint about where to place the reserved area
> * @name: The name of the area. See function cma_init_reserved_mem()
> * @res_cma: Pointer to store the created cma region.
> + * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
The bit about fallback should be also specified here.
> *
> * This function reserves memory from early allocator. It should be
> * called by arch specific code once the early allocator (memblock or bootmem)
> @@ -238,10 +239,10 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> * If @fixed is true, reserve contiguous area at exactly @base. If false,
> * reserve in range from @base to @limit.
> */
> -int __init cma_declare_contiguous(phys_addr_t base,
> +int __init cma_declare_contiguous_nid(phys_addr_t base,
> phys_addr_t size, phys_addr_t limit,
> phys_addr_t alignment, unsigned int order_per_bit,
> - bool fixed, const char *name, struct cma **res_cma)
> + bool fixed, const char *name, struct cma **res_cma, int nid)
> {
> phys_addr_t memblock_end = memblock_end_of_DRAM();
> phys_addr_t highmem_start;