Message-ID: <c5b82d52-f1be-0701-e36b-49aae4bb5cdb@redhat.com>
Date: Wed, 2 Dec 2020 10:36:54 +0100
From: David Hildenbrand <david@...hat.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: mhocko@...nel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, vbabka@...e.cz, pasha.tatashin@...een.com
Subject: Re: [RFC PATCH v3 1/4] mm,memory_hotplug: Add
mhp_supports_memmap_on_memory
On 01.12.20 12:51, Oscar Salvador wrote:
> mhp_supports_memmap_on_memory is meant to be used by the caller prior
> to hot-adding memory in order to figure out whether it can enable
> MHP_MEMMAP_ON_MEMORY or not.
>
> Enabling MHP_MEMMAP_ON_MEMORY requires:
>
> - memmap_on_memory_enabled is set (by mhp_memmap_on_memory kernel boot option)
> - CONFIG_SPARSEMEM_VMEMMAP
> - architecture support for altmap
> - hot-added range spans a single memory block
Instead of adding these arch callbacks, what about a config option
ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
that gets selected by the archs with CONFIG_SPARSEMEM_VMEMMAP?
mhp_supports_memmap_on_memory() then becomes even more trivial.
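Something like the following (just a rough sketch to illustrate the idea,
option name and placement up for discussion):

config ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
	bool

with the archs that can pass an altmap selecting it, e.g. in
arch/x86/Kconfig:

	select ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE if SPARSEMEM_VMEMMAP

Then the generic check (ignoring the boot option you add later) boils
down to

bool mhp_supports_memmap_on_memory(unsigned long size)
{
	return IS_ENABLED(CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE) &&
	       size == memory_block_size_bytes();
}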
>
> Note that mhp_memmap_on_memory kernel boot option will be added in
> a coming patch.
I think it makes sense to
a) separate off the arch changes into separate patches, clarifying why
it can be used. Move these patches to the end of the series.
b) squash the remainder into patch #2
>
> At the moment, only three architectures support passing altmap when
> building the page tables: x86, POWERPC and ARM.
> Define an arch_support_memmap_on_memory function on those architectures
> that returns true, and define a __weak variant of it that will be used
> on the others.
[...]
> +/*
> + * We want memmap (struct page array) to be self contained.
> + * To do so, we will use the beginning of the hot-added range to build
> + * the page tables for the memmap array that describes the entire range.
> + * Only selected architectures support it with SPARSE_VMEMMAP.
You might want to add how the caller can calculate the necessary size
and that this calculated piece of memory to be added will be
accessed before onlining these pages. This is relevant, e.g., if
virtio-mem, the Hyper-V balloon, or the Xen balloon want to use this
mechanism. Also, it's somewhat incompatible with standby memory, where
memory cannot be accessed prior to onlining. So pointing that access
out might be valuable.
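For example, a caller could calculate the number of pages that will be
consumed by the memmap roughly like this (just a sketch, ignoring any
alignment requirements the arch might have):

	/* one struct page per page in the hot-added range ... */
	unsigned long nr_pages = size >> PAGE_SHIFT;
	/* ... and the memmap itself gets placed into whole pages */
	unsigned long nr_vmemmap_pages =
		DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);

These first nr_vmemmap_pages pages of the range are what gets written
when building the memmap, before the memory is onlined.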
> + */
> +#define MHP_MEMMAP_ON_MEMORY ((__force mhp_t)BIT(1))
> +
> /*
> * Extended parameters for memory hotplug:
> * altmap: alternative allocator for memmap array (optional)
> @@ -129,6 +137,7 @@ extern int try_online_node(int nid);
>
> extern int arch_add_memory(int nid, u64 start, u64 size,
> struct mhp_params *params);
> +extern bool arch_support_memmap_on_memory(void);
> extern u64 max_mem_size;
>
> extern int memhp_online_type_from_str(const char *str);
> @@ -361,6 +370,7 @@ extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
> unsigned long pnum);
> extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
> unsigned long nr_pages);
> +extern bool mhp_supports_memmap_on_memory(unsigned long size);
> extern int arch_create_linear_mapping(int nid, u64 start, u64 size,
> struct mhp_params *params);
> void arch_remove_linear_mapping(u64 start, u64 size);
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index a8cef4955907..e3c310225a60 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1011,6 +1011,20 @@ static int online_memory_block(struct memory_block *mem, void *arg)
> return device_online(&mem->dev);
> }
>
> +bool __weak arch_support_memmap_on_memory(void)
> +{
> + return false;
> +}
> +
> +bool mhp_supports_memmap_on_memory(unsigned long size)
> +{
> + if (!arch_support_memmap_on_memory() ||
> + !IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ||
> + size > memory_block_size_bytes())
> + return false;
> + return true;
You can simplify to

return arch_support_memmap_on_memory() &&
       IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) &&
       size == memory_block_size_bytes();
--
Thanks,
David / dhildenb