Message-ID: <20181211163605.GC12597@edgewater-inn.cambridge.arm.com>
Date: Tue, 11 Dec 2018 16:36:05 +0000
From: Will Deacon <will.deacon@....com>
To: Robin Murphy <robin.murphy@....com>
Cc: catalin.marinas@....com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, jonathan.cameron@...wei.com,
cyrilc@...inx.com, james.morse@....com, anshuman.khandual@....com
Subject: Re: [PATCH] arm64: Add memory hotplug support
On Mon, Dec 10, 2018 at 03:29:01PM +0000, Robin Murphy wrote:
> Wire up the basic support for hot-adding memory. Since memory hotplug
> is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also
> cross-check the presence of a section in the manner of the generic
> implementation, before falling back to memblock to check for no-map
> regions within a present section as before. By having arch_add_memory()
> create the linear mapping first, this then makes everything work in the
> way that __add_section() expects.
>
> We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates
> should be safe from races by virtue of the global device hotplug lock.
>
> Signed-off-by: Robin Murphy <robin.murphy@....com>
> ---
>
> Looks like I'm not going to have the whole pte_devmap story figured out
> in time to land any ZONE_DEVICE support this cycle, but since this patch
> also stands alone as a complete feature (and has ended up remarkably
> simple and self-contained), I hope we might consider getting it merged
> on its own merit.
>
> Robin.
>
>  arch/arm64/Kconfig   |  3 +++
>  arch/arm64/mm/init.c |  8 ++++++++
>  arch/arm64/mm/mmu.c  | 12 ++++++++++++
>  arch/arm64/mm/numa.c | 10 ++++++++++
>  4 files changed, 33 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 6d2b25f51bb3..7b855ae45747 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -261,6 +261,9 @@ config ZONE_DMA32
>  config HAVE_GENERIC_GUP
>  	def_bool y
>
> +config ARCH_ENABLE_MEMORY_HOTPLUG
> +	def_bool y
> +
>  config SMP
>  	def_bool y
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 2983e0fc1786..82e0b08f2e31 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -291,6 +291,14 @@ int pfn_valid(unsigned long pfn)
>
>  	if ((addr >> PAGE_SHIFT) != pfn)
>  		return 0;
> +
> +#ifdef CONFIG_SPARSEMEM
> +	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
> +		return 0;
> +
> +	if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
> +		return 0;
I'm a bit nervous about the call to __nr_to_section() here. How do we
ensure that the section number we're passing stays within the bounds of
the mem_section array?
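
For discussion's sake, the kind of guard I'd feel happier with looks
something like this (untested sketch against the definitions in
include/linux/mmzone.h; nr_to_section_safe() is a made-up name):

/*
 * Hypothetical helper: like __nr_to_section(), but refuses to index
 * out of range or to dereference a mem_section root that may not have
 * been allocated (possible with SPARSEMEM_EXTREME).
 */
static inline struct mem_section *nr_to_section_safe(unsigned long nr)
{
	if (nr >= NR_MEM_SECTIONS)
		return NULL;
#ifdef CONFIG_SPARSEMEM_EXTREME
	if (!mem_section[SECTION_NR_TO_ROOT(nr)])
		return NULL;
#endif
	return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
}

valid_section() already tolerates a NULL argument, so the caller
wouldn't need to change much.
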
> +#endif
>  	return memblock_is_map_memory(addr);
>  }
>  EXPORT_SYMBOL(pfn_valid);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index e1b2d58a311a..22379a74d289 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1044,3 +1044,15 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
>  	pmd_free(NULL, table);
>  	return 1;
>  }
> +
> +#ifdef CONFIG_MEMORY_HOTPLUG
> +int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
> +		    bool want_memblock)
> +{
> +	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> +			     size, PAGE_KERNEL, pgd_pgtable_alloc, 0);
> +
> +	return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
> +			   altmap, want_memblock);
> +}
> +#endif
If we're mapping the new memory into the linear map, shouldn't we be
respecting rodata_full and debug page alloc by forcing page granularity
and tweaking the permissions?
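
Something along the lines of the below is what I have in mind
(untested sketch, reusing the NO_BLOCK_MAPPINGS/NO_CONT_MAPPINGS flags
we already have in mmu.c):

	int flags = 0;

	/*
	 * Force page granularity so that set_memory_*() and the debug
	 * page allocator can change permissions on individual pages of
	 * the hotplugged range later on.
	 */
	if (rodata_full || debug_pagealloc_enabled())
		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
			     size, PAGE_KERNEL, pgd_pgtable_alloc, flags);
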
Will