Message-ID: <7bc09505-72f1-e297-40a9-639b3e9b1c61@arm.com>
Date: Thu, 8 Apr 2021 10:42:43 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Mike Rapoport <rppt@...nel.org>,
linux-arm-kernel@...ts.infradead.org
Cc: Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>,
Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...ux.ibm.com>,
Will Deacon <will@...nel.org>, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC/RFT PATCH 3/3] arm64: drop pfn_valid_within() and simplify
pfn_valid()
On 4/7/21 10:56 PM, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@...ux.ibm.com>
>
> The arm64 version of pfn_valid() differs from the generic one for two
> reasons:
>
> * Parts of the memory map are freed during boot. This makes it necessary to
>   verify that there is actual physical memory that corresponds to a pfn,
>   which is done by querying memblock.
>
> * There are NOMAP memory regions. These regions are not mapped in the
>   linear map and, until the previous commit, the struct pages representing
>   these areas had default values.
>
> As a consequence of the absence of special treatment of NOMAP regions in
> the memory map, it was necessary to use memblock_is_map_memory() in
> pfn_valid() and to have pfn_valid_within() aliased to pfn_valid() so that
> generic mm functionality would not treat a NOMAP page as a normal page.
>
> Since the NOMAP regions are now marked as PageReserved(), pfn walkers and
> the rest of core mm will treat them as unusable memory and thus
> pfn_valid_within() is no longer required at all and can be disabled by
> removing CONFIG_HOLES_IN_ZONE on arm64.
But what about the parts of the memory map that are freed during boot
(mentioned above)? Would they not still make CONFIG_HOLES_IN_ZONE
applicable, and hence pfn_valid_within()?
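
For context, pfn_valid_within() only does real work when CONFIG_HOLES_IN_ZONE
is set; roughly, from include/linux/mmzone.h of this era:

#ifdef CONFIG_HOLES_IN_ZONE
#define pfn_valid_within(pfn)	pfn_valid(pfn)
#else
#define pfn_valid_within(pfn)	(1)
#endif

So once CONFIG_HOLES_IN_ZONE is gone, intra-zone pfn walkers will assume that
every pfn within a valid section has a properly initialized struct page.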
>
> pfn_valid() can be slightly simplified by replacing
> memblock_is_map_memory() with memblock_is_memory().
Just to understand this better: pfn_valid() will now return true for all
MEMBLOCK_NOMAP based memory, but that is okay because core MM will still
ignore those pages as unusable memory, given that they are PageReserved().
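
A minimal sketch of that expectation (a hypothetical walker, not code from
this patch; start_pfn/end_pfn are assumed inputs): even though pfn_valid()
now returns true for NOMAP pfns, a typical pfn walker still skips them via
the PageReserved() check:

	unsigned long pfn;
	struct page *page;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (!pfn_valid(pfn))
			continue;	/* no usable memory map entry */
		page = pfn_to_page(pfn);
		if (PageReserved(page))
			continue;	/* NOMAP regions are now caught here */
		/* ... operate on ordinary memory ... */
	}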
>
> Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
> ---
>  arch/arm64/Kconfig   | 3 ---
>  arch/arm64/mm/init.c | 4 ++--
>  2 files changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index e4e1b6550115..58e439046d05 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1040,9 +1040,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
>  	def_bool y
>  	depends on NUMA
>
> -config HOLES_IN_ZONE
> -	def_bool y
> -
>  source "kernel/Kconfig.hz"
>
>  config ARCH_SPARSEMEM_ENABLE
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 258b1905ed4a..bb6dd406b1f0 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -243,7 +243,7 @@ int pfn_valid(unsigned long pfn)
>
>  	/*
>  	 * ZONE_DEVICE memory does not have the memblock entries.
> -	 * memblock_is_map_memory() check for ZONE_DEVICE based
> +	 * memblock_is_memory() check for ZONE_DEVICE based
>  	 * addresses will always fail. Even the normal hotplugged
>  	 * memory will never have MEMBLOCK_NOMAP flag set in their
>  	 * memblock entries. Skip memblock search for all non early
> @@ -254,7 +254,7 @@ int pfn_valid(unsigned long pfn)
>  		return pfn_section_valid(ms, pfn);
>  	}
>  #endif
> -	return memblock_is_map_memory(addr);
> +	return memblock_is_memory(addr);
>  }
>  EXPORT_SYMBOL(pfn_valid);
>
>
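
For completeness, the semantic difference between the two memblock helpers
(simplified from mm/memblock.c of this era; only the NOMAP check differs):

bool memblock_is_memory(phys_addr_t addr)
{
	return memblock_search(&memblock.memory, addr) != -1;
}

bool memblock_is_map_memory(phys_addr_t addr)
{
	int i = memblock_search(&memblock.memory, addr);

	if (i == -1)
		return false;
	return !memblock_is_nomap(&memblock.memory.regions[i]);
}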