Message-Id: <20210511100550.28178-1-rppt@kernel.org>
Date: Tue, 11 May 2021 13:05:46 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Anshuman Khandual <anshuman.khandual@....com>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>,
Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...nel.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Will Deacon <will@...nel.org>, kvmarm@...ts.cs.columbia.edu,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH v4 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid()
From: Mike Rapoport <rppt@...ux.ibm.com>
Hi,
These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
pfn_valid_within() to 1.
The idea is to mark NOMAP pages as reserved in the memory map and restore
the intended semantics of pfn_valid() to designate availability of struct
page for a pfn.
With this, the core mm is able to cope with the fact that it cannot use
NOMAP pages, and the holes created by NOMAP ranges within MAX_ORDER blocks
are handled correctly even without pfn_valid_within().
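As a rough sketch of the memblock side (simplified, not the patch itself),
marking the struct pages that cover NOMAP regions as reserved could look
along these lines:

	/*
	 * Sketch only: treat struct pages covering MEMBLOCK_NOMAP regions
	 * as reserved, in addition to the ordinary memblock.reserved ranges.
	 */
	static void __init memmap_init_reserved_pages(void)
	{
		struct memblock_region *region;
		phys_addr_t start, end;
		u64 i;

		/* initialize struct pages for the reserved regions */
		for_each_reserved_mem_range(i, &start, &end)
			reserve_bootmem_region(start, end);

		/* also mark struct pages for the NOMAP regions as PageReserved */
		for_each_mem_region(region) {
			if (memblock_is_nomap(region)) {
				start = region->base;
				end = start + region->size;
				reserve_bootmem_region(start, end);
			}
		}
	}

With the struct pages for NOMAP ranges initialized and reserved this way,
pfn_valid() can keep reporting that a struct page exists for those pfns,
while callers that actually need the linear map use a separate check.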
The patches are boot tested on qemu-system-aarch64.
I believe it would be best to route these via the mmotm tree.
v4:
* rebase on v5.13-rc1
v3: Link: https://lore.kernel.org/lkml/20210422061902.21614-1-rppt@kernel.org
* Fix minor issues found by Anshuman
* Freshen up the declaration of pfn_valid() to make it consistent with
pfn_is_map_memory()
* Add more Acked-by and Reviewed-by tags, thanks Anshuman and David
v2: Link: https://lore.kernel.org/lkml/20210421065108.1987-1-rppt@kernel.org
* Add check for PFN overflow in pfn_is_map_memory()
* Add Acked-by and Reviewed-by tags, thanks David.
v1: Link: https://lore.kernel.org/lkml/20210420090925.7457-1-rppt@kernel.org
* Add comment about the semantics of pfn_valid() as Anshuman suggested
* Extend comments about MEMBLOCK_NOMAP, per Anshuman
* Use pfn_is_map_memory() name for the exported wrapper for
memblock_is_map_memory() (see the sketch below). It is still local to
arch/arm64 in the end because of header dependency issues.
rfc: Link: https://lore.kernel.org/lkml/20210407172607.8812-1-rppt@kernel.org
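For illustration only (not the exact patch), the arm64 wrapper mentioned in
the changelog above could look roughly like this, including the PFN overflow
check added in v2:

	/*
	 * Sketch only: pfn_is_map_memory() answers "is this pfn in the
	 * linear map?", while pfn_valid() keeps answering "does this pfn
	 * have a struct page?".
	 */
	int pfn_is_map_memory(unsigned long pfn)
	{
		phys_addr_t addr = PFN_PHYS(pfn);

		/* a pfn that does not round-trip through phys_addr_t overflowed */
		if (PHYS_PFN(addr) != pfn)
			return 0;

		return memblock_is_map_memory(addr);
	}
	EXPORT_SYMBOL(pfn_is_map_memory);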
Mike Rapoport (4):
include/linux/mmzone.h: add documentation for pfn_valid()
memblock: update initialization of reserved pages
arm64: decouple check whether pfn is in linear map from pfn_valid()
arm64: drop pfn_valid_within() and simplify pfn_valid()
arch/arm64/Kconfig | 3 ---
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/include/asm/page.h | 3 ++-
arch/arm64/kvm/mmu.c | 2 +-
arch/arm64/mm/init.c | 14 +++++++++++++-
arch/arm64/mm/ioremap.c | 4 ++--
arch/arm64/mm/mmu.c | 2 +-
include/linux/memblock.h | 4 +++-
include/linux/mmzone.h | 11 +++++++++++
mm/memblock.c | 28 ++++++++++++++++++++++++++--
10 files changed, 60 insertions(+), 13 deletions(-)
base-commit: 6efb943b8616ec53a5e444193dccf1af9ad627b5
--
2.28.0