Message-Id: <20210422061902.21614-1-rppt@kernel.org>
Date: Thu, 22 Apr 2021 09:18:58 +0300
From: Mike Rapoport <rppt@...nel.org>
To: linux-arm-kernel@...ts.infradead.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>,
Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...nel.org>,
Mike Rapoport <rppt@...ux.ibm.com>,
Will Deacon <will@...nel.org>, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH v3 0/4] arm64: drop pfn_valid_within() and simplify pfn_valid()
From: Mike Rapoport <rppt@...ux.ibm.com>
Hi,
These patches aim to remove CONFIG_HOLES_IN_ZONE and essentially hardwire
pfn_valid_within() to 1.
The idea is to mark NOMAP pages as reserved in the memory map and restore
the intended semantics of pfn_valid() to designate availability of struct
page for a pfn.
With this, the core mm will be able to cope with the fact that it cannot use
NOMAP pages, and the holes created by NOMAP ranges within MAX_ORDER blocks
will be treated correctly even without pfn_valid_within().
The patches are only boot-tested on qemu-system-aarch64, so I'd really
appreciate memory stress tests on real hardware.
If this actually works, we'll be one step closer to dropping the custom
pfn_valid() on arm64 altogether.
v3:
* Fix minor issues found by Anshuman
* Freshen up the declaration of pfn_valid() to make it consistent with
pfn_is_map_memory()
* Add more Acked-by and Reviewed-by tags, thanks Anshuman and David
v2: Link: https://lore.kernel.org/lkml/20210421065108.1987-1-rppt@kernel.org
* Add check for PFN overflow in pfn_is_map_memory()
* Add Acked-by and Reviewed-by tags, thanks David.
v1: Link: https://lore.kernel.org/lkml/20210420090925.7457-1-rppt@kernel.org
* Add comment about the semantics of pfn_valid() as Anshuman suggested
* Extend comments about MEMBLOCK_NOMAP, per Anshuman
* Use pfn_is_map_memory() name for the exported wrapper for
memblock_is_map_memory(). It is still local to arch/arm64 in the end
because of header dependency issues.
rfc: Link: https://lore.kernel.org/lkml/20210407172607.8812-1-rppt@kernel.org
Mike Rapoport (4):
include/linux/mmzone.h: add documentation for pfn_valid()
memblock: update initialization of reserved pages
arm64: decouple check whether pfn is in linear map from pfn_valid()
arm64: drop pfn_valid_within() and simplify pfn_valid()
arch/arm64/Kconfig | 3 ---
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/include/asm/page.h | 3 ++-
arch/arm64/kvm/mmu.c | 2 +-
arch/arm64/mm/init.c | 16 ++++++++++++++--
arch/arm64/mm/ioremap.c | 4 ++--
arch/arm64/mm/mmu.c | 2 +-
include/linux/memblock.h | 4 +++-
include/linux/mmzone.h | 11 +++++++++++
mm/memblock.c | 28 ++++++++++++++++++++++++++--
10 files changed, 61 insertions(+), 14 deletions(-)
base-commit: e49d033bddf5b565044e2abe4241353959bc9120
--
2.28.0