Message-Id: <1606098529-7907-3-git-send-email-anshuman.khandual@arm.com>
Date: Mon, 23 Nov 2020 07:58:48 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: linux-mm@...ck.org, akpm@...ux-foundation.org, david@...hat.com
Cc: Anshuman Khandual <anshuman.khandual@....com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>,
Mark Rutland <mark.rutland@....com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: [RFC 2/3] arm64/mm: Define arch_get_addressable_range()
This overrides arch_get_addressable_range() on the arm64 platform, which will be
used by the recently added generic framework. It drops inside_linear_region()
and the subsequent check in arch_add_memory(), which are no longer required.
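For reference, a rough sketch of how the generic framework side could consume
this hook (illustration only under assumed names; check_hotplug_range() and its
error string are made up here and are not the actual code from patch 1/3):

	static int check_hotplug_range(u64 start, u64 size, bool need_mapping)
	{
		struct range range = arch_get_addressable_range(need_mapping);

		if (start < range.start || (start + size - 1) > range.end) {
			pr_err("[%llx %llx] is not addressable\n",
				start, start + size - 1);
			return -ERANGE;
		}
		return 0;
	}

A caller on the add_memory() or memremap path would run such a check before
doing any real work, so an unaddressable request fails early instead of being
caught (or missed) deep inside arch_add_memory().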
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Will Deacon <will@...nel.org>
Cc: Ard Biesheuvel <ardb@...nel.org>
Cc: Mark Rutland <mark.rutland@....com>
Cc: David Hildenbrand <david@...hat.com>
Cc: linux-arm-kernel@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
---
arch/arm64/include/asm/memory.h | 3 +++
arch/arm64/mm/mmu.c | 19 +++++++++++--------
2 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index cd61239bae8c..0ef7948eb58c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -328,6 +328,9 @@ static inline void *phys_to_virt(phys_addr_t x)
})
void dump_mem_limit(void);
+
+#define arch_get_addressable_range arch_get_addressable_range
+struct range arch_get_addressable_range(bool need_mapping);
#endif /* !ASSEMBLY */
/*
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ca692a815731..a6433caf337f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1444,16 +1444,24 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
free_empty_tables(start, end, PAGE_OFFSET, PAGE_END);
}
-static bool inside_linear_region(u64 start, u64 size)
+struct range arch_get_addressable_range(bool need_mapping)
{
+ struct range memhp_range;
+
/*
* Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
* accommodating both its ends but excluding PAGE_END. Max physical
* range which can be mapped inside this linear mapping range, must
* also be derived from its end points.
*/
- return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
- (start + size - 1) <= __pa(PAGE_END - 1);
+ if (need_mapping) {
+ memhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
+ memhp_range.end = __pa(PAGE_END - 1);
+ } else {
+ memhp_range.start = 0;
+ memhp_range.end = (1ULL << MAX_PHYSMEM_BITS) - 1;
+ }
+ return memhp_range;
}
int arch_add_memory(int nid, u64 start, u64 size,
@@ -1461,11 +1469,6 @@ int arch_add_memory(int nid, u64 start, u64 size,
{
int ret, flags = 0;
- if (!inside_linear_region(start, size)) {
- pr_err("[%llx %llx] is outside linear mapping region\n", start, start + size);
- return -EINVAL;
- }
-
if (rodata_full || debug_pagealloc_enabled())
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
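For illustration only (not part of the patch), assuming a configuration with
MAX_PHYSMEM_BITS = 48, the two cases above resolve as follows:

	need_mapping == true:
		memhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));	/* first byte coverable by the linear map */
		memhp_range.end   = __pa(PAGE_END - 1);			/* last byte coverable by the linear map */

	need_mapping == false:
		memhp_range.start = 0;
		memhp_range.end   = (1ULL << 48) - 1;			/* 0x0000ffffffffffff, i.e. 256TB - 1 */

That is, a request which never gets mapped into the kernel linear map is bounded
only by the maximum physical address the architecture can describe, while anything
that needs a linear mapping must fall inside the physical range backing
[PAGE_OFFSET..(PAGE_END - 1)].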
--
2.20.1