Message-Id: <1572904606-27961-1-git-send-email-bhsharma@redhat.com>
Date: Tue, 5 Nov 2019 03:26:46 +0530
From: Bhupesh Sharma <bhsharma@...hat.com>
To: linux-arm-kernel@...ts.infradead.org
Cc: bhsharma@...hat.com, bhupesh.linux@...il.com,
James Morse <james.morse@....com>,
Mark Rutland <mark.rutland@....com>,
Will Deacon <will@...nel.org>,
Steve Capper <steve.capper@....com>,
Catalin Marinas <catalin.marinas@....com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
linux-kernel@...r.kernel.org, kexec@...ts.infradead.org
Subject: [PATCH] arm64: mm: Remove MAX_USER_VA_BITS definition
commit 9b31cf493ffa ("arm64: mm: Introduce MAX_USER_VA_BITS definition")
introduced the MAX_USER_VA_BITS definition to support the arm64 mm
use-cases where user-space could use 52-bit virtual addresses while
kernel-space was still limited to a maximum of 48 bits of virtual
addressing.
But now, with commit b6d00d47e81a ("arm64: mm: Introduce 52-bit Kernel
VAs"), the 52-bit-user/48-bit-kernel kconfig option has been removed, so
there is no longer any scenario where the user VA size differs from the
kernel VA size (this holds even with CONFIG_ARM64_FORCE_52BIT enabled).
Hence we can do away with the MAX_USER_VA_BITS macro, as it is equal to
VA_BITS (the maximum VA space size) in all possible use-cases. Note
that even though 'vabits_actual' would be 48 on arm64 hardware that
does not support the ARMv8.2-LVA extension (even when
CONFIG_ARM64_VA_BITS_52 is enabled), VA_BITS would still be set to 52.
Hence this change is safe in all possible VA address space
combinations.
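To illustrate why the substitution is safe, here is a minimal
standalone C sketch of the PTRS_PER_PGD arithmetic (illustrative only,
not part of this patch; the 64K-page/3-level values are assumptions
picked to match the 52-bit VA configuration):

    /* Mirrors the kernel's ARM64_HW_PGTABLE_LEVEL_SHIFT() formula. */
    #include <stdio.h>

    #define PAGE_SHIFT      16  /* 64K pages, as required for 52-bit VAs */
    #define PGTABLE_LEVELS  3
    #define HW_PGTABLE_LEVEL_SHIFT(n) (((PAGE_SHIFT) - 3) * (4 - (n)) + 3)
    #define PGDIR_SHIFT     HW_PGTABLE_LEVEL_SHIFT(4 - PGTABLE_LEVELS) /* 42 */

    int main(void)
    {
        /* VA_BITS == 52: 1 << (52 - 42) == 1024 PGD entries */
        printf("PTRS_PER_PGD(52) = %d\n", 1 << (52 - PGDIR_SHIFT));
        /* VA_BITS == 48: 1 << (48 - 42) == 64 PGD entries */
        printf("PTRS_PER_PGD(48) = %d\n", 1 << (48 - PGDIR_SHIFT));
        return 0;
    }

With CONFIG_ARM64_VA_BITS_52 enabled, VA_BITS == MAX_USER_VA_BITS == 52,
so replacing one macro with the other leaves PTRS_PER_PGD unchanged.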
Cc: James Morse <james.morse@....com>
Cc: Mark Rutland <mark.rutland@....com>
Cc: Will Deacon <will@...nel.org>
Cc: Steve Capper <steve.capper@....com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Ard Biesheuvel <ard.biesheuvel@...aro.org>
Cc: linux-kernel@...r.kernel.org
Cc: kexec@...ts.infradead.org
Signed-off-by: Bhupesh Sharma <bhsharma@...hat.com>
---
arch/arm64/include/asm/memory.h | 6 ------
arch/arm64/include/asm/pgtable-hwdef.h | 2 +-
arch/arm64/include/asm/processor.h | 2 +-
3 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c23c47360664..a4f9ca5479b0 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -69,12 +69,6 @@
#define KERNEL_START _text
#define KERNEL_END _end
-#ifdef CONFIG_ARM64_VA_BITS_52
-#define MAX_USER_VA_BITS 52
-#else
-#define MAX_USER_VA_BITS VA_BITS
-#endif
-
/*
* Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
* address space for the shadow region respectively. They can bloat the stack
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 3df60f97da1f..d9fbd433cc17 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -69,7 +69,7 @@
#define PGDIR_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - CONFIG_PGTABLE_LEVELS)
#define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE-1))
-#define PTRS_PER_PGD (1 << (MAX_USER_VA_BITS - PGDIR_SHIFT))
+#define PTRS_PER_PGD (1 << (VA_BITS - PGDIR_SHIFT))
/*
* Section address mask and size definitions.
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 5623685c7d13..586fcd4b1965 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -9,7 +9,7 @@
#define __ASM_PROCESSOR_H
#define KERNEL_DS UL(-1)
-#define USER_DS ((UL(1) << MAX_USER_VA_BITS) - 1)
+#define USER_DS ((UL(1) << VA_BITS) - 1)
/*
* On arm64 systems, unaligned accesses by the CPU are cheap, and so there is
--
2.7.4