Message-ID: <20190509090131.GA130570@gmail.com>
Date: Thu, 9 May 2019 11:01:31 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Yury Norov <yury.norov@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, Andi Kleen <ak@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Dave Hansen <dave.hansen@...el.com>,
Yury Norov <ynorov@...vell.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT
definitions a bit
* Yury Norov <yury.norov@...il.com> wrote:
> __VIRTUAL_MASK_SHIFT is defined twice to the same value in
> arch/x86/include/asm/page_32_types.h. Fix it.
>
> Signed-off-by: Yury Norov <ynorov@...vell.com>
> ---
> arch/x86/include/asm/page_32_types.h | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> index 0d5c739eebd7..9bfac5c80d89 100644
> --- a/arch/x86/include/asm/page_32_types.h
> +++ b/arch/x86/include/asm/page_32_types.h
> @@ -28,6 +28,8 @@
> #define MCE_STACK 0
> #define N_EXCEPTION_STACKS 1
>
> +#define __VIRTUAL_MASK_SHIFT 32
> +
> #ifdef CONFIG_X86_PAE
> /*
> * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> @@ -36,11 +38,8 @@
> * The real limit is still 44 bits.
> */
> #define __PHYSICAL_MASK_SHIFT 52
> -#define __VIRTUAL_MASK_SHIFT 32
> -
> #else /* !CONFIG_X86_PAE */
> #define __PHYSICAL_MASK_SHIFT 32
> -#define __VIRTUAL_MASK_SHIFT 32
> #endif /* CONFIG_X86_PAE */
I think it's clearer to keep them defined right next to where the
physical mask shift is defined.
How about the patch below? It does away with the weird formatting and
cleans up both the comments and the style of the definition:
/*
* 52 bits on PAE is beyond the 44-bit limit imposed by the
* 32-bit long PFNs, but we need the full mask to make sure
* inverted PROT_NONE entries have all the host bits set
* in a guest. The real limit is still 44 bits.
*/
#ifdef CONFIG_X86_PAE
# define __PHYSICAL_MASK_SHIFT 52
# define __VIRTUAL_MASK_SHIFT 32
#else
# define __PHYSICAL_MASK_SHIFT 32
# define __VIRTUAL_MASK_SHIFT 32
#endif
?
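(As a side note, the arithmetic behind the "44 bits" in that comment is
just the 32-bit long PFN plus the 12-bit page offset: 32 + 12 = 44, i.e.
16 TiB of physical address space, while the 52-bit shift covers the full
PAE PTE address field, which is what the inverted PROT_NONE entries
need. Below is a minimal user-space sketch of that arithmetic, not
kernel code, and the mask expression is only assumed to mirror what
arch/x86/include/asm/page_types.h does:)

/*
 * Illustrative user-space sketch, not kernel code: the "real limit"
 * is 32-bit long PFNs plus the 12-bit page offset, while the mask
 * built from __PHYSICAL_MASK_SHIFT still covers the whole 52-bit
 * PAE PTE address field.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT              12      /* 4 KiB pages */
#define BITS_PER_LONG_32        32      /* sizeof(long) * 8 on 32-bit x86 */

#define __PHYSICAL_MASK_SHIFT   52      /* PAE PTE physical address field */
#define __PHYSICAL_MASK         (((uint64_t)1 << __PHYSICAL_MASK_SHIFT) - 1)

int main(void)
{
        /* A PFN stored in a 32-bit long addresses 2^32 pages of 2^12 bytes: */
        int real_limit = BITS_PER_LONG_32 + PAGE_SHIFT;     /* 32 + 12 = 44 */

        printf("real physical limit: %d bits = %llu TiB\n",
               real_limit, (1ULL << real_limit) >> 40);

        /* The mask keeps all 52 address bits, so an inverted (PROT_NONE)
         * entry has every host physical address bit set: */
        printf("__PHYSICAL_MASK = 0x%llx\n",
               (unsigned long long)__PHYSICAL_MASK);

        return 0;
}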
Thanks,
Ingo
===============>
From: Ingo Molnar <mingo@...nel.org>
Date: Thu, 9 May 2019 10:59:44 +0200
Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
arch/x86/include/asm/page_32_types.h | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
index 565ad755c785..009e96d4b6d4 100644
--- a/arch/x86/include/asm/page_32_types.h
+++ b/arch/x86/include/asm/page_32_types.h
@@ -26,20 +26,19 @@
#define N_EXCEPTION_STACKS 1
-#ifdef CONFIG_X86_PAE
/*
- * This is beyond the 44 bit limit imposed by the 32bit long pfns,
- * but we need the full mask to make sure inverted PROT_NONE
- * entries have all the host bits set in a guest.
- * The real limit is still 44 bits.
+ * 52 bits on PAE is beyond the 44-bit limit imposed by the
+ * 32-bit long PFNs, but we need the full mask to make sure
+ * inverted PROT_NONE entries have all the host bits set
+ * in a guest. The real limit is still 44 bits.
*/
-#define __PHYSICAL_MASK_SHIFT 52
-#define __VIRTUAL_MASK_SHIFT 32
-
-#else /* !CONFIG_X86_PAE */
-#define __PHYSICAL_MASK_SHIFT 32
-#define __VIRTUAL_MASK_SHIFT 32
-#endif /* CONFIG_X86_PAE */
+#ifdef CONFIG_X86_PAE
+# define __PHYSICAL_MASK_SHIFT 52
+# define __VIRTUAL_MASK_SHIFT 32
+#else
+# define __PHYSICAL_MASK_SHIFT 32
+# define __VIRTUAL_MASK_SHIFT 32
+#endif
/*
* Kernel image size is limited to 512 MB (see in arch/x86/kernel/head_32.S)