Message-ID: <tip-1bdb67e5aa2d5d43c48cb7d93393fcba276c9e71@git.kernel.org>
Date: Wed, 17 Apr 2019 07:15:16 -0700
From: tip-bot for Thomas Gleixner <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: sean.j.christopherson@...el.com, mingo@...hat.com, x86@...nel.org,
mingo@...nel.org, bp@...e.de, linux-kernel@...r.kernel.org,
tglx@...utronix.de, luto@...nel.org, hpa@...or.com,
jpoimboe@...hat.com
Subject: [tip:x86/irq] x86/exceptions: Enable IST guard pages
Commit-ID: 1bdb67e5aa2d5d43c48cb7d93393fcba276c9e71
Gitweb: https://git.kernel.org/tip/1bdb67e5aa2d5d43c48cb7d93393fcba276c9e71
Author: Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Sun, 14 Apr 2019 17:59:56 +0200
Committer: Borislav Petkov <bp@...e.de>
CommitDate: Wed, 17 Apr 2019 15:05:32 +0200
x86/exceptions: Enable IST guard pages
All usage sites that expected the exception stacks in the CPU entry
area to be mapped linearly have been fixed up. Enable guard pages between
the IST stacks.
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Borislav Petkov <bp@...e.de>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: x86-ml <x86@...nel.org>
Link: https://lkml.kernel.org/r/20190414160145.349862042@linutronix.de
---
arch/x86/include/asm/cpu_entry_area.h | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
index 310eeb62d418..9c96406e6d2b 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -26,13 +26,9 @@ struct exception_stacks {
ESTACKS_MEMBERS(0)
};
-/*
- * The effective cpu entry area mapping with guard pages. Guard size is
- * zero until the code which makes assumptions about linear mappings is
- * cleaned up.
- */
+/* The effective cpu entry area mapping with guard pages. */
struct cea_exception_stacks {
- ESTACKS_MEMBERS(0)
+ ESTACKS_MEMBERS(PAGE_SIZE)
};
/*