Message-Id: <20171108194659.71951AEE@viggo.jf.intel.com>
Date: Wed, 08 Nov 2017 11:46:59 -0800
From: Dave Hansen <dave.hansen@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, dave.hansen@...ux.intel.com,
moritz.lipp@...k.tugraz.at, daniel.gruss@...k.tugraz.at,
michael.schwarz@...k.tugraz.at, richard.fellner@...dent.tugraz.at,
luto@...nel.org, torvalds@...ux-foundation.org,
keescook@...gle.com, hughd@...gle.com, x86@...nel.org
Subject: [PATCH 07/30] x86, kaiser: mark percpu data structures required for entry/exit

These patches are based on work from a team at Graz University of
Technology posted here: https://github.com/IAIK/KAISER

The KAISER approach keeps two copies of the page tables: one for running
in the kernel and one for running userspace.  But there are a few
structures that are needed for switching in and out of the kernel, and
a good subset of *those* are per-cpu data.
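
(For those without the earlier patches in this series handy: the
*_USER_MAPPED variants used below are, roughly, the stock per-cpu
macros plus an extra linker-section suffix, so the mapping code can
later walk that section and add everything in it to the user copy of
the page tables.  The sketch below is illustrative only -- the section
name and exact macro bodies here are assumptions, not the series'
actual definitions, which live in an earlier patch:

#define USER_MAPPED_SECTION "..user_mapped"

/*
 * Just like DECLARE/DEFINE_PER_CPU_PAGE_ALIGNED(), except the
 * variable lands in a dedicated "user mapped" per-cpu subsection
 * that KAISER can find and map into the user page tables.
 */
#define DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name)	\
	DECLARE_PER_CPU_SECTION(type, name,			\
		USER_MAPPED_SECTION"..page_aligned")		\
	__aligned(PAGE_SIZE)

#define DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(type, name)	\
	DEFINE_PER_CPU_SECTION(type, name,			\
		USER_MAPPED_SECTION"..page_aligned")		\
	__aligned(PAGE_SIZE)

The data itself is unchanged; only where the linker puts it changes.)
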
Here's a short summary of the things we map to userspace:

 * The gdt_page's virtual address is what the LGDT instruction points
   the CPU at.  It is needed to define the segments, and is deeply
   required by the CPU to run.
 * cpu_tss tells the CPU, among other things, where the new stacks are
   after user<->kernel transitions.  It is needed by the CPU to make
   ring transitions.
 * exception_stacks are needed at interrupt and exception entry so
   that we have a place to store data and, among other things, to get
   a free register with which to load the kernel CR3 (a sketch of why
   follows this list).
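
To expand on that last point: CR3 can only be read or written through
a general-purpose register, so before the entry code can switch to the
kernel page tables it must free a register up, and whatever it spills
that register to has to be mapped in the user page tables.  A rough
sketch in C (the mask name is hypothetical, and the real entry code
has to do this in assembly before any C can run):

/*
 * Illustrative only -- not this series' actual code.  The "r"
 * constraints are the point: %cr3 cannot take a memory operand
 * or an immediate, only a general-purpose register.
 */
#define KAISER_PGTABLE_SWITCH_MASK (1UL << PAGE_SHIFT) /* hypothetical */

static inline void kaiser_switch_to_kernel_cr3(void)
{
	unsigned long cr3;

	asm volatile("mov %%cr3, %0" : "=r" (cr3));
	cr3 &= ~KAISER_PGTABLE_SWITCH_MASK;	/* select the kernel copy */
	asm volatile("mov %0, %%cr3" : : "r" (cr3) : "memory");
}

On the real entry paths there is no free register and no usable stack
yet, which is why cpu_tss and exception_stacks above have to be
user-mapped in the first place.
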
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Moritz Lipp <moritz.lipp@...k.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@...k.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@...k.tugraz.at>
Cc: Richard Fellner <richard.fellner@...dent.tugraz.at>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Kees Cook <keescook@...gle.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: x86@...nel.org

---

 b/arch/x86/include/asm/desc.h      |    2 +-
 b/arch/x86/include/asm/processor.h |    2 +-
 b/arch/x86/kernel/cpu/common.c     |    4 ++--
 b/arch/x86/kernel/process.c        |    2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff -puN arch/x86/include/asm/desc.h~kaiser-prep-x86-percpu-user-mapped arch/x86/include/asm/desc.h
--- a/arch/x86/include/asm/desc.h~kaiser-prep-x86-percpu-user-mapped 2017-11-08 10:45:29.252681395 -0800
+++ b/arch/x86/include/asm/desc.h 2017-11-08 10:45:29.261681395 -0800
@@ -45,7 +45,7 @@ struct gdt_page {
struct desc_struct gdt[GDT_ENTRIES];
} __attribute__((aligned(PAGE_SIZE)));

-DECLARE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page);
+DECLARE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page);

/* Provide the original GDT */
static inline struct desc_struct *get_cpu_gdt_rw(unsigned int cpu)
diff -puN arch/x86/include/asm/processor.h~kaiser-prep-x86-percpu-user-mapped arch/x86/include/asm/processor.h
--- a/arch/x86/include/asm/processor.h~kaiser-prep-x86-percpu-user-mapped 2017-11-08 10:45:29.254681395 -0800
+++ b/arch/x86/include/asm/processor.h 2017-11-08 10:45:29.261681395 -0800
@@ -346,7 +346,7 @@ struct tss_struct {
unsigned long SYSENTER_stack[64];
} ____cacheline_aligned;

-DECLARE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss);
+DECLARE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss);

/*
* sizeof(unsigned long) coming from an extra "long" at the end
diff -puN arch/x86/kernel/cpu/common.c~kaiser-prep-x86-percpu-user-mapped arch/x86/kernel/cpu/common.c
--- a/arch/x86/kernel/cpu/common.c~kaiser-prep-x86-percpu-user-mapped 2017-11-08 10:45:29.256681395 -0800
+++ b/arch/x86/kernel/cpu/common.c 2017-11-08 10:45:29.262681395 -0800
@@ -98,7 +98,7 @@ static const struct cpu_dev default_cpu
static const struct cpu_dev *this_cpu = &default_cpu;

-DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(struct gdt_page, gdt_page) = { .gdt = {
#ifdef CONFIG_X86_64
/*
* We need valid kernel segments for data and code in long mode too
@@ -1343,7 +1343,7 @@ static const unsigned int exception_stac
[DEBUG_STACK - 1] = DEBUG_STKSZ
};

-static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
+DEFINE_PER_CPU_PAGE_ALIGNED_USER_MAPPED(char, exception_stacks
[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);

/* May not be marked __init: used by software suspend */
diff -puN arch/x86/kernel/process.c~kaiser-prep-x86-percpu-user-mapped arch/x86/kernel/process.c
--- a/arch/x86/kernel/process.c~kaiser-prep-x86-percpu-user-mapped 2017-11-08 10:45:29.257681395 -0800
+++ b/arch/x86/kernel/process.c 2017-11-08 10:45:29.262681395 -0800
@@ -46,7 +46,7 @@
* section. Since TSS's are completely CPU-local, we want them
* on exact cacheline boundaries, to eliminate cacheline ping-pong.
*/
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
+__visible DEFINE_PER_CPU_SHARED_ALIGNED_USER_MAPPED(struct tss_struct, cpu_tss) = {
.x86_tss = {
/*
* .sp0 is only used when entering ring 0 from a lower
_