Message-Id: <20200423232207.5797-12-sashal@kernel.org>
Date: Thu, 23 Apr 2020 19:22:00 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, tglx@...utronix.de, bp@...en8.de,
luto@...nel.org
Cc: hpa@...or.com, dave.hansen@...el.com, tony.luck@...el.com,
ak@...ux.intel.com, ravi.v.shankar@...el.com,
chang.seok.bae@...el.com, Sasha Levin <sashal@...nel.org>
Subject: [PATCH v10 11/18] x86/fsgsbase/64: Use FSGSBASE in switch_to() if available
From: Andy Lutomirski <luto@...nel.org>
With the new FSGSBASE instructions, FS/GS base can be efficiently read
and written in __switch_to(). Use that capability to preserve the full
state.
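
For reference, the helpers this patch relies on boil down to single
instructions. A minimal sketch follows (the real definitions come from
earlier patches in this series, in arch/x86/include/asm/fsgsbase.h, so
treat this as illustrative rather than authoritative):

static __always_inline unsigned long rdfsbase(void)
{
	unsigned long fsbase;

	/* RDFSBASE reads the FS base directly: no MSR, no serialization. */
	asm volatile("rdfsbase %0" : "=r" (fsbase) :: "memory");

	return fsbase;
}

static __always_inline void wrfsbase(unsigned long fsbase)
{
	/* WRFSBASE writes the FS base directly. */
	asm volatile("wrfsbase %0" :: "r" (fsbase) : "memory");
}

x86_gsbase_read_cpu_inactive() is the analogous helper for the user GS
base, which is the "inactive" base behind SWAPGS while running in the
kernel.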
This will enable user code to do whatever it wants with the new
instructions without any kernel-induced gotchas. (There can still be
architectural gotchas: movl %gs,%eax; movl %eax,%gs may change GS base
if WRGSBASE was used, but users are expected to read the CPU manual
before doing things like that.)
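
To make that gotcha concrete, here is a hypothetical user-space demo
(it assumes a CPU with FSGSBASE, a kernel that sets CR4.FSGSBASE, and
gcc -mfsgsbase; _writegsbase_u64()/_readgsbase_u64() are the compiler
intrinsics for WRGSBASE/RDGSBASE):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
	_writegsbase_u64(0x12345000);	/* WRGSBASE: set GS base directly */

	/*
	 * Reloading the GS selector refreshes the hidden base from the
	 * descriptor. With the usual null selector, most CPUs zero the
	 * base here, discarding the WRGSBASE value ("may" above, because
	 * some CPUs preserve the base on a null selector load).
	 */
	asm volatile("movl %%gs, %%eax\n\t"
		     "movl %%eax, %%gs" ::: "eax", "memory");

	printf("GS base after selector reload: %#llx\n",
	       _readgsbase_u64());
	return 0;
}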
This is a considerable speedup. It seems to save about 100 cycles per
context switch compared to the baseline 4.6-rc1 behavior on a Skylake
laptop.
[ chang: A 5-10% performance improvement was seen in a context-switch
  benchmark that ran threads with different FS/GS base values, compared
  to the baseline 4.16 kernel. ]
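
The benchmark itself is not included in this posting; a rough sketch of
the idea, with all names and parameters hypothetical, is to pin two
threads to one CPU, give each a distinct GS base, and ping-pong via
sched_yield() (build with -pthread -mfsgsbase on an FSGSBASE-enabled
CPU and kernel):

#define _GNU_SOURCE
#include <immintrin.h>
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000L

static void *worker(void *arg)
{
	cpu_set_t set;

	/* Same CPU for both threads, so every yield is a context switch. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);

	/* A distinct GS base per thread exercises the switch_to() path. */
	_writegsbase_u64((uintptr_t)arg);

	for (long i = 0; i < ITERS; i++)
		sched_yield();
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	pthread_create(&t1, NULL, worker, (void *)0x1000);
	pthread_create(&t2, NULL, worker, (void *)0x2000);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	clock_gettime(CLOCK_MONOTONIC, &b);

	printf("~%.0f ns per yield\n",
	       ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) /
	       (2.0 * ITERS));
	return 0;
}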
Signed-off-by: Andy Lutomirski <luto@...nel.org>
Signed-off-by: Chang S. Bae <chang.seok.bae@...el.com>
Reviewed-by: Tony Luck <tony.luck@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Tony Luck <tony.luck@...el.com>
Cc: Andi Kleen <ak@...ux.intel.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
arch/x86/kernel/process_64.c | 34 ++++++++++++++++++++++++++++------
1 file changed, 28 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index aaa65f284b9b9..e066750be89a0 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -199,8 +199,18 @@ static __always_inline void save_fsgs(struct task_struct *task)
{
savesegment(fs, task->thread.fsindex);
savesegment(gs, task->thread.gsindex);
- save_base_legacy(task, task->thread.fsindex, FS);
- save_base_legacy(task, task->thread.gsindex, GS);
+ if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
+ /*
+ * If FSGSBASE is enabled, we can't make any useful guesses
+ * about the base, and user code expects us to save the current
+ * value. Fortunately, reading the base directly is efficient.
+ */
+ task->thread.fsbase = rdfsbase();
+ task->thread.gsbase = x86_gsbase_read_cpu_inactive();
+ } else {
+ save_base_legacy(task, task->thread.fsindex, FS);
+ save_base_legacy(task, task->thread.gsindex, GS);
+ }
}
#if IS_ENABLED(CONFIG_KVM)
@@ -279,10 +289,22 @@ static __always_inline void load_seg_legacy(unsigned short prev_index,
static __always_inline void x86_fsgsbase_load(struct thread_struct *prev,
struct thread_struct *next)
{
- load_seg_legacy(prev->fsindex, prev->fsbase,
- next->fsindex, next->fsbase, FS);
- load_seg_legacy(prev->gsindex, prev->gsbase,
- next->gsindex, next->gsbase, GS);
+ if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
+ /* Update the FS and GS selectors if they could have changed. */
+ if (unlikely(prev->fsindex || next->fsindex))
+ loadseg(FS, next->fsindex);
+ if (unlikely(prev->gsindex || next->gsindex))
+ loadseg(GS, next->gsindex);
+
+ /* Update the bases. */
+ wrfsbase(next->fsbase);
+ x86_gsbase_write_cpu_inactive(next->gsbase);
+ } else {
+ load_seg_legacy(prev->fsindex, prev->fsbase,
+ next->fsindex, next->fsbase, FS);
+ load_seg_legacy(prev->gsindex, prev->gsbase,
+ next->gsindex, next->gsbase, GS);
+ }
}
static unsigned long x86_fsgsbase_read_task(struct task_struct *task,
--
2.20.1