lists.openwall.net - Open Source and information security mailing list archives
Date: Thu, 26 Oct 2017 14:00:07 -0400
From: Brian Gerst <brgerst@...il.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: X86 ML <x86@...nel.org>, Borislav Petkov <bpetkov@...e.de>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Dave Hansen <dave.hansen@...el.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 10/18] x86/asm/32: Pull MSR_IA32_SYSENTER_CS update code
 out of native_load_sp0()

On Thu, Oct 26, 2017 at 4:26 AM, Andy Lutomirski <luto@...nel.org> wrote:
> This causes the MSR_IA32_SYSENTER_CS write to move out of the
> paravirt hook.  This shouldn't affect Xen PV: Xen already ignores
> MSR_IA32_SYSENTER_CS writes.  In any event, Xen doesn't support
> vm86() in a useful way.
>
> Note to any potential backporters: This patch won't break lguest, as
> lguest didn't have any SYSENTER support at all.
>
> Signed-off-by: Andy Lutomirski <luto@...nel.org>
> ---
>  arch/x86/include/asm/processor.h |  7 -------
>  arch/x86/include/asm/switch_to.h | 11 +++++++++++
>  arch/x86/kernel/process_32.c     |  1 +
>  arch/x86/kernel/process_64.c     |  2 +-
>  arch/x86/kernel/vm86_32.c        |  6 +++++-
>  5 files changed, 18 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index b390ff76e58f..0167e3e35a57 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -520,13 +520,6 @@ static inline void
>  native_load_sp0(struct tss_struct *tss, struct thread_struct *thread)
>  {
>         tss->x86_tss.sp0 = thread->sp0;
> -#ifdef CONFIG_X86_32
> -       /* Only happens when SEP is enabled, no need to test "SEP"arately: */
> -       if (unlikely(tss->x86_tss.ss1 != thread->sysenter_cs)) {
> -               tss->x86_tss.ss1 = thread->sysenter_cs;
> -               wrmsr(MSR_IA32_SYSENTER_CS, thread->sysenter_cs, 0);
> -       }
> -#endif
>  }
>
>  static inline void native_swapgs(void)
> diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
> index fcc5cd387fd1..f3fa19925ae1 100644
> --- a/arch/x86/include/asm/switch_to.h
> +++ b/arch/x86/include/asm/switch_to.h
> @@ -72,4 +72,15 @@ do {                                                                    \
>         ((last) = __switch_to_asm((prev), (next)));                     \
>  } while (0)
>
> +#ifdef CONFIG_X86_32
> +static inline void refresh_sysenter_cs(struct thread_struct *thread)
> +{
> +       /* Only happens when SEP is enabled, no need to test "SEP"arately: */
> +       if (unlikely(this_cpu_read(cpu_tss.x86_tss.ss1) == thread->sysenter_cs))
> +               return;
> +
> +       this_cpu_write(cpu_tss.x86_tss.ss1, thread->sysenter_cs);
> +}
> +#endif
> +
>  #endif /* _ASM_X86_SWITCH_TO_H */
> diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
> index 11966251cd42..84d6c9f554d0 100644
> --- a/arch/x86/kernel/process_32.c
> +++ b/arch/x86/kernel/process_32.c
> @@ -287,6 +287,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
>          * current_thread_info().
>          */
>         load_sp0(tss, next);
> +       refresh_sysenter_cs(next);  /* in case prev or next is vm86 */
>         this_cpu_write(cpu_current_top_of_stack,
>                        (unsigned long)task_stack_page(next_p) +
>                        THREAD_SIZE);
> diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
> index 302e7b2572d1..a6ff6d1a0110 100644
> --- a/arch/x86/kernel/process_64.c
> +++ b/arch/x86/kernel/process_64.c
> @@ -464,7 +464,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
>          */
>         this_cpu_write(current_task, next_p);
>
> -       /* Reload esp0 and ss1.  This changes current_thread_info(). */
> +       /* Reload sp0. */
>         load_sp0(tss, next);
>
>         /*
> diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
> index 7924a5356c8a..5bc1c3ab6287 100644
> --- a/arch/x86/kernel/vm86_32.c
> +++ b/arch/x86/kernel/vm86_32.c
> @@ -54,6 +54,7 @@
>  #include <asm/irq.h>
>  #include <asm/traps.h>
>  #include <asm/vm86.h>
> +#include <asm/switch_to.h>
>
>  /*
>   * Known problems:
> @@ -149,6 +150,7 @@ void save_v86_state(struct kernel_vm86_regs *regs, int retval)
>         tsk->thread.sp0 = vm86->saved_sp0;
>         tsk->thread.sysenter_cs = __KERNEL_CS;
>         load_sp0(tss, &tsk->thread);
> +       refresh_sysenter_cs(&tsk->thread);
>         vm86->saved_sp0 = 0;
>         put_cpu();
>
> @@ -368,8 +370,10 @@ static long do_sys_vm86(struct vm86plus_struct __user *user_vm86, bool plus)
>         /* make room for real-mode segments */
>         tsk->thread.sp0 += 16;
>
> -       if (static_cpu_has(X86_FEATURE_SEP))
> +       if (static_cpu_has(X86_FEATURE_SEP)) {
>                 tsk->thread.sysenter_cs = 0;
> +               refresh_sysenter_cs(&tsk->thread);
> +       }
>
>         load_sp0(tss, &tsk->thread);
>         put_cpu();

The MSR update is missing in the new version.

--
Brian Gerst