Message-ID: <20190214101429.GD32494@hirez.programming.kicks-ass.net>
Date: Thu, 14 Feb 2019 11:14:29 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: Julien Thierry <julien.thierry@....com>,
Will Deacon <will.deacon@....com>,
Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, mingo@...hat.com,
catalin.marinas@....com, james.morse@....com, hpa@...or.com,
valentin.schneider@....com, brgerst@...il.com, jpoimboe@...hat.com,
luto@...nel.org, bp@...en8.de, dvlasenk@...hat.com,
torvalds@...ux-foundation.org, tglx@...utronix.de
Subject: [PATCH] sched/x86: Save [ER]FLAGS on context switch
On Wed, Feb 13, 2019 at 02:49:47PM -0800, Andy Lutomirski wrote:

> Do we need to backport this thing?

Possibly, just to be safe.

> The problem can’t be too widespread or we would have heard of it before.

Yes, so far we've been lucky.
---
Subject: sched/x86: Save [ER]FLAGS on context switch
From: Peter Zijlstra <peterz@...radead.org>
Date: Thu Feb 14 10:30:52 CET 2019
Effectively revert commit:

  2c7577a75837 ("sched/x86_64: Don't save flags on context switch")

Specifically because SMAP uses FLAGS.AC, which invalidates the claim
that the kernel has clean flags.
In particular, while preemption from interrupt return is fine (the
IRET frame on the exception stack contains FLAGS), it breaks any code
that does synchronous scheduling, including preempt_enable().
This has become a significant issue ever since commit:

  5b24a7a2aa20 ("Add 'unsafe' user access functions for batched accesses")

provided a means of having 'normal' C code between STAC / CLAC,
exposing the FLAGS.AC state. So far this hasn't led to trouble;
however, fix it before it comes apart.
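To make the failure mode concrete, a minimal sketch (illustrative
only; the function name and the 'efault' label are made up, the
uaccess helpers are the ones referenced above):

/* Hypothetical example, not part of this patch. */
int store_val(int __user *ptr, int val)
{
	if (!user_access_begin(ptr, sizeof(*ptr)))	/* STAC: EFLAGS.AC=1 */
		return -EFAULT;

	unsafe_put_user(val, ptr, efault);	/* 'normal' C code window */

	/*
	 * Anything in this window that ends up in schedule() -- say a
	 * preempt_enable() observing a pending reschedule -- switches
	 * tasks with AC still set.  Without the pushf/popf below, the
	 * incoming task runs with SMAP effectively disabled.
	 */

	user_access_end();			/* CLAC: EFLAGS.AC=0 */
	return 0;

efault:
	user_access_end();
	return -EFAULT;
}
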
Fixes: 5b24a7a2aa20 ("Add 'unsafe' user access functions for batched accesses")
Acked-by: Andy Lutomirski <luto@...capital.net>
Reported-by: Julien Thierry <julien.thierry@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 arch/x86/entry/entry_32.S        |    2 ++
 arch/x86/entry/entry_64.S        |    2 ++
 arch/x86/include/asm/switch_to.h |    1 +
 3 files changed, 5 insertions(+)
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -650,6 +650,7 @@ ENTRY(__switch_to_asm)
 	pushl	%ebx
 	pushl	%edi
 	pushl	%esi
+	pushfl
 
 	/* switch stack */
 	movl	%esp, TASK_threadsp(%eax)
@@ -672,6 +673,7 @@ ENTRY(__switch_to_asm)
 #endif
 
 	/* restore callee-saved registers */
+	popfl
 	popl	%esi
 	popl	%edi
 	popl	%ebx
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -291,6 +291,7 @@ ENTRY(__switch_to_asm)
 	pushq	%r13
 	pushq	%r14
 	pushq	%r15
+	pushfq
 
 	/* switch stack */
 	movq	%rsp, TASK_threadsp(%rdi)
@@ -313,6 +314,7 @@ ENTRY(__switch_to_asm)
 #endif
 
 	/* restore callee-saved registers */
+	popfq
 	popq	%r15
 	popq	%r14
 	popq	%r13
--- a/arch/x86/include/asm/switch_to.h
+++ b/arch/x86/include/asm/switch_to.h
@@ -40,6 +40,7 @@ asmlinkage void ret_from_fork(void);
  * order of the fields must match the code in __switch_to_asm().
  */
 struct inactive_task_frame {
+	unsigned long flags;
 #ifdef CONFIG_X86_64
 	unsigned long r15;
 	unsigned long r14;
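
For completeness, a sketch of why the new field must come first
(member names follow the hunk above; the tail of the structure is
elided, this is an illustration only):

/*
 * The stack grows down, so the value pushed last (pushf{l,q}) sits
 * at the lowest address, which struct inactive_task_frame overlays
 * as its first member.  That is also why popf{l,q} must be the
 * first pop on the restore path.
 */
struct inactive_task_frame_sketch {
	unsigned long flags;	/* pushed last by pushfq -> lowest address */
	unsigned long r15;	/* pushed just before pushfq (64-bit) */
	/* ... remaining callee-saved registers, then the return address */
};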