Message-Id: <20181004140547.13014-6-bigeasy@linutronix.de>
Date: Thu, 4 Oct 2018 16:05:41 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: linux-kernel@...r.kernel.org
Cc: x86@...nel.org, Andy Lutomirski <luto@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
kvm@...r.kernel.org, "Jason A. Donenfeld" <Jason@...c4.com>,
Rik van Riel <riel@...riel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: [PATCH 05/11] x86/fpu: set PKRU state for kernel threads
The PKRU value is not set for kernel threads because they do not have
the ->initialized value set. As a result, a kernel thread runs with a
random PKRU value which it inherits from the previous task.
Paolo Bonzini suggested setting it for kernel threads, too, because it
might be a fix.
I *think* this is not strictly required because kernel threads do not
copy data to/from userland and have no access to any userspace mm in
general.
However, there is use_mm(): if a kernel thread gains an mm via
use_mm(), it has no matching PKRU value because PKRU is per-thread
state. It has been suggested to use 0 as the PKRU value, but that would
bypass PKRU protection entirely.
Set the initial (default) PKRU value for kernel threads.
Suggested-by: Paolo Bonzini <pbonzini@...hat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
arch/x86/include/asm/fpu/internal.h | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 956d967ca824a..4ecaf4d22954e 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -14,6 +14,7 @@
#include <linux/compat.h>
#include <linux/sched.h>
#include <linux/slab.h>
+#include <linux/pkeys.h>
#include <asm/user.h>
#include <asm/fpu/api.h>
@@ -573,20 +574,23 @@ static inline void switch_fpu_finish(struct fpu *new_fpu, int cpu)
bool load_fpu;
load_fpu = static_cpu_has(X86_FEATURE_FPU) && new_fpu->initialized;
- if (!load_fpu)
- return;
-
- __fpregs_load_activate(new_fpu, cpu);
-
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
if (static_cpu_has(X86_FEATURE_OSPKE)) {
struct pkru_state *pk;
- pk = __raw_xsave_addr(&new_fpu->state.xsave, XFEATURE_PKRU);
- if (pk->pkru != __read_pkru())
- __write_pkru(pk->pkru);
+ if (!load_fpu) {
+ pkru_set_init_value();
+ } else {
+ pk = __raw_xsave_addr(&new_fpu->state.xsave,
+ XFEATURE_PKRU);
+ if (pk->pkru != __read_pkru())
+ __write_pkru(pk->pkru);
+ }
}
#endif
+ if (!load_fpu)
+ return;
+ __fpregs_load_activate(new_fpu, cpu);
}
/*
--
2.19.0