Message-Id: <20220127175505.851391-15-ira.weiny@intel.com>
Date: Thu, 27 Jan 2022 09:54:35 -0800
From: ira.weiny@...el.com
To: Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Dan Williams <dan.j.williams@...el.com>
Cc: Ira Weiny <ira.weiny@...el.com>, Fenghua Yu <fenghua.yu@...el.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH V8 14/44] x86/pkeys: Introduce pks_write_pkrs()
From: Ira Weiny <ira.weiny@...el.com>
Writing to MSRs is inefficient. Even though the underlying
WRMSR(MSR_IA32_PKRS) is not serializing (see below), unnecessary writes to
the MSR should be avoided. This is especially true because the value of the
PKS protections is unlikely to change from the default very often.
Introduce pks_write_pkrs() which avoids writing the MSR if the pkrs
value has not changed for the CPU. Do this by maintaining a per-cpu
cache. Protect the use of the cached value from preemption by
restricting pks_write_pkrs() to non-preemptible context. Further
restrict its use to callers which have checked X86_FEATURE_PKS.
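For illustration only (not part of this patch), a caller in this file would
be expected to follow the pattern sketched below; example_update_pkrs() is a
made-up name used purely to show the X86_FEATURE_PKS check and the preemption
handling required by pks_write_pkrs():

	/* Hypothetical caller -- shows the calling convention only */
	static void example_update_pkrs(u32 new_pkrs)
	{
		if (!cpu_feature_enabled(X86_FEATURE_PKS))
			return;

		preempt_disable();
		pks_write_pkrs(new_pkrs);
		preempt_enable();
	}
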
The initial value of the MSR is preserved on INIT. While unlikely, the
PKS_INIT_VALUE may be 0 someday, which would prevent pks_write_pkrs()
from updating the MSR because the per-cpu cache also starts at 0. Keep
the direct MSR write in pks_setup() to ensure the MSR is initialized at
least once. Then call pks_write_pkrs() to bring the per-cpu cache value
in sync with the MSR.
It should be noted that the underlying WRMSR(MSR_IA32_PKRS) is not
serializing but still maintains ordering properties similar to WRPKRU.
The current SDM section on PKRS needs updating, but the behavior should
match that of WRPKRU. To quote from the WRPKRU text:
WRPKRU will never execute transiently. Memory accesses affected
by PKRU register will not execute (even transiently) until all
prior executions of WRPKRU have completed execution and updated
the PKRU register.
Suggested-by: Dave Hansen <dave.hansen@...el.com>
Signed-off-by: Ira Weiny <ira.weiny@...el.com>
---
Changes for V8
From Thomas
Remove get/put_cpu_ptr() and make this a 'lower level'
call. This makes it preemption unsafe, but it is called
mostly where preemption is already disabled. Make this a
precondition of the call; callers which need to can
disable preemption themselves.
Add lockdep assert for preemption
Ensure MSR gets written even if the PKS_INIT_VALUE is 0.
Completely re-write the commit message.
s/write_pkrs/pks_write_pkrs/
Split this off into a singular patch
Changes for V7
Create a dynamic pkrs_initial_value in early init code.
Clean up comments
Add comment to macro guard
---
arch/x86/mm/pkeys.c | 41 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index a5b5b86e97ce..3dce99ef4127 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -209,15 +209,56 @@ u32 pkey_update_pkval(u32 pkval, int pkey, u32 accessbits)
#ifdef CONFIG_ARCH_ENABLE_SUPERVISOR_PKEYS
+static DEFINE_PER_CPU(u32, pkrs_cache);
+
+/*
+ * pks_write_pkrs() - Write the pkrs of the current CPU
+ * @new_pkrs: New value to write to the current CPU register
+ *
+ * Optimizes the MSR writes by maintaining a per cpu cache.
+ *
+ * Context: must be called with preemption disabled
+ * Context: must only be called if PKS is enabled
+ *
+ * It should also be noted that the underlying WRMSR(MSR_IA32_PKRS) is not
+ * serializing but still maintains ordering properties similar to WRPKRU.
+ * The current SDM section on PKRS needs updating but should be the same as
+ * that of WRPKRU. Quote from the WRPKRU text:
+ *
+ * WRPKRU will never execute transiently. Memory accesses
+ * affected by PKRU register will not execute (even transiently)
+ * until all prior executions of WRPKRU have completed execution
+ * and updated the PKRU register.
+ */
+static inline void pks_write_pkrs(u32 new_pkrs)
+{
+ u32 pkrs = __this_cpu_read(pkrs_cache);
+
+ lockdep_assert_preemption_disabled();
+
+ if (pkrs != new_pkrs) {
+ __this_cpu_write(pkrs_cache, new_pkrs);
+ wrmsrl(MSR_IA32_PKRS, new_pkrs);
+ }
+}
+
/*
* PKS is independent of PKU and either or both may be supported on a CPU.
+ *
+ * Context: must be called with preemption disabled
*/
void pks_setup(void)
{
if (!cpu_feature_enabled(X86_FEATURE_PKS))
return;
+ /*
+ * If the PKS_INIT_VALUE is 0 then pks_write_pkrs() could fail to
+ * initialize the MSR. Do a single write here to ensure the MSR is
+ * written at least one time.
+ */
wrmsrl(MSR_IA32_PKRS, PKS_INIT_VALUE);
+ pks_write_pkrs(PKS_INIT_VALUE);
cr4_set_bits(X86_CR4_PKS);
}
--
2.31.1