Message-Id: <20240329015346.635933-5-chang.seok.bae@intel.com>
Date: Thu, 28 Mar 2024 18:53:36 -0700
From: "Chang S. Bae" <chang.seok.bae@...el.com>
To: linux-kernel@...r.kernel.org,
linux-crypto@...r.kernel.org,
dm-devel@...hat.com
Cc: ebiggers@...nel.org,
luto@...nel.org,
dave.hansen@...ux.intel.com,
tglx@...utronix.de,
bp@...en8.de,
mingo@...nel.org,
x86@...nel.org,
herbert@...dor.apana.org.au,
ardb@...nel.org,
elliott@....com,
dan.j.williams@...el.com,
bernie.keany@...el.com,
charishma1.gairuboyina@...el.com,
chang.seok.bae@...el.com
Subject: [PATCH v9 04/14] x86/asm: Add a wrapper function for the LOADIWKEY instruction

Key Locker introduces a CPU-internal wrapping key that encodes a user key
into a key handle. The key handle is then referenced in place of the
plaintext key.

LOADIWKEY loads a wrapping key into the software-inaccessible CPU state.
It operates only in kernel mode.

The kernel will use this to load a new key at boot time. Establish an
accessor for the feature setup, and define struct iwkey to pass a key
value.
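
For illustration, a caller would use the new helper roughly as below.
This is only a sketch -- the actual boot-time loading code comes in a
later patch; the random key generation and flushing here merely stand
in for it:

  struct iwkey key;

  /* Generate random key material; real code may source it differently. */
  get_random_bytes(&key, sizeof(key));

  kernel_fpu_begin();
  load_xmm_iwkey(&key);
  kernel_fpu_end();

  /* Flush the temporary key storage immediately after loading. */
  memzero_explicit(&key, sizeof(key));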
Signed-off-by: Chang S. Bae <chang.seok.bae@...el.com>
Reviewed-by: Dan Williams <dan.j.williams@...el.com>
---
Changes from v6:
* Massage the changelog -- clarify the reason and the changes a bit.
Changes from v5:
* Fix a typo: kernel_cpu_begin() -> kernel_fpu_begin()
Changes from RFC v2:
* Separate out the code as a new patch.
* Improve the usability with the new struct as an argument. (Dan
Williams)

Previously, Dan questioned the necessity of 'WARN_ON(!irq_fpu_usable())'
in the load_xmm_iwkey() function. The check would be redundant there: the
function comment makes the caller responsible for invoking
kernel_fpu_begin(), and kernel_fpu_begin_mask() already performs that
sanity check.
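
For reference, the check in question lives in kernel_fpu_begin_mask()
(abridged from arch/x86/kernel/fpu/core.c; exact contents vary by kernel
version):

  void kernel_fpu_begin_mask(unsigned int kfpu_mask)
  {
	preempt_disable();
	WARN_ON_FPU(!irq_fpu_usable());
	...
  }
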
---
arch/x86/include/asm/keylocker.h | 25 +++++++++++++++++++++++++
arch/x86/include/asm/special_insns.h | 28 ++++++++++++++++++++++++++++
2 files changed, 53 insertions(+)
create mode 100644 arch/x86/include/asm/keylocker.h
diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
new file mode 100644
index 000000000000..4e731f577c50
--- /dev/null
+++ b/arch/x86/include/asm/keylocker.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_KEYLOCKER_H
+#define _ASM_KEYLOCKER_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/fpu/types.h>
+
+/**
+ * struct iwkey - A temporary wrapping key storage.
+ * @integrity_key:  A 128-bit key used to verify the integrity of
+ *                  key handles.
+ * @encryption_key: A 256-bit encryption key used for wrapping and
+ *                  unwrapping clear text keys.
+ *
+ * This storage should be flushed immediately after being loaded.
+ */
+struct iwkey {
+	struct reg_128_bit integrity_key;
+	struct reg_128_bit encryption_key[2];
+};
+
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_KEYLOCKER_H */
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 2e9fc5c400cd..65267013f1e1 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -9,6 +9,7 @@
#include <linux/errno.h>
#include <linux/irqflags.h>
#include <linux/jump_label.h>
+#include <asm/keylocker.h>
/*
* The compiler should not reorder volatile asm statements with respect to each
@@ -301,6 +302,33 @@ static __always_inline void tile_release(void)
asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0");
}
+/**
+ * load_xmm_iwkey - Load a CPU-internal wrapping key into XMM registers.
+ * @key: A pointer to a struct iwkey containing the key data.
+ *
+ * The caller is responsible for invoking kernel_fpu_begin() beforehand.
+ */
+static inline void load_xmm_iwkey(struct iwkey *key)
+{
+	struct reg_128_bit zeros = { 0 };
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %1, %%xmm1; movdqu %2, %%xmm2;"
+		      :: "m"(key->integrity_key), "m"(key->encryption_key[0]),
+			 "m"(key->encryption_key[1]));
+
+	/*
+	 * 'LOADIWKEY %xmm1,%xmm2' loads a key from XMM0-2 into a
+	 * software-invisible CPU state. With zero in EAX, the CPU
+	 * does not perform hardware randomization and allows key backup.
+	 *
+	 * This instruction is supported by binutils >= 2.36.
+	 */
+	asm volatile (".byte 0xf3,0x0f,0x38,0xdc,0xd1" :: "a"(0));
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;"
+		      :: "m"(zeros));
+}
+
#endif /* __KERNEL__ */
#endif /* _ASM_X86_SPECIAL_INSNS_H */
--
2.34.1