Message-ID: <20220211210255.9820-4-mario.limonciello@amd.com>
Date:   Fri, 11 Feb 2022 15:02:54 -0600
From:   Mario Limonciello <mario.limonciello@....com>
To:     Borislav Petkov <bp@...en8.de>,
        "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>
CC:     Kees Cook <keescook@...omium.org>,
        Thomas Lendacky <Thomas.Lendacky@....com>,
        <hughsient@...il.com>,
        Martin Fernandez <martin.fernandez@...ypsium.com>,
        <linux-kernel@...r.kernel.org>,
        "Brijesh Singh" <brijesh.singh@....com>,
        Mario Limonciello <mario.limonciello@....com>
Subject: [PATCH v2 3/4] x86/cpu: clear SME features when not in use

Currently the SME CPU feature flag reflects whether the CPU supports
the features, but not whether the kernel has actually activated them.

Change this around to clear the features if the kernel is not using
them, so that userspace can determine from `/proc/cpuinfo` whether
they are available and in use.
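
For illustration only (not part of this patch), here is a minimal
userspace sketch of how a consumer could make that determination;
cpuinfo_has_flag() is a hypothetical helper and the code assumes the
usual x86 /proc/cpuinfo layout with a "flags" line per CPU:

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: returns 1 if /proc/cpuinfo lists @flag. */
static int cpuinfo_has_flag(const char *flag)
{
	char line[4096];
	FILE *f = fopen("/proc/cpuinfo", "r");
	int found = 0;

	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		char *tok;

		if (strncmp(line, "flags", 5))
			continue;
		for (tok = strtok(line, " \t\n:"); tok; tok = strtok(NULL, " \t\n:"))
			if (!strcmp(tok, flag)) {
				found = 1;
				break;
			}
		break;	/* the first "flags" line is enough */
	}
	fclose(f);
	return found;
}

int main(void)
{
	printf("sme: %s\n", cpuinfo_has_flag("sme") ? "active" : "not active");
	return 0;
}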

As the feature flag is now cleared on systems where SME isn't active,
use CPUID 0x8000001f to confirm SME availability before calling
`native_wbinvd()`.
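
Again purely for illustration, the same check expressed as a
standalone userspace program using the compiler's <cpuid.h> rather
than the kernel's cpuid_eax() helper; leaf 0x8000001F EAX bit 0
reports SME support independent of whether the kernel enabled it,
which is why it is safe to key the WBINVD on it:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Returns 0 if the extended leaf is not implemented. */
	if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("SME supported by CPU: %s\n", (eax & 1) ? "yes" : "no");
	return 0;
}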

Signed-off-by: Mario Limonciello <mario.limonciello@....com>
---
A concern was raised about whether the assembly generated for
stop_this_cpu() is still safe.  Here is the assembly from compiling
with this patch:

00000000000011d0 <stop_this_cpu>:
    11d0:       e8 00 00 00 00          call   11d5 <stop_this_cpu+0x5>
    11d5:       55                      push   %rbp
    11d6:       48 89 e5                mov    %rsp,%rbp
    11d9:       41 54                   push   %r12
    11db:       53                      push   %rbx
    11dc:       48 83 ec 18             sub    $0x18,%rsp
    11e0:       65 48 8b 04 25 28 00    mov    %gs:0x28,%rax
    11e7:       00 00
    11e9:       48 89 45 e8             mov    %rax,-0x18(%rbp)
    11ed:       31 c0                   xor    %eax,%eax
    11ef:       ff 14 25 00 00 00 00    call   *0x0
    11f6:       e8 00 00 00 00          call   11fb <stop_this_cpu+0x2b>
    11fb:       31 f6                   xor    %esi,%esi
    11fd:       48 c7 c3 00 00 00 00    mov    $0x0,%rbx
    1204:       89 c7                   mov    %eax,%edi
    1206:       e8 00 00 00 00          call   120b <stop_this_cpu+0x3b>
    120b:       e8 00 00 00 00          call   1210 <stop_this_cpu+0x40>
    1210:       e8 00 00 00 00          call   1215 <stop_this_cpu+0x45>
    1215:       41 89 c4                mov    %eax,%r12d
    1218:       3d ff 1f 00 00          cmp    $0x1fff,%eax
    121d:       77 49                   ja     1268 <stop_this_cpu+0x98>
    121f:       4a 03 1c e5 00 00 00    add    0x0(,%r12,8),%rbx
    1226:       00
    1227:       48 89 df                mov    %rbx,%rdi
    122a:       e8 00 00 00 00          call   122f <stop_this_cpu+0x5f>
    122f:       c7 45 d8 1f 00 00 80    movl   $0x8000001f,-0x28(%rbp)
    1236:       48 8d 7d d8             lea    -0x28(%rbp),%rdi
    123a:       48 8d 75 dc             lea    -0x24(%rbp),%rsi
    123e:       c7 45 e0 00 00 00 00    movl   $0x0,-0x20(%rbp)
    1245:       48 8d 55 e0             lea    -0x20(%rbp),%rdx
    1249:       48 8d 4d e4             lea    -0x1c(%rbp),%rcx
    124d:       ff 14 25 00 00 00 00    call   *0x0
    1254:       f6 45 d8 01             testb  $0x1,-0x28(%rbp)
    1258:       74 02                   je     125c <stop_this_cpu+0x8c>
    125a:       0f 09                   wbinvd
    125c:       eb 07                   jmp    1265 <stop_this_cpu+0x95>
    125e:       0f 00 2d 00 00 00 00    verw   0x0(%rip)        # 1265 <stop_this_cpu+0x95>
    1265:       f4                      hlt
    1266:       eb f4                   jmp    125c <stop_this_cpu+0x8c>
    1268:       4c 89 e6                mov    %r12,%rsi
    126b:       48 c7 c7 00 00 00 00    mov    $0x0,%rdi
    1272:       e8 00 00 00 00          call   1277 <stop_this_cpu+0xa7>
    1277:       eb a6                   jmp    121f <stop_this_cpu+0x4f>
    1279:       0f 1f 80 00 00 00 00    nopl   0x0(%rax) 

 arch/x86/kernel/cpu/amd.c | 3 +++
 arch/x86/kernel/process.c | 5 ++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 4edb6f0f628c..9cea774344cb 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -607,6 +607,9 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
 		if (IS_ENABLED(CONFIG_X86_32))
 			goto clear_all;
 
+		if (!sme_me_mask)
+			goto clear_all;
+
 		rdmsrl(MSR_K7_HWCR, msr);
 		if (!(msr & MSR_K7_HWCR_SMMLOCK))
 			goto clear_sev;
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 81d8ef036637..e131d71b3cae 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -765,8 +765,11 @@ void stop_this_cpu(void *dummy)
 	 * without the encryption bit, they don't race each other when flushed
 	 * and potentially end up with the wrong entry being committed to
 	 * memory.
+	 *
+	 * Test the CPUID bit directly because the machine might've cleared
+	 * X86_FEATURE_SME due to cmdline options.
 	 */
-	if (boot_cpu_has(X86_FEATURE_SME))
+	if (cpuid_eax(0x8000001f) & BIT(0))
 		native_wbinvd();
 	for (;;) {
 		/*
-- 
2.34.1
