Message-ID: <96dd3b45-90c1-4454-abe3-61987defd03b@redhat.com>
Date: Wed, 31 Jan 2024 12:58:04 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
 Zixi Chen <zixchen@...hat.com>, Xiaoyao Li <xiaoyao.li@...el.com>,
 Kai Huang <kai.huang@...ux.intel.com>,
 Dave Hansen <dave.hansen@...ux.intel.com>,
 Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...nel.org>,
 x86@...nel.org, stable@...r.kernel.org
Subject: Re: [PATCH] x86/cpu/intel: Detect TME keyid bits before setting MTRR
 mask registers

On 1/31/24 09:39, Kirill A . Shutemov wrote:
> On Tue, Jan 30, 2024 at 07:04:00PM +0100, Paolo Bonzini wrote:
>> MKTME repurposes the high bit of physical address to key id for encryption
>> key and, even though MAXPHYADDR in CPUID[0x80000008] remains the same,
>> the valid bits in the MTRR mask register are based on the reduced number
>> of physical address bits.
>>
>> detect_tme() in arch/x86/kernel/cpu/intel.c detects TME and subtracts
>> it from the total usable physical bits, but it is called too late.
>> Move the call to early_init_intel() so that it is called in setup_arch(),
>> before MTRRs are setup.
>>
>> This fixes boot on some TDX-enabled systems which until now only worked
>> with "disable_mtrr_cleanup".  Without the patch, the values written to
>> the MTRRs mask registers were 52-bit wide (e.g. 0x000fffff_80000800)
>> and the writes failed; with the patch, the values are 46-bit wide,
>> which matches the reduced MAXPHYADDR that is shown in /proc/cpuinfo.
>>
>> Fixes: cb06d8e3d020 ("x86/tme: Detect if TME and MKTME is activated by BIOS", 2018-03-12)
>> Reported-by: Zixi Chen <zixchen@...hat.com>
>> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
>> Cc: Xiaoyao Li <xiaoyao.li@...el.com>
>> Cc: Kai Huang <kai.huang@...ux.intel.com>
>> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
>> Cc: Thomas Gleixner <tglx@...utronix.de>
>> Cc: Ingo Molnar <mingo@...nel.org>
>> Cc: x86@...nel.org
>> Cc: stable@...r.kernel.org
>> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> 
> I've seen this patch before, although by a different author and with a
> different commit message; not sure what is going on.
> 
> I had a concern about that patch and I don't think it was addressed.

Wow, slightly crazy that two people came up with exactly the same patch,
including adding the comment before the moved call.  And yes, this patch
only works on kernels up to 6.6. :/

The commit that moved get_cpu_address_sizes(), i.e. commit
fbf6449f84bf ("x86/sev-es: Set x86_virt_bits to the correct value
straight away, instead of a two-phase approach"), was buggy for AMD
processors; this was noticed in the thread you linked, but never
addressed.  Things work more or less by chance because init_amd()
calls early_init_amd() again, but the x86_phys_bits value remains
wrong for most of the boot process.  The MTRRs are initialized
wrongly as well, but on AMD SME that at least doesn't cause a #GP.
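
For reference, here is a quick userspace sketch of how the PhysMask value
tracks the physical-address width.  The helper name is made up and this is
not how the kernel actually builds the masks, but plugging in 52 vs. 46
bits reproduces the 0x000fffff_80000800 value quoted in the commit message:

#include <stdint.h>
#include <stdio.h>

#define MTRR_PHYSMASK_V	(1ULL << 11)	/* "valid" bit in IA32_MTRR_PHYSMASKn */

/* Build a PhysMask value for a variable range of "size" bytes.  Bits above
 * phys_bits are reserved, so writing a mask computed for 52 bits raises #GP
 * once TME keyid bits have reduced the real MAXPHYADDR to 46. */
static uint64_t mtrr_physmask(unsigned int phys_bits, uint64_t size)
{
	uint64_t addr_mask = ((1ULL << phys_bits) - 1) & ~0xfffULL;

	return (~(size - 1) & addr_mask) | MTRR_PHYSMASK_V;
}

int main(void)
{
	uint64_t size = 2ULL << 30;	/* a 2 GiB variable range */

	printf("52-bit mask: %#018llx\n", (unsigned long long)mtrr_physmask(52, size));
	printf("46-bit mask: %#018llx\n", (unsigned long long)mtrr_physmask(46, size));
	return 0;
}

The first line prints 0x000fffff80000800, the second 0x00003fff80000800;
only the latter can be written once the keyid bits are accounted for.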

I think the correct fix is something like

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 0b97bcde70c6..fbc4e60d027c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1589,6 +1589,7 @@ static void __init early_identify_cpu(
  		get_cpu_vendor(c);
  		get_cpu_cap(c);
  		setup_force_cpu_cap(X86_FEATURE_CPUID);
+		get_cpu_address_sizes(c);
  		cpu_parse_early_param();
  
  		if (this_cpu->c_early_init)
@@ -1601,10 +1602,9 @@ static void __init early_identify_cpu(
  			this_cpu->c_bsp_init(c);
  	} else {
  		setup_clear_cpu_cap(X86_FEATURE_CPUID);
+		get_cpu_address_sizes(c);
  	}
  
-	get_cpu_address_sizes(c);
-
  	setup_force_cpu_cap(X86_FEATURE_ALWAYS);
  
  	cpu_set_bug_bits(c);

on top of which my (or Jeremy's) patch can be applied.  I'll test it and
send a v2 of this patch.
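
To spell out what the reordering buys us, here is a tiny toy model (plain C
with invented names, nothing taken from the real kernel) of the two call
orders in early_identify_cpu():

#include <stdio.h>

/* Toy model only: x86_phys_bits stands in for c->x86_phys_bits,
 * get_cpu_address_sizes() for the CPUID 0x80000008 read, and
 * early_init_vendor() for the keyid-bits reduction done by
 * early_init_intel()/early_init_amd(). */
static unsigned int x86_phys_bits;

static void get_cpu_address_sizes(void) { x86_phys_bits = 52; }
static void early_init_vendor(void)     { x86_phys_bits -= 6;  }

int main(void)
{
	/* Order after fbf6449f84bf: the vendor hook runs first, then the
	 * generic read overwrites whatever it did, so the reduction is lost. */
	x86_phys_bits = 52;
	early_init_vendor();
	get_cpu_address_sizes();
	printf("old order: %u bits\n", x86_phys_bits);	/* 52 */

	/* Order with the hunk above: read CPUID first, then reduce. */
	x86_phys_bits = 52;
	get_cpu_address_sizes();
	early_init_vendor();
	printf("new order: %u bits\n", x86_phys_bits);	/* 46 */
	return 0;
}

It prints 52 for the old order and 46 for the new one, matching the reduced
MAXPHYADDR that /proc/cpuinfo reports.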

Paolo

> See the thread:
> 
> https://lore.kernel.org/all/20231002224752.33qa2lq7q2w4nqws@box
> 

