Message-ID: <YK6E5NnmRpYYDMTA@google.com>
Date: Wed, 26 May 2021 17:27:00 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Pu Wen <puwen@...on.cn>
Cc: x86@...nel.org, joro@...tes.org, thomas.lendacky@....com,
dave.hansen@...ux.intel.com, peterz@...radead.org,
tglx@...utronix.de, mingo@...hat.com, bp@...e.de, hpa@...or.com,
jroedel@...e.de, sashal@...nel.org, gregkh@...uxfoundation.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH] x86/sev: Check whether SEV or SME is supported first

On Wed, May 26, 2021, Pu Wen wrote:
> The first two bits of CPUID leaf 0x8000001F EAX indicate whether SME or
> SEV is supported, respectively. It's better to check whether SEV or SME
> is supported before reading the SEV MSR (0xc0010131) to see whether SEV
> or SME is enabled.
>
> This also avoids the MSR read failure on the first generation Hygon
> Dhyana CPU, which does not support SEV or SME.
>
> Fixes: eab696d8e8b9 ("x86/sev: Do not require Hypervisor CPUID bit for SEV guests")
> Cc: <stable@...r.kernel.org> # v5.10+
> Signed-off-by: Pu Wen <puwen@...on.cn>
> ---
> arch/x86/mm/mem_encrypt_identity.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
> index a9639f663d25..470b20208430 100644
> --- a/arch/x86/mm/mem_encrypt_identity.c
> +++ b/arch/x86/mm/mem_encrypt_identity.c
> @@ -504,10 +504,6 @@ void __init sme_enable(struct boot_params *bp)
>  #define AMD_SME_BIT	BIT(0)
>  #define AMD_SEV_BIT	BIT(1)
>
> -	/* Check the SEV MSR whether SEV or SME is enabled */
> -	sev_status   = __rdmsr(MSR_AMD64_SEV);
> -	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
> -
>  	/*
>  	 * Check for the SME/SEV feature:
>  	 *   CPUID Fn8000_001F[EAX]
> @@ -519,11 +515,16 @@ void __init sme_enable(struct boot_params *bp)
>  	eax = 0x8000001f;
>  	ecx = 0;
>  	native_cpuid(&eax, &ebx, &ecx, &edx);
> -	if (!(eax & feature_mask))
> +	/* Check whether SEV or SME is supported */
> +	if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))

Hmm, checking CPUID at all before MSR_AMD64_SEV is flawed for SEV, e.g. the VMM
doesn't need to pass through CPUID to attack the guest; it can lie directly.
SEV-ES is protected by virtue of CPUID interception being reflected as #VC, which
effectively tells the guest that it's (probably) an SEV-ES guest and also gives
the guest the opportunity to sanity check the emulated CPUID values provided by
the VMM.
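
To illustrate the idea (hand-wavy sketch only; the function and the
ghcb_get_cpuid() helper below are made up, this is not the real sev-es.c code):

	static bool __init sev_es_cpuid_looks_sane(void)
	{
		u32 eax, ebx, ecx, edx;

		/*
		 * Under SEV-ES the CPUID intercept is reflected as #VC, so
		 * the guest fetches the emulated values itself.  This helper
		 * is a made-up stand-in for the real GHCB protocol.
		 */
		ghcb_get_cpuid(0x8000001f, 0, &eax, &ebx, &ecx, &edx);

		/*
		 * Taking #VC for CPUID at all strongly implies this is an
		 * SEV-ES guest, so a cleared SEV bit (bit 1) in the emulated
		 * leaf is a lie the guest can catch and refuse to run with.
		 */
		return eax & BIT(1);
	}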

In other words, this patch is flawed, but commit eab696d8e8b9 was also flawed by
conditioning the SEV path on CPUID.0x80000000.

Given that #VC can be handled cleanly, the kernel should be able to handle a #GP
at this point. So I think the proper fix is to change __rdmsr() to
native_read_msr_safe(), or an open-coded variant if necessary, and drop the
CPUID checks for SEV.
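
E.g. something like this (completely untested, and assumes the exception fixup
needed by native_read_msr_safe() is functional this early in boot):

	int err;

	/*
	 * Read the SEV MSR with a fault-safe accessor instead of gating it
	 * on CPUID, so a lying VMM can't hide SEV from the guest.  On CPUs
	 * without the MSR, e.g. the first generation Hygon Dhyana, the read
	 * faults and sev_status stays zero, i.e. SME-only handling.
	 */
	sev_status = native_read_msr_safe(MSR_AMD64_SEV, &err);
	if (err)
		sev_status = 0;
	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT
							     : AMD_SME_BIT;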

The other alternative is to admit that the VMM is still trusted for SEV guests
and take this patch as is (with a reworded changelog). This probably has my
vote; I don't see much value in pretending that the VMM can't exfiltrate data
from an SEV guest. In fact, a malicious VMM is probably more likely to get
access to interesting data by _not_ lying about SEV being enabled, because lying
about SEV itself will hose the guest sooner rather than later.

>  		return;
>
>  	me_mask = 1UL << (ebx & 0x3f);
>
> +	/* Check the SEV MSR whether SEV or SME is enabled */
> +	sev_status   = __rdmsr(MSR_AMD64_SEV);
> +	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
> +
>  	/* Check if memory encryption is enabled */
>  	if (feature_mask == AMD_SME_BIT) {
>  		/*
> --
> 2.23.0
>