Message-ID: <20200106224931.GB12879@linux.intel.com>
Date: Mon, 6 Jan 2020 14:49:31 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Tom Lendacky <thomas.lendacky@....com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH v2] KVM: SVM: Override default MMIO mask if memory
encryption is enabled

On Fri, Dec 27, 2019 at 09:58:00AM -0600, Tom Lendacky wrote:
> The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
> faults when a guest performs MMIO. The AMD memory encryption support uses
> a CPUID function to define the encryption bit position. Given this, it is
> possible that these bits can conflict.
>
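(For anyone following along: the encryption bit position is enumerated in
CPUID 0x8000001F, EBX[5:0].  Roughly, and this is just an illustrative
sketch using the kernel's cpuid helpers, not code from this patch:

	unsigned int ebx = cpuid_ebx(0x8000001f);
	unsigned int c_bit = ebx & 0x3f;	/* EBX[5:0]: C-bit position */

If the C-bit happens to land on bit 51, it collides with the default MMIO
mask, which is the conflict described above.)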
> Use svm_hardware_setup() to override the MMIO mask if memory encryption
> support is enabled. When memory encryption support is enabled the physical
> address width is reduced and the first bit after the last valid reduced
> physical address bit will always be reserved. Use this bit as the MMIO
> mask.
>
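The reduction is enumerated in the same leaf, EBX[11:6].  A rough sketch of
the arithmetic, with hypothetical numbers for illustration:

	unsigned int reduction = (cpuid_ebx(0x8000001f) >> 6) & 0x3f;

	/*
	 * Boot code subtracts the reduction from x86_phys_bits, e.g. a
	 * MAXPHYADDR of 48 with a reduction of 5 yields 43, making bit
	 * 43 the first physical address bit that is guaranteed reserved.
	 */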
> Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
> Suggested-by: Sean Christopherson <sean.j.christopherson@...el.com>
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
> ---
> arch/x86/kvm/svm.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 122d4ce3b1ab..2cb834b5982a 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1361,6 +1361,32 @@ static __init int svm_hardware_setup(void)
> }
> }
>
> + /*
> + * The default MMIO mask is a single bit (excluding the present bit),
> + * which could conflict with the memory encryption bit. Check for
> + * memory encryption support and override the default MMIO masks if
> + * it is enabled.
> + */
> + if (cpuid_eax(0x80000000) >= 0x8000001f) {
> + u64 msr, mask;
> +
> + rdmsrl(MSR_K8_SYSCFG, msr);
> + if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
> + /*
> + * The physical addressing width is reduced. The first
> + * bit above the new physical addressing limit will
> + * always be reserved. Use this bit and the present bit
> + * to generate a page fault with PFER.RSV = 1.
> + */
> + mask = BIT_ULL(boot_cpu_data.x86_phys_bits);
This doesn't handle the case where x86_phys_bits _isn't_ reduced by SME/SEV
on a future processor, i.e. x86_phys_bits==52.
After staring at things for a while, I think we can handle this issue with
minimal fuss by special casing MKTME in kvm_set_mmio_spte_mask(). I'll
send a patch, I have a related bug fix for kvm_set_mmio_spte_mask() that
touches the same code.
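
Something like the below is what I have in mind, i.e. keep bit 51 and drop
the present bit when there are no reserved PA bits.  Sketch only, untested,
and I'm assuming the shadow_phys_bits tracking from mmu.c is usable here:

	static void kvm_set_mmio_spte_mask(void)
	{
		u64 mask;

		/*
		 * Bit 51 is reserved so long as the supported physical
		 * address width is less than 52; use it plus the present
		 * bit to generate a page fault with PFER.RSV = 1.
		 */
		mask = BIT_ULL(51) | BIT_ULL(0);

		/*
		 * If there are no reserved PA bits, e.g. MKTME repurposes
		 * the upper bits, clear the present bit to disable the
		 * reserved-bit MMIO page fault.
		 */
		if (shadow_phys_bits == 52)
			mask &= ~BIT_ULL(0);

		kvm_mmu_set_mmio_spte_mask(mask, mask,
					   ACC_WRITE_MASK | ACC_USER_MASK);
	}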
> + mask |= BIT_ULL(0);
> +
> + kvm_mmu_set_mmio_spte_mask(mask, mask,
> + PT_WRITABLE_MASK |
> + PT_USER_MASK);
> + }
> + }
> +
> for_each_possible_cpu(cpu) {
> r = svm_cpu_init(cpu);
> if (r)
> --
> 2.17.1
>