Message-Id: <d741b3a58769749b7873fea703c027a68b8e2e3d.1577462279.git.thomas.lendacky@amd.com>
Date: Fri, 27 Dec 2019 09:58:00 -0600
From: Tom Lendacky <thomas.lendacky@....com>
To: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Brijesh Singh <brijesh.singh@....com>
Subject: [PATCH v2] KVM: SVM: Override default MMIO mask if memory encryption is enabled

The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
faults when a guest performs MMIO. The AMD memory encryption support uses
a CPUID function to define the encryption bit position. Given this, it is
possible that these bits can conflict.
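
For illustration (not part of the patch): the encryption bit position is
reported in CPUID Fn8000_001F[EBX] bits 5:0, and the physical address
reduction in bits 11:6. A minimal user-space sketch, assuming a build with
GCC's <cpuid.h>, showing how the reported C-bit could land on KVM's
default MMIO bit:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Fn8000_001F is only valid if the max extended leaf covers it */
	if (__get_cpuid_max(0x80000000, NULL) < 0x8000001f)
		return 0;

	__cpuid_count(0x8000001f, 0, eax, ebx, ecx, edx);
	if (!(eax & 1)) {			/* EAX[0]: SME supported */
		printf("SME not supported\n");
		return 0;
	}

	unsigned int c_bit = ebx & 0x3f;	    /* EBX[5:0]: C-bit position */
	unsigned int reduction = (ebx >> 6) & 0x3f; /* EBX[11:6]: phys addr reduction */

	printf("C-bit %u, physical address reduction %u\n", c_bit, reduction);
	if (c_bit == 51)			/* KVM's default MMIO bit */
		printf("C-bit conflicts with the default MMIO mask\n");
	return 0;
}
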
Use svm_hardware_setup() to override the MMIO mask if memory encryption
support is enabled. When memory encryption is enabled, the physical
address width is reduced and the first bit after the last valid reduced
physical address bit will always be reserved. Use this bit as the MMIO
mask.
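
As a worked example with hypothetical numbers (a part reporting 48
physical address bits and an address reduction of 5 bits when memory
encryption is enabled): boot_cpu_data.x86_phys_bits would be 43, so the
override below computes

	mask = BIT_ULL(43) | BIT_ULL(0);	/* reserved bit 43 + present bit */

and any SPTE with those bits set triggers a nested page fault with
PFER.RSV = 1, which KVM then recognizes as an MMIO access.
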
Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
Suggested-by: Sean Christopherson <sean.j.christopherson@...el.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
---
 arch/x86/kvm/svm.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 122d4ce3b1ab..2cb834b5982a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1361,6 +1361,32 @@ static __init int svm_hardware_setup(void)
 		}
 	}
 
+	/*
+	 * The default MMIO mask is a single bit (excluding the present bit),
+	 * which could conflict with the memory encryption bit. Check for
+	 * memory encryption support and override the default MMIO masks if
+	 * it is enabled.
+	 */
+	if (cpuid_eax(0x80000000) >= 0x8000001f) {
+		u64 msr, mask;
+
+		rdmsrl(MSR_K8_SYSCFG, msr);
+		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
+			/*
+			 * The physical addressing width is reduced. The first
+			 * bit above the new physical addressing limit will
+			 * always be reserved. Use this bit and the present bit
+			 * to generate a page fault with PFER.RSV = 1.
+			 */
+			mask = BIT_ULL(boot_cpu_data.x86_phys_bits);
+			mask |= BIT_ULL(0);
+
+			kvm_mmu_set_mmio_spte_mask(mask, mask,
+						   PT_WRITABLE_MASK |
+						   PT_USER_MASK);
+		}
+	}
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)
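
For context (my paraphrase of KVM's MMIO SPTE check, not verbatim from
this kernel): because the same value is passed as both the MMIO mask and
the MMIO value, the check reduces to something like

	static bool is_mmio_spte(u64 spte)
	{
		return (spte & shadow_mmio_mask) == shadow_mmio_value;
	}

so an SPTE is treated as MMIO when the configured bits (here, the
reserved physical address bit and the present bit, plus KVM's internal
special-SPTE marker) are all set.
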
--
2.17.1