Message-Id: <20200619153925.79106-11-mgamal@redhat.com>
Date: Fri, 19 Jun 2020 17:39:24 +0200
From: Mohammed Gamal <mgamal@...hat.com>
To: kvm@...r.kernel.org, pbonzini@...hat.com
Cc: linux-kernel@...r.kernel.org, vkuznets@...hat.com,
sean.j.christopherson@...el.com, wanpengli@...cent.com,
jmattson@...gle.com, joro@...tes.org, thomas.lendacky@....com,
babu.moger@....com, Mohammed Gamal <mgamal@...hat.com>
Subject: [PATCH v2 10/11] KVM: SVM: Add guest physical address check in NPF/PF interception

Check the guest physical address against the guest's maximum physical
address (MAXPHYADDR). If the guest's physical address exceeds the
maximum (i.e. has reserved bits set), inject a guest page fault with
PFERR_RSVD_MASK set.
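
For reference, the "exceeds the maximum" test used in the NPF hunk
below (kvm_mmu_is_illegal_gpa) amounts to a comparison against the
guest's MAXPHYADDR; a rough sketch of the idea, with an illustrative
helper name rather than the exact code:

	static inline bool gpa_has_reserved_bits(struct kvm_vcpu *vcpu, gpa_t gpa)
	{
		/* Any address bit at or above the guest's MAXPHYADDR is reserved. */
		return gpa >= BIT_ULL(cpuid_maxphyaddr(vcpu));
	}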

Similar to VMX, this has to be done both in the NPF and in the page
fault interceptions, as there are complications in both cases with
respect to the computation of the correct error code.

For NPF interceptions, unfortunately the only possibility is to
emulate, because the access type in the exit information (EXITINFO1)
might refer to an access to a paging structure, rather than to the
access performed by the program.
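
For reference, the NPF error code (exit_info_1) does say whether the
fault was taken on the final guest physical address or on a guest
page-table access, but its access-type bits describe that (possibly
page-table) access rather than the program's:

	/* As defined in arch/x86/include/asm/kvm_host.h: */
	#define PFERR_GUEST_FINAL_BIT	32
	#define PFERR_GUEST_PAGE_BIT	33
	#define PFERR_GUEST_FINAL_MASK	(1ULL << PFERR_GUEST_FINAL_BIT)
	#define PFERR_GUEST_PAGE_MASK	(1ULL << PFERR_GUEST_PAGE_BIT)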

Trapping page faults, instead, is needed in order to correct the error
code; the access type can be obtained from the original error code and
passed to gva_to_gpa. The corrections required in the error code are
subtle. For example, imagine that a PTE for a supervisor page has a
reserved bit set. On a supervisor-mode access, the NPF path would
trigger. However, on a user-mode access the processor will not notice
the reserved bit and will not include PFERR_RSVD_MASK in the error
code.
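
Conceptually, the fixup performed by kvm_fixup_and_inject_pf_error
(added earlier in this series and used in the pf_interception hunk
below) works along these lines; this is a simplified sketch with an
illustrative name, not the exact helper body:

	static void fixup_and_inject_pf(struct kvm_vcpu *vcpu, gva_t gva,
					u16 error_code)
	{
		struct x86_exception fault;

		/*
		 * Redo the walk with the access type taken from the original
		 * error code.  If the GVA maps to an illegal GPA, gva_to_gpa
		 * fails and fills in a fault with PFERR_RSVD_MASK set;
		 * otherwise keep the error code reported by the hardware.
		 */
		if (vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, error_code,
						    &fault) != UNMAPPED_GVA) {
			fault.vector = PF_VECTOR;
			fault.error_code_valid = true;
			fault.error_code = error_code;
			fault.address = gva;
			fault.nested_page_fault = false;
		}
		vcpu->arch.walk_mmu->inject_page_fault(vcpu, &fault);
	}
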
CC: Tom Lendacky <thomas.lendacky@....com>
CC: Babu Moger <babu.moger@....com>
Signed-off-by: Mohammed Gamal <mgamal@...hat.com>
---
arch/x86/kvm/svm/svm.c | 11 +++++++++++
arch/x86/kvm/svm/svm.h | 2 +-
2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 05412818027d..ec3224a2e7c2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1702,6 +1702,12 @@ static int pf_interception(struct vcpu_svm *svm)
 	u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
 	u64 error_code = svm->vmcb->control.exit_info_1;
 
+	if (npt_enabled && !svm->vcpu.arch.apf.host_apf_flags) {
+		kvm_fixup_and_inject_pf_error(&svm->vcpu,
+					      fault_address, error_code);
+		return 1;
+	}
+
 	return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
 			static_cpu_has(X86_FEATURE_DECODEASSISTS) ?
 			svm->vmcb->control.insn_bytes : NULL,
@@ -1714,6 +1720,11 @@ static int npf_interception(struct vcpu_svm *svm)
 	u64 error_code = svm->vmcb->control.exit_info_1;
 
 	trace_kvm_page_fault(fault_address, error_code);
+
+	/* Check if guest gpa doesn't exceed physical memory limits */
+	if (unlikely(kvm_mmu_is_illegal_gpa(&svm->vcpu, fault_address)))
+		return kvm_emulate_instruction(&svm->vcpu, 0);
+
 	return kvm_mmu_page_fault(&svm->vcpu, fault_address, error_code,
 			static_cpu_has(X86_FEATURE_DECODEASSISTS) ?
 			svm->vmcb->control.insn_bytes : NULL,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 2b7469f3db0e..12b502e36dbd 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -348,7 +348,7 @@ static inline bool gif_set(struct vcpu_svm *svm)
 
 static inline bool svm_need_pf_intercept(struct vcpu_svm *svm)
 {
-	return !npt_enabled;
+	return !npt_enabled || cpuid_maxphyaddr(&svm->vcpu) < boot_cpu_data.x86_phys_bits;
 }
 
 /* svm.c */
--
2.26.2