Message-Id: <20190912235603.18954-1-sean.j.christopherson@intel.com>
Date: Thu, 12 Sep 2019 16:56:03 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Fuqian Huang <huangfq.daxian@...il.com>
Subject: [PATCH] KVM: x86: Handle unexpected MMIO accesses using master abort semantics

Use master abort semantics, i.e. reads return all ones and writes are
dropped, to handle unexpected MMIO accesses when reading guest memory
instead of returning X86EMUL_IO_NEEDED, which in turn gets interpreted
as a guest page fault.
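
For reference, "master abort" is the PCI term for an access that no
device claims; a minimal sketch of the resulting semantics (is_write,
val and bytes are placeholder names here, not code from this patch):

	if (!is_write)
		memset(val, 0xff, bytes);	/* reads return all ones */
	/* else: the write is silently discarded */
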
Emulation of certain instructions, notably VMX instructions, involves
reading or writing guest memory without going through the emulator.
These emulation flows are not equipped to handle MMIO accesses as no
sane and properly functioning guest kernel will target MMIO with such
instructions, and so simply inject a page fault in response to
X86EMUL_IO_NEEDED.
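
For example, the nested VMX code reads a vmpointer operand with a flow
along these lines (paraphrased sketch of nested_vmx_get_vmptr() in
arch/x86/kvm/vmx/nested.c; the comment is added here and the exact code
may differ by kernel version):

	gva_t gva;
	struct x86_exception e;

	if (get_vmx_mem_address(vcpu, vmcs_read64(EXIT_QUALIFICATION),
				vmcs_read32(VMX_INSTRUCTION_INFO), false, &gva))
		return 1;

	if (kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e)) {
		/*
		 * Any non-zero return, including X86EMUL_IO_NEEDED for an
		 * operand that targets MMIO, is blindly treated as a #PF.
		 */
		kvm_inject_page_fault(vcpu, &e);
		return 1;
	}
	return 0;
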
While not 100% correct, using master abort semantics is at least
sometimes correct, e.g. non-existent MMIO accesses do actually master
abort, whereas injecting a page fault is always wrong, i.e. the issue
lies in the physical address domain, not in the virtual to physical
translation.

Apply the logic to kvm_write_guest_virt_system() in addition to
replacing existing #PF logic in kvm_read_guest_virt(), as VMPTRST uses
the former, i.e. can also leak a host stack address.
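
Concretely, the exception struct is an uninitialized stack local in the
caller and is only written when the access actually faults, so prior to
this patch the VMPTRST flow amounted to the following (annotated sketch
of handle_vmptrst(), not verbatim; gva is computed as in the previous
sketch):

	gpa_t current_vmptr = to_vmx(vcpu)->nested.current_vmptr;
	struct x86_exception e;	/* never zeroed */

	if (kvm_write_guest_virt_system(vcpu, gva, (void *)&current_vmptr,
					sizeof(gpa_t), &e)) {
		/*
		 * On X86EMUL_IO_NEEDED, 'e' was never filled in, i.e. this
		 * shoves stale host stack data into the guest's CR2 and
		 * error code.
		 */
		kvm_inject_page_fault(vcpu, &e);
		return 1;
	}
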
Reported-by: Fuqian Huang <huangfq.daxian@...il.com>
Cc: stable@...r.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
---
 arch/x86/kvm/x86.c | 40 +++++++++++++++++++++++++++++++---------
 1 file changed, 31 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b4cfd786d0b6..d1d7e9fac17a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5234,16 +5234,24 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 			       struct x86_exception *exception)
 {
 	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	int r;
+
+	r = kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
+				       exception);
 
 	/*
-	 * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED
-	 * is returned, but our callers are not ready for that and they blindly
-	 * call kvm_inject_page_fault.  Ensure that they at least do not leak
-	 * uninitialized kernel stack memory into cr2 and error code.
+	 * FIXME: this should technically call out to userspace to handle the
+	 * MMIO access, but our callers are not ready for that, so emulate
+	 * master abort behavior instead, i.e. reads return all ones.
 	 */
-	memset(exception, 0, sizeof(*exception));
-	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
-					  exception);
+	if (r == X86EMUL_IO_NEEDED) {
+		memset(val, 0xff, bytes);
+		return 0;
+	}
+	if (r == X86EMUL_PROPAGATE_FAULT)
+		return -EFAULT;
+	WARN_ON_ONCE(r);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_virt);
 
@@ -5317,11 +5325,25 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
 int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
 				unsigned int bytes, struct x86_exception *exception)
 {
+	int r;
+
 	/* kvm_write_guest_virt_system can pull in tons of pages. */
 	vcpu->arch.l1tf_flush_l1d = true;
 
-	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
-					   PFERR_WRITE_MASK, exception);
+	r = kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
+					PFERR_WRITE_MASK, exception);
+
+	/*
+	 * FIXME: this should technically call out to userspace to handle the
+	 * MMIO access, but our callers are not ready for that, so emulate
+	 * master abort behavior instead, i.e. writes are dropped.
+	 */
+	if (r == X86EMUL_IO_NEEDED)
+		return 0;
+	if (r == X86EMUL_PROPAGATE_FAULT)
+		return -EFAULT;
+	WARN_ON_ONCE(r);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);
 
--
2.22.0