Message-ID: <1440925898-23440-3-git-send-email-mst@redhat.com>
Date: Sun, 30 Aug 2015 12:12:47 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>
Subject: [PATCH RFC 2/3] svm: allow ioeventfd for NPT page faults
MMIO is slightly slower than port IO because it goes through the page
tables, so the CPU must do a page walk on each access.
This overhead is normally hidden by the TLB, but not for KVM MMIO,
where the PTEs are marked reserved and are therefore never cached.
As ioeventfd memory is never read by the guest, make it possible to
back ioeventfds with read-only pages on the host instead.
Their translations can then be cached in the TLB, which finally makes
MMIO as fast as port IO.
Warning: untested.
Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
---
arch/x86/kvm/svm.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8e0c084..6422fac 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1812,6 +1812,11 @@ static int pf_interception(struct vcpu_svm *svm)
 	switch (svm->apf_reason) {
 	default:
 		error_code = svm->vmcb->control.exit_info_1;
+		if (!kvm_io_bus_write(&svm->vcpu, KVM_FAST_MMIO_BUS,
+				      fault_address, 0, NULL)) {
+			skip_emulated_instruction(&svm->vcpu);
+			return 1;
+		}
 		trace_kvm_page_fault(fault_address, error_code);
 		if (!npt_enabled && kvm_event_needs_reinjection(&svm->vcpu))
--
MST