Message-ID: <20150831102944-mutt-send-email-mst@redhat.com>
Date: Mon, 31 Aug 2015 10:46:22 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Xiao Guangrong <guangrong.xiao@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH RFC 1/3] vmx: allow ioeventfd for EPT violations
On Mon, Aug 31, 2015 at 10:53:58AM +0800, Xiao Guangrong wrote:
>
>
> On 08/30/2015 05:12 PM, Michael S. Tsirkin wrote:
> >Even when we skip data decoding, MMIO is slightly slower
> >than port IO because it goes through the page tables, so the
> >CPU must do a page walk on each access.
> >
> >This overhead is normally masked by the TLB cache, but not
> >for KVM MMIO, where PTEs are marked as reserved and so are
> >never cached.
> >
> >As ioeventfd memory is never read, make it possible to use
> >read-only (RO) pages on the host for ioeventfds instead.
>
> I like this idea.
>
> >The result is that TLBs are cached, which finally makes MMIO
> >as fast as port IO.
>
> What does "TLBs are cached" mean? Even after applying the patch,
> no new type of TLB entry can be cached.
The Intel manual says:

    No guest-physical mappings or combined mappings are created with
    information derived from EPT paging-structure entries that are not
    present (bits 2:0 are all 0) or that are misconfigured (see Section
    28.2.3.1).

    No combined mappings are created with information derived from guest
    paging-structure entries that are not present or that set reserved
    bits.

Thus mappings that result in an EPT violation are created and may be
cached in the TLB; this makes EPT violation preferable to EPT
misconfiguration.
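To make the distinction concrete, here is a minimal, self-contained
sketch; it is illustrative only, not KVM code (the bit constants mirror
the SDM's EPT PTE layout, and real KVM MMIO SPTEs additionally set
reserved bits to force the misconfiguration):

    #include <stdint.h>
    #include <stdio.h>

    /* EPT PTE permission bits per the SDM. */
    #define EPT_READ  (1ULL << 0)
    #define EPT_WRITE (1ULL << 1)
    #define EPT_EXEC  (1ULL << 2)

    /* Per the SDM text quoted above, a mapping may only be cached if
     * the entry is present, i.e. bits 2:0 are not all zero. */
    static int cacheable(uint64_t pte)
    {
            return (pte & (EPT_READ | EPT_WRITE | EPT_EXEC)) != 0;
    }

    int main(void)
    {
            uint64_t misconfig_pte = 0;        /* today's MMIO: never cached */
            uint64_t readonly_pte  = EPT_READ; /* present, write bit clear:
                                                  cached; writes still fault */

            printf("misconfig cacheable: %d\n", cacheable(misconfig_pte)); /* 0 */
            printf("read-only cacheable: %d\n", cacheable(readonly_pte));  /* 1 */
            return 0;
    }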
> >
> >Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> >---
> > arch/x86/kvm/vmx.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> >diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >index 9d1bfd3..ed44026 100644
> >--- a/arch/x86/kvm/vmx.c
> >+++ b/arch/x86/kvm/vmx.c
> >@@ -5745,6 +5745,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
> > vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
> >
> > gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> >+ if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> >+ skip_emulated_instruction(vcpu);
> >+ return 1;
> >+ }
> >+
>
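For reference, the KVM_FAST_MMIO_BUS entries matched above are
registered by userspace through the KVM_IOEVENTFD ioctl. A rough sketch
of such a registration, assuming a VM fd and an illustrative doorbell
GPA (the zero-length form requires KVM_CAP_IOEVENTFD_NO_LENGTH):

    #include <linux/kvm.h>
    #include <string.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>

    /* Sketch: register a wildcard, zero-length MMIO ioeventfd so that
     * guest writes to `gpa` are matched on KVM_FAST_MMIO_BUS and simply
     * kick an eventfd, with no data decoding. */
    static int register_fast_mmio_doorbell(int vm_fd, __u64 gpa)
    {
            struct kvm_ioeventfd ioev;
            int efd = eventfd(0, EFD_NONBLOCK);

            if (efd < 0)
                    return -1;

            memset(&ioev, 0, sizeof(ioev));
            ioev.addr  = gpa;   /* guest-physical doorbell address */
            ioev.len   = 0;     /* zero length: written data is ignored */
            ioev.fd    = efd;
            ioev.flags = 0;     /* MMIO (not PIO), no datamatch */

            if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
                    return -1;

            return efd;         /* read/poll this fd to observe guest kicks */
    }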
> I am afraid that the common page fault entry point is not a good place
> to do this work.
Why isn't it?
> Would moving it to kvm_handle_bad_page() work? The difference is that
> the workload of fast_page_fault() is included, but that is light enough,
> and MMIO exits should not be very frequent, so I think it's okay.
That was supposed to be a slow path; I doubt it'll work well without
major code restructuring.

IIUC, by design everything that does not go through fast_page_fault is
a slow path that is only taken rarely.

But in this case the page stays read-only, so we need a new fast path
through the code.
--
MST