Message-ID: <20110620161432.GA17130@amt.cnet>
Date: Mon, 20 Jun 2011 13:14:32 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
Cc: Avi Kivity <avi@...hat.com>, LKML <linux-kernel@...r.kernel.org>,
KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH 04/15] KVM: MMU: cache mmio info on page fault path
On Tue, Jun 07, 2011 at 09:00:30PM +0800, Xiao Guangrong wrote:
> If the page fault is caused by MMIO, we can cache the MMIO info; later,
> we do not need to walk the guest page table and can quickly tell that it
> is an MMIO fault while we emulate the MMIO instruction.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    5 +++
>  arch/x86/kvm/mmu.c              |   21 +++++----------
>  arch/x86/kvm/mmu.h              |   23 +++++++++++++++++
>  arch/x86/kvm/paging_tmpl.h      |   21 ++++++++++-----
>  arch/x86/kvm/x86.c              |   52 ++++++++++++++++++++++++++++++--------
>  arch/x86/kvm/x86.h              |   36 +++++++++++++++++++++++++++
>  6 files changed, 126 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d167039..326af42 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -414,6 +414,11 @@ struct kvm_vcpu_arch {
>  	u64 mcg_ctl;
>  	u64 *mce_banks;
>  
> +	/* Cache MMIO info */
> +	u64 mmio_gva;
> +	unsigned access;
> +	gfn_t mmio_gfn;
> +
>  	/* used for guest single stepping over the given code position */
>  	unsigned long singlestep_rip;
>  
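For illustration, a one-entry cache built around the fields quoted above
would be filled on the fault path and consulted on the emulation path
roughly like this (a minimal sketch; types are stubbed and the helper
names are illustrative, not taken from the patch):

#include <stdbool.h>

typedef unsigned long long u64;
typedef u64 gva_t;
typedef u64 gfn_t;

#define PAGE_SHIFT	12
#define PAGE_MASK	(~((1ULL << PAGE_SHIFT) - 1))

struct mmio_cache {
	/* mirrors the kvm_vcpu_arch fields added above */
	u64 mmio_gva;
	unsigned access;
	gfn_t mmio_gfn;
};

/* Fault path: remember that this gva/gfn turned out to be MMIO. */
static void cache_mmio_info(struct mmio_cache *c, gva_t gva, gfn_t gfn,
			    unsigned access)
{
	c->mmio_gva = gva & PAGE_MASK;
	c->access = access;
	c->mmio_gfn = gfn;
}

/*
 * Emulation path: on a hit we already know the access is MMIO and the
 * guest page-table walk can be skipped; on a miss we fall back to the
 * normal walk.
 */
static bool match_mmio_gva(struct mmio_cache *c, gva_t gva, unsigned access)
{
	return c->mmio_gva &&
	       c->mmio_gva == (gva & PAGE_MASK) &&
	       (c->access & access) == access;
}
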
Why are you not implementing the original idea of caching the MMIO
attribute of an address in the spte? That solution reaches wider than
a one-entry cache, and was proposed to cope with a large number of
memslots. If the access pattern alternates between different addresses,
this one-entry cache is doomed.
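
For illustration, the spte-based idea would mark MMIO mappings in the
spte itself, typically with a bit pattern the hardware treats as
reserved, so a fault on any MMIO address is recognizable without a
guest page-table walk. A sketch under that assumption (the mask value
and helper names here are hypothetical):

#include <stdbool.h>

typedef unsigned long long u64;
typedef u64 gfn_t;

/*
 * Hypothetical marker built from bits the hardware treats as reserved:
 * an access through such an spte faults with a reserved-bit violation,
 * which by itself identifies the address as MMIO.
 */
#define MMIO_SPTE_MASK	(3ULL << 52)

static u64 make_mmio_spte(gfn_t gfn, unsigned access)
{
	/* stash the gfn and access rights in otherwise-unused bits */
	return MMIO_SPTE_MASK | (gfn << 12) | (access & 0x7ULL);
}

static bool is_mmio_spte(u64 spte)
{
	return (spte & MMIO_SPTE_MASK) == MMIO_SPTE_MASK;
}

Because the marker lives in the page tables rather than in a per-vcpu
variable, it scales to any number of MMIO addresses instead of only
the most recently faulting one.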