Message-ID: <20121127234203.GC8295@amt.cnet>
Date: Tue, 27 Nov 2012 21:42:03 -0200
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc: Avi Kivity <avi@...hat.com>, LKML <linux-kernel@...r.kernel.org>,
KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH 3/3] KVM: x86: improve reexecute_instruction
On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>
> >>
> >> - return false;
> >> +again:
> >> + page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
> >> +
> >> + /*
> >> + * if emulation was due to access to shadowed page table
> >> + * and it failed try to unshadow page and re-enter the
> >> + * guest to let CPU execute the instruction.
> >> + */
> >> + kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> >> + emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
> >
> > Can you explain what is the objective here?
> >
>
> Sure. :)
>
> The instruction emulation is caused by a fault access on cr3. After unprotecting
> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
> If it returns 1, the mmu cannot fix the mapping and we should report the error;
> otherwise it is fine to return to the guest and let it re-execute the
> instruction again.
>
> page_fault_count is used to avoid the race with other vcpus: after we
> unprotect the target page, another vcpu can enter the page fault path and
> make the page write-protected again.
>
> This way we can detect all the cases where the mmu mapping cannot be fixed.
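
The unprotect-and-retry scheme described above can be sketched as follows. This is a simplified, self-contained illustration, not the real KVM code: `page_fault_count`, `mmu_page_fault()`, and `unprotect_page()` are stand-ins for `kvm->arch.page_fault_count`, `vcpu->arch.mmu.page_fault()`, and `kvm_mmu_unprotect_page()`, and the single global counter models what the patch reads with ACCESS_ONCE().

```c
#include <stdbool.h>

/* Stand-in for kvm->arch.page_fault_count: bumped on every
 * page fault handled by the mmu. */
static int page_fault_count;

/* Stub for vcpu->arch.mmu.page_fault(): returns 0 when the mapping
 * was fixed, 1 when the mmu cannot fix it.  'fixable' lets a caller
 * simulate the unfixable case. */
static int mmu_page_fault_fixable = 1;
static int mmu_page_fault(void)
{
	page_fault_count++;	/* every fault bumps the counter */
	return mmu_page_fault_fixable ? 0 : 1;
}

/* Stub for kvm_mmu_unprotect_page(): drop write protection. */
static void unprotect_page(void) { }

/* Returns true if the guest should re-execute the instruction,
 * false if emulation must report the error. */
static bool retry_instruction(void)
{
	int snapshot;
again:
	snapshot = page_fault_count;	/* ACCESS_ONCE() in the patch */
	unprotect_page();
	if (mmu_page_fault() == 1)
		return false;		/* mmu cannot fix the mapping */
	/*
	 * If another vcpu faulted meanwhile and re-protected the page,
	 * the counter moved by more than our own fault: retry.
	 */
	if (snapshot + 1 != page_fault_count)
		goto again;
	return true;
}
```

With no concurrent faults the counter advances by exactly one (our own `mmu_page_fault()` call), so the snapshot check passes and the guest re-executes; an unfixable mapping reports failure instead.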
How about recording the gfn numbers of the shadow pages that have been
shadowed in the current pagefault run? (Which is cheap, compared to
shadowing these pages.)
If the failed instruction emulation is a write to one of these gfns, then
fail.
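
The alternative suggested here could look roughly like the sketch below. All names (`record_shadowed_gfn`, `emulation_can_retry`, the fixed-size array) are hypothetical; the real implementation would keep this state per vcpu and per pagefault run.

```c
#include <stdbool.h>
#include <stddef.h>

typedef unsigned long gfn_t;

/* Hypothetical record of gfns whose pages were shadowed during the
 * current pagefault run; a real version would live in the vcpu and
 * be reset at the start of each run. */
#define MAX_SHADOWED 8
static gfn_t shadowed_gfns[MAX_SHADOWED];
static size_t nr_shadowed;

/* Called when a page is shadowed in the current pagefault run. */
static void record_shadowed_gfn(gfn_t gfn)
{
	if (nr_shadowed < MAX_SHADOWED)
		shadowed_gfns[nr_shadowed++] = gfn;
}

/* On failed emulation of a write to 'write_gfn': if that gfn was
 * just shadowed in this run, unprotecting and retrying cannot make
 * progress, so report failure instead of retrying. */
static bool emulation_can_retry(gfn_t write_gfn)
{
	for (size_t i = 0; i < nr_shadowed; i++)
		if (shadowed_gfns[i] == write_gfn)
			return false;
	return true;
}
```

The point of the suggestion is that recording a handful of gfns is cheap relative to the cost of shadowing the pages, and it detects the unfixable case directly instead of relying on the page_fault_count heuristic.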
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/