Message-ID: <87a7423qwr.fsf@vitty.brq.redhat.com>
Date: Fri, 27 Mar 2020 13:48:04 +0100
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Junaid Shahid <junaids@...gle.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 2/3] KVM: x86: cleanup kvm_inject_emulated_page_fault

Paolo Bonzini <pbonzini@...hat.com> writes:
> On 26/03/20 14:41, Vitaly Kuznetsov wrote:
>> Paolo Bonzini <pbonzini@...hat.com> writes:
>>
>>> To reconstruct the kvm_mmu to be used for page fault injection, we
>>> can simply use fault->nested_page_fault. This matches how
>>> fault->nested_page_fault is assigned in the first place by
>>> FNAME(walk_addr_generic).
>>>
>>> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>>> ---
>>> arch/x86/kvm/mmu/mmu.c | 6 ------
>>> arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
>>> arch/x86/kvm/x86.c | 7 +++----
>>> 3 files changed, 4 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>> index e26c9a583e75..6250e31ac617 100644
>>> --- a/arch/x86/kvm/mmu/mmu.c
>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>> @@ -4353,12 +4353,6 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
>>> return kvm_read_cr3(vcpu);
>>> }
>>>
>>> -static void inject_page_fault(struct kvm_vcpu *vcpu,
>>> - struct x86_exception *fault)
>>> -{
>>> - vcpu->arch.mmu->inject_page_fault(vcpu, fault);
>>> -}
>>> -
>>
>> This is already gone with Sean's "KVM: x86: Consolidate logic for
>> injecting page faults to L1".
>>
>> It would probably make sense to have a combined series (or a branch on
>> kvm.git) to simplify testing efforts.
>
> Yes, these three patches replace part of Sean's series (the patch you mention
> and the next one, "KVM: x86: Sync SPTEs when injecting page/EPT fault
> into L1").
>
> I pushed the result to a branch named kvm-tlb-cleanup on kvm.git.
>
Thank you,
I've tested it with Hyper-V on both VMX and SVM with and without PV TLB
flush and nothing immediately blew up. I'm also observing a very nice
19000 -> 14000 cycles improvement on a tight cpuid loop test (with EVMCS
enabled).
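
(For reference, a "tight cpuid loop" test of this sort is usually just
CPUID executed in a loop bracketed by TSC reads; CPUID exits to the
hypervisor unconditionally, so cycles/iteration approximates the exit
round trip. A minimal sketch, not the actual harness used here:

	#include <stdint.h>
	#include <stdio.h>

	static inline uint64_t rdtsc(void)
	{
		uint32_t lo, hi;

		asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
		return ((uint64_t)hi << 32) | lo;
	}

	static inline void do_cpuid(uint32_t leaf)
	{
		uint32_t a, b, c, d;

		/* CPUID always causes a VM exit when run in a guest. */
		asm volatile("cpuid"
			     : "=a"(a), "=b"(b), "=c"(c), "=d"(d)
			     : "a"(leaf), "c"(0));
	}

	int main(void)
	{
		const int iters = 1000000;
		uint64_t start = rdtsc();
		int i;

		for (i = 0; i < iters; i++)
			do_cpuid(0);

		printf("%lu cycles/iteration\n",
		       (unsigned long)((rdtsc() - start) / iters));
		return 0;
	}

)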
--
Vitaly