Message-ID: <Y+WZoXYvacqx/+Yu@google.com>
Date: Fri, 10 Feb 2023 01:10:57 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Lai Jiangshan <jiangshan.ljs@...group.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org
Subject: Re: [PATCH V2 6/8] kvm: x86/mmu: Remove FNAME(invlpg)
On Tue, Feb 07, 2023, Lai Jiangshan wrote:
> Use FNAME(sync_spte) to share the code. This comes with a slight semantic
> change: a clean vTLB entry is kept.
...
> +static void __kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
> + gva_t gva, hpa_t root_hpa)
> +{
> + struct kvm_shadow_walk_iterator iterator;
> +
> + vcpu_clear_mmio_info(vcpu, gva);
> +
> + write_lock(&vcpu->kvm->mmu_lock);
> + for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
> + struct kvm_mmu_page *sp = sptep_to_sp(iterator.sptep);
> +
> + if (sp->unsync && *iterator.sptep) {
Please make the !0 change in a separate patch. It took me a while to connect the
dots and to understand what I suspect is a major motivation: sync_spte()
already has this check, i.e. the change is happening regardless, so might as well
avoid the indirect branch.
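E.g. assuming FNAME(sync_spte) in this series keeps the old "if (!sp->spt[i])"
check from FNAME(sync_page), i.e. looks roughly like (illustrative sketch, not
the actual patch):

  static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
  {
          /*
           * A zero SPTE has nothing to sync; bail and keep the clean vTLB
           * entry, which is the semantic change the changelog calls out.
           */
          if (!sp->spt[i])
                  return 0;

          /* ... fetch the gpte and do the actual sync ... */
  }

then checking *iterator.sptep in the caller is purely an optimization to skip
the indirect call, not a functional change, and splitting it out makes that
obvious.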
> + gfn_t gfn = kvm_mmu_page_get_gfn(sp, iterator.index);
> + int ret = mmu->sync_spte(vcpu, sp, iterator.index);
> +
> + if (ret < 0)
> + mmu_page_zap_pte(vcpu->kvm, sp, iterator.sptep, NULL);
> + if (ret)
> + kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
Why open code kvm_flush_remote_tlbs_sptep()? Does it actually shave enough
cycles to be visible?
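I.e. (untested, but IIRC the helper takes the sptep directly) this could simply
be:

	if (ret)
		kvm_flush_remote_tlbs_sptep(vcpu->kvm, iterator.sptep);

The only extra work the helper does is re-derive the sp and gfn from the sptep,
which should be pure noise next to the cost of the remote flush itself.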
If open coding really is justified, can you rebase on one of the two branches
below and then change this to kvm_flush_remote_tlbs_gfn()?
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/mmu
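i.e. end up with something like (again untested, just to illustrate):

	if (ret)
		kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, sp->role.level);

which IIRC is more or less what kvm_flush_remote_tlbs_sptep() boils down to,
minus re-deriving the gfn.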