Message-ID: <ZdLOjuCP2pDjhsJl@yzhao56-desk.sh.intel.com>
Date: Mon, 19 Feb 2024 11:44:14 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Paolo Bonzini <pbonzini@...hat.com>, <kvm@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, Friedrich Weber <f.weber@...xmox.com>,
	Kai Huang <kai.huang@...el.com>, Yuan Yao <yuan.yao@...ux.intel.com>,
	Xu Yilun <yilun.xu@...ux.intel.com>, Yu Zhang <yu.c.zhang@...ux.intel.com>,
	Chao Peng <chao.p.peng@...ux.intel.com>, Fuad Tabba <tabba@...gle.com>,
	Michael Roth <michael.roth@....com>, Isaku Yamahata <isaku.yamahata@...el.com>,
	David Matlack <dmatlack@...gle.com>
Subject: Re: [PATCH v4 3/4] KVM: x86/mmu: Move slot checks from
__kvm_faultin_pfn() to kvm_faultin_pfn()
On Fri, Feb 09, 2024 at 02:28:57PM -0800, Sean Christopherson wrote:
> Move the checks related to the validity of an access to a memslot from the
> inner __kvm_faultin_pfn() to its sole caller, kvm_faultin_pfn(). This
> allows emulating accesses to the APIC access page, which don't need to
> resolve a pfn, even if there is a relevant in-progress mmu_notifier
> invalidation. Ditto for accesses to KVM internal memslots from L2, which
> KVM also treats as emulated MMIO.
>
> More importantly, this will allow for future cleanup by having the
> "no memslot" case bail from kvm_faultin_pfn() very early on.
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 62 ++++++++++++++++++++++--------------------
> 1 file changed, 33 insertions(+), 29 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 50bfaa53f3f2..505fc7eef533 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4333,33 +4333,6 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> struct kvm_memory_slot *slot = fault->slot;
> bool async;
>
> - /*
> - * Retry the page fault if the gfn hit a memslot that is being deleted
> - * or moved. This ensures any existing SPTEs for the old memslot will
> - * be zapped before KVM inserts a new MMIO SPTE for the gfn.
> - */
> - if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
> - return RET_PF_RETRY;
> -
> - if (!kvm_is_visible_memslot(slot)) {
> - /* Don't expose private memslots to L2. */
> - if (is_guest_mode(vcpu)) {
> - fault->slot = NULL;
> - fault->pfn = KVM_PFN_NOSLOT;
> - fault->map_writable = false;
> - return RET_PF_CONTINUE;
> - }
> - /*
> - * If the APIC access page exists but is disabled, go directly
> - * to emulation without caching the MMIO access or creating a
> - * MMIO SPTE. That way the cache doesn't need to be purged
> - * when the AVIC is re-enabled.
> - */
> - if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT &&
> - !kvm_apicv_activated(vcpu->kvm))
> - return RET_PF_EMULATE;
> - }
> -
> if (fault->is_private)
> return kvm_faultin_pfn_private(vcpu, fault);
>
> @@ -4406,6 +4379,37 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
> smp_rmb();
>
> + if (!slot)
> + goto faultin_pfn;
> +
> + /*
> + * Retry the page fault if the gfn hit a memslot that is being deleted
> + * or moved. This ensures any existing SPTEs for the old memslot will
> + * be zapped before KVM inserts a new MMIO SPTE for the gfn.
> + */
> + if (slot->flags & KVM_MEMSLOT_INVALID)
> + return RET_PF_RETRY;
> +
> + if (!kvm_is_visible_memslot(slot)) {
> + /* Don't expose KVM's internal memslots to L2. */
> + if (is_guest_mode(vcpu)) {
> + fault->slot = NULL;
> + fault->pfn = KVM_PFN_NOSLOT;
> + fault->map_writable = false;
> + return RET_PF_CONTINUE;
Should this call kvm_handle_noslot_fault() instead of returning
RET_PF_CONTINUE? Returning RET_PF_CONTINUE directly from kvm_faultin_pfn()
looks like it bypasses the "!fault->slot" check after __kvm_faultin_pfn(),
so kvm_handle_noslot_fault() is no longer reached for L2 accesses to KVM's
internal memslots.
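
Something like the below, perhaps? Untested sketch; it assumes this
series already passes "access" down to kvm_faultin_pfn(), as the hunk
header above suggests:

		/* Don't expose KVM's internal memslots to L2. */
		if (is_guest_mode(vcpu)) {
			fault->slot = NULL;
			fault->pfn = KVM_PFN_NOSLOT;
			fault->map_writable = false;
			return kvm_handle_noslot_fault(vcpu, fault, access);
		}
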
> + }
> +
> + /*
> + * If the APIC access page exists but is disabled, go directly
> + * to emulation without caching the MMIO access or creating a
> + * MMIO SPTE. That way the cache doesn't need to be purged
> + * when the AVIC is re-enabled.
> + */
> + if (slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT &&
> + !kvm_apicv_activated(vcpu->kvm))
> + return RET_PF_EMULATE;
> + }
> +
> /*
> * Check for a relevant mmu_notifier invalidation event before getting
> * the pfn from the primary MMU, and before acquiring mmu_lock.
> @@ -4427,10 +4431,10 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> * *guaranteed* to need to retry, i.e. waiting until mmu_lock is held
> * to detect retry guarantees the worst case latency for the vCPU.
> */
> - if (!slot &&
> - mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn))
> + if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn))
> return RET_PF_RETRY;
>
> +faultin_pfn:
> ret = __kvm_faultin_pfn(vcpu, fault);
> if (ret != RET_PF_CONTINUE)
> return ret;
> --
> 2.43.0.687.g38aa6559b0-goog
>