Message-ID: <5801d8ea32a633abbfb8fb59380ec957caa03229.camel@intel.com>
Date: Thu, 7 Sep 2023 22:34:49 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "Christopherson,, Sean" <seanjc@...gle.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Zhao, Yan Y" <yan.y.zhao@...el.com>
Subject: Re: [PATCH 2/2] KVM: x86/mmu: Retry fault before acquiring mmu_lock if mapping is changing
On Thu, 2023-09-07 at 07:45 -0700, Sean Christopherson wrote:
> On Wed, Sep 06, 2023, Kai Huang wrote:
> > On Thu, 2023-08-24 at 19:07 -0700, Sean Christopherson wrote:
> > > ---
> > > arch/x86/kvm/mmu/mmu.c | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index 1a5a1e7d1eb7..8e2e07ed1a1b 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -4334,6 +4334,9 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> > > if (unlikely(!fault->slot))
> > > return kvm_handle_noslot_fault(vcpu, fault, access);
> > >
> > > + if (mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva))
> > > + return RET_PF_RETRY;
> > > +
> >
> > ... Perhaps a comment saying this is to avoid unnecessary MMU lock
> > contention would be nice.  Otherwise it's not obvious why the check is
> > needed, given is_page_fault_stale() is called later under the MMU lock
> > anyway.  I suppose people only tend to run git blame when they cannot
> > find the answer in the code :-)
>
> Agreed, will add.
>
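For the record, the below is roughly the comment I had in mind (the
exact wording is just my suggestion, of course):

	/*
	 * Check for a relevant mmu_notifier invalidation event before
	 * acquiring mmu_lock, to avoid contending mmu_lock for a fault
	 * that is destined to be retried anyway.  A stale fault is
	 * still re-checked by is_page_fault_stale() under mmu_lock.
	 */
	if (mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva))
		return RET_PF_RETRY;
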
> > > return RET_PF_CONTINUE;
> > > }
> > >
> >
> > Btw, fault->mmu_seq is currently set in kvm_faultin_pfn(), which runs
> > after fast_page_fault().  Conceptually, should this check be moved to
> > before fast_page_fault(), since I assume the range zapping should also
> > apply to the cases that fast_page_fault() handles?
>
> Nope, fast_page_fault() doesn't need to "manually" detect invalidated SPTEs because
> it only modifies shadow-present SPTEs and does so with an atomic CMPXCHG. If a
> SPTE is zapped by an mmu_notifier event (or anything else), the CMPXCHG will fail
> and fast_page_fault() will see the !PRESENT SPTE on the next retry and bail.
Ah yes. Thanks.
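
Just to write down my understanding for the archives, below is a toy
model of the fast-path behaviour you describe.  It is standalone C with
made-up names and stand-in SPTE bits, not the actual kernel code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the real SPTE bits, purely for illustration. */
#define SPTE_PRESENT	(UINT64_C(1) << 0)
#define SPTE_WRITABLE	(UINT64_C(1) << 1)

/*
 * Only a shadow-present SPTE is modified, and only via an atomic
 * compare-and-exchange: a concurrent zap either makes the cmpxchg
 * fail, or makes the next iteration observe !PRESENT and bail.
 */
static bool toy_fast_page_fault(_Atomic uint64_t *sptep)
{
	for (int retries = 4; retries > 0; retries--) {
		uint64_t spte = atomic_load(sptep);

		if (!(spte & SPTE_PRESENT))
			return false;	/* zapped: take the slow path */

		uint64_t expected = spte;

		/* Fails if the SPTE changed (e.g. was zapped) since the load. */
		if (atomic_compare_exchange_strong(sptep, &expected,
						   spte | SPTE_WRITABLE))
			return true;
	}
	return false;
}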