Message-ID: <ZM18AAFj21Fo36hg@google.com>
Date: Fri, 4 Aug 2023 15:30:24 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yu Zhang <yu.c.zhang@...ux.intel.com>
Cc: David Stevens <stevensd@...omium.org>,
Marc Zyngier <maz@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
Peter Xu <peterx@...hat.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
kvm@...r.kernel.org
Subject: Re: [PATCH v7 4/8] KVM: x86/mmu: Migrate to __kvm_follow_pfn
On Wed, Jul 05, 2023, Yu Zhang wrote:
> On Tue, Jul 04, 2023 at 04:50:49PM +0900, David Stevens wrote:
> > From: David Stevens <stevensd@...omium.org>
> >
> > Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn.
Please turn up your changelog verbosity from ~2 to ~8. E.g. explain the transition
from async => FOLL_NOWAIT+KVM_PFN_ERR_NEEDS_IO, there's no reason to force readers
to suss that out on their own.
> > Signed-off-by: David Stevens <stevensd@...omium.org>
> > ---
> > arch/x86/kvm/mmu/mmu.c | 35 +++++++++++++++++++++++++----------
> > 1 file changed, 25 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index ec169f5c7dce..e44ab512c3a1 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4296,7 +4296,12 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
> > static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > {
> > struct kvm_memory_slot *slot = fault->slot;
> > - bool async;
> > + struct kvm_follow_pfn foll = {
> > + .slot = slot,
> > + .gfn = fault->gfn,
> > + .flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
> > + .allow_write_mapping = true,
> > + };
> >
> > /*
> > * Retry the page fault if the gfn hit a memslot that is being deleted
> > @@ -4325,12 +4330,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> > return RET_PF_EMULATE;
> > }
> >
> > - async = false;
> > - fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
> > - fault->write, &fault->map_writable,
> > - &fault->hva);
> > - if (!async)
> > - return RET_PF_CONTINUE; /* *pfn has correct page already */
> > + foll.flags |= FOLL_NOWAIT;
> > + fault->pfn = __kvm_follow_pfn(&foll);
> > +
> > + if (!is_error_noslot_pfn(fault->pfn))
> > + goto success;
> > +
> > + if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
> > + return RET_PF_CONTINUE;
>
> IIUC, FOLL_NOWAIT is set only when we want an async fault. So
> KVM_PFN_ERR_NEEDS_IO may not be necessary?
But FOLL_NOWAIT is set above.  This logic is essentially saying "bail immediately
if __kvm_follow_pfn() returned a fatal error".

A comment would definitely be helpful though.  How about?
/*
* If __kvm_follow_pfn() failed because I/O is needed to fault in the
* page, then either set up an asynchronous #PF to do the I/O, or if
* doing an async #PF isn't possible, retry __kvm_follow_pfn() with
	 * I/O allowed.  All other failures are fatal, i.e. retrying won't help.
*/
if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
return RET_PF_CONTINUE;