Message-ID: <20201007205552.GB2138@linux.intel.com>
Date: Wed, 7 Oct 2020 13:55:52 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Cannon Matthews <cannonmatthews@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
Peter Feiner <pfeiner@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 10/22] kvm: mmu: Add TDP MMU PF handler
On Tue, Oct 06, 2020 at 03:33:21PM -0700, Ben Gardon wrote:
> On Wed, Sep 30, 2020 at 9:37 AM Sean Christopherson
> <sean.j.christopherson@...el.com> wrote:
> >
> > On Fri, Sep 25, 2020 at 02:22:50PM -0700, Ben Gardon wrote:
> > > @@ -4113,8 +4088,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > > if (page_fault_handle_page_track(vcpu, error_code, gfn))
> > > return RET_PF_EMULATE;
> > >
> > > - if (fast_page_fault(vcpu, gpa, error_code))
> > > - return RET_PF_RETRY;
> > > + if (!is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> > > + if (fast_page_fault(vcpu, gpa, error_code))
> > > + return RET_PF_RETRY;
> >
> > It'll probably be easier to handle is_tdp_mmu() in fast_page_fault().
>
> I'd prefer to keep this check here because then in the fast page fault
> path, we can just handle the case where we do have a tdp mmu root with
> the tdp mmu fast pf handler and it'll mirror the split below with
> __direct_map and the TDP MMU PF handler.
Hmm, what about adding wrappers for these few cases where TDP MMU splits
cleanly from the existing paths? The thought being that it would keep the
control flow somewhat straightforward, and might also help us keep the two
paths aligned (more below).
> > >
> > > r = mmu_topup_memory_caches(vcpu, false);
> > > if (r)
> > > @@ -4139,8 +4115,14 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > > r = make_mmu_pages_available(vcpu);
> > > if (r)
> > > goto out_unlock;
> > > - r = __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
> > > - prefault, is_tdp && lpage_disallowed);
> > > +
> > > + if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> > > + r = kvm_tdp_mmu_page_fault(vcpu, write, map_writable, max_level,
> > > + gpa, pfn, prefault,
> > > + is_tdp && lpage_disallowed);
> > > + else
> > > + r = __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
> > > + prefault, is_tdp && lpage_disallowed);
Somewhat tangentially related to the above, it feels like the TDP MMU helper
here would be better named tdp_mmu_map() or so. KVM has already done the
"fault" part, in that it has faulted in the page (if relevant) and obtained
a pfn. What's left is the actual insertion into the TDP page tables.
And again related to the helper, ideally tdp_mmu_map() and __direct_map()
would have identical prototypes. Ditto for the fast page fault paths. In
theory, that would allow the compiler to generate identical preamble, with
only the final check being different. And if the compiler isn't smart enough
to do that on its own, we might even make the wrapper non-inline, with an
"unlikely" annotation to coerce the compiler to generate a tail call for the
preferred path.
> > >
> > > out_unlock:
> > > spin_unlock(&vcpu->kvm->mmu_lock);