Message-ID: <YbPBy5yvAmPTjv+I@google.com>
Date: Fri, 10 Dec 2021 21:08:27 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Lai Jiangshan <laijs@...ux.alibaba.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
Xiao Guangrong <guangrong.xiao@...ux.intel.com>
Subject: Re: [PATCH 17/15] KVM: X86: Ensure pae_root to be reconstructed for
shadow paging if the guest PDPTEs is changed
On Fri, Dec 10, 2021, Sean Christopherson wrote:
> On Thu, Dec 09, 2021, Paolo Bonzini wrote:
> > On 12/8/21 01:15, Sean Christopherson wrote:
> > > > @@ -832,8 +832,14 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
> > > >  	if (memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs))) {
> > > >  		memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
> > > >  		kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
> > > > -		/* Ensure the dirty PDPTEs to be loaded. */
> > > > -		kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
> > > > +		/*
> > > > +		 * Ensure the dirty PDPTEs to be loaded for VMX with EPT
> > > > +		 * enabled or pae_root to be reconstructed for shadow paging.
> > > > +		 */
> > > > +		if (tdp_enabled)
> > > > +			kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
> > > > +		else
> > > > +			kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, KVM_MMU_ROOT_CURRENT);
> > > Shouldn't matter since it's legacy shadow paging, but @mmu should be used instead
> > > of vcpu->arch.mmu.
> >
> > In kvm/next actually there's no mmu parameter to load_pdptrs, so it's okay
> > to keep vcpu->arch.mmu.
> >
> > > To avoid a dependency on the previous patch, I think it makes sense to have this be:
> > >
> > > 	if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
> > > 		kvm_mmu_free_roots(vcpu, mmu, KVM_MMU_ROOT_CURRENT);
> > >
> > > before the memcpy().
> > >
> > > Then we can decide independently whether it's safe to skip KVM_REQ_LOAD_MMU_PGD
> > > when the PDPTRs are unchanged with respect to the MMU.
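
FWIW, spelled out, that ordering would look roughly like this (untested sketch,
assuming the load_pdptrs() variant from this series that still takes @mmu):

	/*
	 * For shadow paging, free the current root before the cached
	 * PDPTRs are updated, so that pae_root gets reconstructed.
	 */
	if (!tdp_enabled && memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs)))
		kvm_mmu_free_roots(vcpu, mmu, KVM_MMU_ROOT_CURRENT);

	if (memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs))) {
		memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
		kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
		/* Whether this can be gated on the memcmp() is a separate question. */
		kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
	}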
> >
> > Do you disagree that there's already an invariant that the PDPTRs can only
> > be dirty if KVM_REQ_LOAD_MMU_PGD is pending, and that therefore a previous
> > change to the PDPTRs would have triggered KVM_REQ_LOAD_MMU_PGD?
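
To restate the invariant being asserted: the only path that dirties the cached
PDPTRs also requests the PGD load, i.e. in the hunk quoted above (annotated
sketch of the argument, not new code):

	if (memcmp(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs))) {
		/* The cached PDPTRs are only ever dirtied here... */
		memcpy(mmu->pdptrs, pdpte, sizeof(mmu->pdptrs));
		kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
		/*
		 * ...and this same path requests the PGD load, so the claim
		 * is that an unchanged memcmp() means any earlier change
		 * already triggered KVM_REQ_LOAD_MMU_PGD.
		 */
		kvm_make_request(KVM_REQ_LOAD_MMU_PGD, vcpu);
	}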
>
> What I think is moot, because commit 24cd19a28cb7 ("KVM: X86: Update mmu->pdptrs
> only when it is changed") breaks nested VMs with EPT in L0 and PAE shadow paging
> in L2. Reproducing is trivial, just disable EPT in L1 and run a VM. I haven't
Doh, s/L2/L1
> investigated how it breaks things, because why it's broken is secondary for me.
>
> My primary concern is that we would even consider optimizing the PDPTR logic without
> a mountain of evidence that any patch is correct for all scenarios. We had to add
> an entire ioctl() just to get PDPTRs functional. This apparently wasn't validated
> against a simple use case, let alone against things like migration with nested VMs,
> multiple L2s, etc...